From noreply at buildbot.pypy.org Tue Nov 1 08:29:58 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 08:29:58 +0100 (CET) Subject: [pypy-commit] pypy default: "Fix" this test. Message-ID: <20111101072958.9CDF9820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48638:bfd8b80c9117 Date: 2011-11-01 07:29 +0000 http://bitbucket.org/pypy/pypy/changeset/bfd8b80c9117/ Log: "Fix" this test. diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -835,7 +835,7 @@ a.append(3.0) r = weakref.ref(a, lambda a: l.append(a())) del a - gc.collect() + gc.collect(); gc.collect() # XXX needs two of them right now... assert l assert l[0] is None or len(l[0]) == 0 From noreply at buildbot.pypy.org Tue Nov 1 08:38:03 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 08:38:03 +0100 (CET) Subject: [pypy-commit] pypy default: Don't crash on reading or writing stuff to the history file Message-ID: <20111101073803.0EA8D820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48639:5e062fe507c3 Date: 2011-11-01 07:37 +0000 http://bitbucket.org/pypy/pypy/changeset/5e062fe507c3/ Log: Don't crash on reading or writing stuff to the history file if the encoding is wrong. Just fall back to utf-8, a kind of safe default. diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() From noreply at buildbot.pypy.org Tue Nov 1 08:39:14 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 08:39:14 +0100 (CET) Subject: [pypy-commit] pyrepl default: Port 5e062fe507c3 from pypy. Message-ID: <20111101073914.62F21820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r154:8f621e5d5cb7 Date: 2011-11-01 08:39 +0100 http://bitbucket.org/pypy/pyrepl/changeset/8f621e5d5cb7/ Log: Port 5e062fe507c3 from pypy. diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() From noreply at buildbot.pypy.org Tue Nov 1 09:28:54 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 09:28:54 +0100 (CET) Subject: [pypy-commit] pypy default: Update the list of irc topics. I had to change details in the script Message-ID: <20111101082854.36DE3820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48640:6e0e3791ee99 Date: 2011-11-01 09:28 +0100 http://bitbucket.org/pypy/pypy/changeset/6e0e3791ee99/ Log: Update the list of irc topics. I had to change details in the script that parses IRC topics, and I just noticed that it no longer matches the very old topics. But maybe it's not too bad anyway: this checkin adds the same order of magnitude of topics as it removes. diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): From notifications-noreply at bitbucket.org Tue Nov 1 11:10:50 2011 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Tue, 01 Nov 2011 10:10:50 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20111101101050.20515.64556@bitbucket02.managed.contegix.com> You have received a notification from wizz. Hi, I forked pypy. My fork is at https://bitbucket.org/wizz/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Tue Nov 1 13:04:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 13:04:17 +0100 (CET) Subject: [pypy-commit] pypy default: Expand the code of unpackiterable() into several versions instead Message-ID: <20111101120417.0C23C820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48641:33adaaf2cc05 Date: 2011-11-01 11:08 +0100 http://bitbucket.org/pypy/pypy/changeset/33adaaf2cc05/ Log: Expand the code of unpackiterable() into several versions instead of keeping a single does-it-all version. diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,55 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + return self._unpackiterable_known_length(w_iterator, + expected_length) + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +834,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -414,7 +414,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -422,7 +422,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): From noreply at buildbot.pypy.org Tue Nov 1 13:04:18 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 13:04:18 +0100 (CET) Subject: [pypy-commit] pypy default: Attempting to add a JitDriver to unpackiterable(generator). Message-ID: <20111101120418.39D0E820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48642:894e6aa4bb16 Date: 2011-11-01 11:37 +0100 http://bitbucket.org/pypy/pypy/changeset/894e6aa4bb16/ Log: Attempting to add a JitDriver to unpackiterable(generator). diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -778,6 +778,11 @@ Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + return w_iterator.unpackiterable() + # /xxx return self._unpackiterable_unknown_length(w_iterator, w_iterable) else: return self._unpackiterable_known_length(w_iterator, diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -155,3 +155,32 @@ "interrupting generator of ") break block = block.previous + + def unpackiterable(self): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + results_w = [] + frame = self.frame + if frame is None: # already finished + return results_w + self.running = True + try: + while True: + jitdriver.jit_merge_point(frame=frame) + w_result = frame.execute_frame(space.w_None) + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + return results_w + +jitdriver = jit.JitDriver(greens=['frame.pycode'], reds=['frame']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -267,3 +267,9 @@ assert r.startswith(" Author: Armin Rigo Branch: Changeset: r48643:1d8951851148 Date: 2011-11-01 11:40 +0100 http://bitbucket.org/pypy/pypy/changeset/1d8951851148/ Log: translation fix diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -783,7 +783,8 @@ if isinstance(w_iterator, GeneratorIterator): return w_iterator.unpackiterable() # /xxx - return self._unpackiterable_unknown_length(w_iterator, w_iterable) + lst_w = self._unpackiterable_unknown_length(w_iterator, w_iterable) + return lst_w[:] # make the resulting list resizable else: return self._unpackiterable_known_length(w_iterator, expected_length) From noreply at buildbot.pypy.org Tue Nov 1 13:04:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 13:04:20 +0100 (CET) Subject: [pypy-commit] pypy default: bah. Message-ID: <20111101120420.90233820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48644:0ffcdb4ec169 Date: 2011-11-01 11:41 +0100 http://bitbucket.org/pypy/pypy/changeset/0ffcdb4ec169/ Log: bah. diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -783,11 +783,11 @@ if isinstance(w_iterator, GeneratorIterator): return w_iterator.unpackiterable() # /xxx - lst_w = self._unpackiterable_unknown_length(w_iterator, w_iterable) + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) return lst_w[:] # make the resulting list resizable - else: - return self._unpackiterable_known_length(w_iterator, - expected_length) @jit.dont_look_inside def _unpackiterable_unknown_length(self, w_iterator, w_iterable): From noreply at buildbot.pypy.org Tue Nov 1 13:04:21 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 13:04:21 +0100 (CET) Subject: [pypy-commit] pypy default: Tweaks. Message-ID: <20111101120421.B9807820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48645:f9317d8169dd Date: 2011-11-01 11:56 +0100 http://bitbucket.org/pypy/pypy/changeset/f9317d8169dd/ Log: Tweaks. diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -171,7 +171,8 @@ self.running = True try: while True: - jitdriver.jit_merge_point(frame=frame) + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w) w_result = frame.execute_frame(space.w_None) # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: @@ -183,4 +184,5 @@ self.frame = None return results_w -jitdriver = jit.JitDriver(greens=['frame.pycode'], reds=['frame']) +jitdriver = jit.JitDriver(greens=['self.pycode'], + reds=['self', 'frame', 'results_w']) From noreply at buildbot.pypy.org Tue Nov 1 13:04:22 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 13:04:22 +0100 (CET) Subject: [pypy-commit] pypy default: Also use the same jitdriver for list(generator). Message-ID: <20111101120422.E6FA0820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48646:f908d360e53c Date: 2011-11-01 12:06 +0100 http://bitbucket.org/pypy/pypy/changeset/f908d360e53c/ Log: Also use the same jitdriver for list(generator). diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -781,7 +781,9 @@ # xxx special hack for speed from pypy.interpreter.generator import GeneratorIterator if isinstance(w_iterator, GeneratorIterator): - return w_iterator.unpackiterable() + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w # /xxx return self._unpackiterable_unknown_length(w_iterator, w_iterable) else: diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -156,7 +156,7 @@ break block = block.previous - def unpackiterable(self): + def unpack_into(self, results_w): """This is a hack for performance: runs the generator and collects all produced items in a list.""" # XXX copied and simplified version of send_ex() @@ -164,10 +164,9 @@ if self.running: raise OperationError(space.w_ValueError, space.wrap('generator already executing')) - results_w = [] frame = self.frame if frame is None: # already finished - return results_w + return self.running = True try: while True: @@ -182,7 +181,6 @@ frame.f_backref = jit.vref_None self.running = False self.frame = None - return results_w jitdriver = jit.JitDriver(greens=['self.pycode'], reds=['self', 'frame', 'results_w']) diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -54,7 +54,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -801,6 +801,20 @@ l.__delslice__(0, 2) assert l == [3, 4] + def test_list_from_set(self): + l = ['a'] + l.__init__(set('b')) + assert l == ['b'] + + def test_list_from_generator(self): + l = ['a'] + g = (i*i for i in range(5)) + l.__init__(g) + assert l == [0, 1, 4, 9, 16] + l.__init__(g) + assert l == [] + assert list(g) == [] + class AppTestListFastSubscr: From noreply at buildbot.pypy.org Tue Nov 1 15:17:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 1 Nov 2011 15:17:45 +0100 (CET) Subject: [pypy-commit] pypy default: Workaroundish fix for now: don't use green fields here. Message-ID: <20111101141745.05B20820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48647:14f9d8d50de2 Date: 2011-11-01 15:17 +0100 http://bitbucket.org/pypy/pypy/changeset/14f9d8d50de2/ Log: Workaroundish fix for now: don't use green fields here. Using a regular green variable is easy enough and more tested. diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -169,9 +169,11 @@ return self.running = True try: + pycode = self.pycode while True: jitdriver.jit_merge_point(self=self, frame=frame, - results_w=results_w) + results_w=results_w, + pycode=pycode) w_result = frame.execute_frame(space.w_None) # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: @@ -182,5 +184,5 @@ self.running = False self.frame = None -jitdriver = jit.JitDriver(greens=['self.pycode'], +jitdriver = jit.JitDriver(greens=['pycode'], reds=['self', 'frame', 'results_w']) From noreply at buildbot.pypy.org Tue Nov 1 15:55:36 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 1 Nov 2011 15:55:36 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use sld for PPC64 shifts in getitem and unicode. Message-ID: <20111101145536.13759820B3@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r48648:09dd516eeee3 Date: 2011-11-01 10:55 -0400 http://bitbucket.org/pypy/pypy/changeset/09dd516eeee3/ Log: Use sld for PPC64 shifts in getitem and unicode. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -332,7 +332,10 @@ if scale.value > 0: scale_loc = r.r0 self.mc.load_imm(r.r0, scale.value) - self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + if IS_PPC_32: + self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + else: + self.mc.sld(r.r0.value, ofs_loc.value, r.r0.value) else: scale_loc = ofs_loc @@ -356,7 +359,10 @@ if scale.value > 0: scale_loc = r.r0 self.mc.load_imm(r.r0, scale.value) - self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + else: + self.mc.sld(r.r0.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc if ofs.value > 0: @@ -416,7 +422,10 @@ def emit_unicodegetitem(self, op, arglocs, regalloc): res, base_loc, ofs_loc, scale, basesize, itemsize = arglocs - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + else: + self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) self.mc.add(res.value, base_loc.value, ofs_loc.value) if scale.value == 2: @@ -430,7 +439,10 @@ def emit_unicodesetitem(self, op, arglocs, regalloc): value_loc, base_loc, ofs_loc, scale, basesize, itemsize = arglocs - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + else: + self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) self.mc.add(base_loc.value, base_loc.value, ofs_loc.value) if scale.value == 2: From noreply at buildbot.pypy.org Tue Nov 1 16:04:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 Nov 2011 16:04:31 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: iteration becnhmakrs Message-ID: <20111101150431.8AB94820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: extradoc Changeset: r3955:17ee08a6ac71 Date: 2011-11-01 16:04 +0100 http://bitbucket.org/pypy/extradoc/changeset/17ee08a6ac71/ Log: iteration becnhmakrs diff --git a/talk/iwtc11/benchmarks/image/numpy_compare.py b/talk/iwtc11/benchmarks/image/numpy_compare.py --- a/talk/iwtc11/benchmarks/image/numpy_compare.py +++ b/talk/iwtc11/benchmarks/image/numpy_compare.py @@ -63,8 +63,14 @@ else: self.extend(data) - def new(self): - return Image(self.width, self.height, self.typecode) + def new(self, width=None, height=None, typecode=None): + if width is None: + width = self.width + if height is None: + height = self.height + if typecode is None: + typecode = self.typecode + return Image(width, height, typecode) def clone(self): return Image(self.width, self.height, self.typecode, self) diff --git a/talk/iwtc11/benchmarks/iter/generator.py b/talk/iwtc11/benchmarks/iter/generator.py new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/generator.py @@ -0,0 +1,104 @@ +from array import array + +def range1(n): + i = 0 + while i < n: + yield i + i += 1 + +def range2(w, h): + y = 0 + while y < h: + x = 0 + while x < w: + yield x, y + x += 1 + y += 1 + +def _sum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + +def _xsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + i + +def _wsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + len(a) + +def _sum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + +def _wsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + +def _xsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + +def _whsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + h + +def _xysum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + y + +def sum1d(args): + run1d(args, _sum1d) + return "sum1d" + +def xsum1d(args): + run1d(args, _xsum1d) + return "xsum1d" + +def wsum1d(args): + run1d(args, _wsum1d) + return "wsum1d" + +def sum2d(args): + run2d(args, _sum2d) + return "sum2d" + +def wsum2d(args): + run2d(args, _wsum2d) + return "wsum2d" + +def xsum2d(args): + run2d(args, _xsum2d) + return "xsum2d" + +def whsum2d(args): + run2d(args, _whsum2d) + return "whsum2d" + +def xysum2d(args): + run2d(args, _xysum2d) + return "xysum2d" + +def run1d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a) + return "sum1d" + +def run2d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a, 10000, 10000) + return "sum1d" + + diff --git a/talk/iwtc11/benchmarks/iter/generator2.py b/talk/iwtc11/benchmarks/iter/generator2.py new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/generator2.py @@ -0,0 +1,104 @@ +from array import array + +def range1(n): + i = 0 + while i < n: + yield i + i += 1 + +def range2(w, h): + y = x = 0 + while y < h: + yield x, y + x += 1 + if x >= w: + x = 0 + y += 1 + +def _sum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + +def _xsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + i + +def _wsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + len(a) + +def _sum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + +def _wsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + +def _xsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + +def _whsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + h + +def _xysum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + y + +def sum1d(args): + run1d(args, _sum1d) + return "sum1d" + +def xsum1d(args): + run1d(args, _xsum1d) + return "xsum1d" + +def wsum1d(args): + run1d(args, _wsum1d) + return "wsum1d" + +def sum2d(args): + run2d(args, _sum2d) + return "sum2d" + +def wsum2d(args): + run2d(args, _wsum2d) + return "wsum2d" + +def xsum2d(args): + run2d(args, _xsum2d) + return "xsum2d" + +def whsum2d(args): + run2d(args, _whsum2d) + return "whsum2d" + +def xysum2d(args): + run2d(args, _xysum2d) + return "xysum2d" + +def run1d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a) + return "sum1d" + +def run2d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a, 10000, 10000) + return "sum1d" + + diff --git a/talk/iwtc11/benchmarks/iter/iterator.py b/talk/iwtc11/benchmarks/iter/iterator.py new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/iterator.py @@ -0,0 +1,131 @@ +from array import array + +class range1(object): + def __init__(self, n): + self.i = -1 + self.n = n + + def __iter__(self): + return self + + def next(self): + self.i += 1 + if self.i >= self.n: + raise StopIteration + return self.i + +class range2(object): + def __init__(self, w, h): + self.x = -1 + self.y = 0 + self.w = w + self.h = h + + def __iter__(self): + return self + + def next(self): + self.x += 1 + if self.x >= self.w: + self.x = 0 + self.y += 1 + if self.y >= self.h: + raise StopIteration + return self.x, self.y + +def range2(w, h): + y = x = 0 + while y < h: + yield x, y + x += 1 + if x >= w: + x = 0 + y += 1 + +def _sum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + +def _xsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + i + +def _wsum1d(a): + sa = 0 + for i in range1(len(a)): + sa += a[i] + len(a) + +def _sum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + +def _wsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + +def _xsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + +def _whsum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + w + h + +def _xysum2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + sa += a[y*w + x] + x + y + +def sum1d(args): + run1d(args, _sum1d) + return "sum1d" + +def xsum1d(args): + run1d(args, _xsum1d) + return "xsum1d" + +def wsum1d(args): + run1d(args, _wsum1d) + return "wsum1d" + +def sum2d(args): + run2d(args, _sum2d) + return "sum2d" + +def wsum2d(args): + run2d(args, _wsum2d) + return "wsum2d" + +def xsum2d(args): + run2d(args, _xsum2d) + return "xsum2d" + +def whsum2d(args): + run2d(args, _whsum2d) + return "whsum2d" + +def xysum2d(args): + run2d(args, _xysum2d) + return "xysum2d" + +def run1d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a) + return "sum1d" + +def run2d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a, 10000, 10000) + return "sum1d" + + diff --git a/talk/iwtc11/benchmarks/iter/range.py b/talk/iwtc11/benchmarks/iter/range.py new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/range.py @@ -0,0 +1,94 @@ +from array import array + +def _sum1d(a): + sa = 0 + for i in xrange(len(a)): + sa += a[i] + +def _xsum1d(a): + sa = 0 + for i in xrange(len(a)): + sa += a[i] + i + +def _wsum1d(a): + sa = 0 + for i in xrange(len(a)): + sa += a[i] + len(a) + +def _sum2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + sa += a[y*w + x] + +def _wsum2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + sa += a[y*w + x] + w + +def _xsum2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + sa += a[y*w + x] + x + +def _whsum2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + sa += a[y*w + x] + w + h + +def _xysum2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + sa += a[y*w + x] + x + y + +def sum1d(args): + run1d(args, _sum1d) + return "sum1d" + +def xsum1d(args): + run1d(args, _xsum1d) + return "xsum1d" + +def wsum1d(args): + run1d(args, _wsum1d) + return "wsum1d" + +def sum2d(args): + run2d(args, _sum2d) + return "sum2d" + +def wsum2d(args): + run2d(args, _wsum2d) + return "wsum2d" + +def xsum2d(args): + run2d(args, _xsum2d) + return "xsum2d" + +def whsum2d(args): + run2d(args, _whsum2d) + return "whsum2d" + +def xysum2d(args): + run2d(args, _xysum2d) + return "xysum2d" + +def run1d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a) + return "sum1d" + +def run2d(args, f): + a = array('d', [1]) * 100000000 + n = int(args[0]) + for i in xrange(n): + f(a, 10000, 10000) + return "sum1d" + + diff --git a/talk/iwtc11/benchmarks/iter/sum1d.c b/talk/iwtc11/benchmarks/iter/sum1d.c new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/sum1d.c @@ -0,0 +1,22 @@ +#include +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y Author: Armin Rigo Branch: Changeset: r48649:06cddf70488a Date: 2011-11-01 17:52 +0100 http://bitbucket.org/pypy/pypy/changeset/06cddf70488a/ Log: Attempt to fix Windows translation. diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -78,7 +78,7 @@ from pypy.rlib.rwin32 import HANDLE, LPHANDLE from pypy.rlib.rwin32 import NULL_HANDLE, INVALID_HANDLE_VALUE from pypy.rlib.rwin32 import DWORD, WORD, DWORD_PTR, LPDWORD - from pypy.rlib.rwin32 import BOOL, LPVOID, LPCVOID, LPCSTR, SIZE_T + from pypy.rlib.rwin32 import BOOL, LPVOID, LPCSTR, SIZE_T from pypy.rlib.rwin32 import INT, LONG, PLONG # export the constants inside and outside. see __init__.py @@ -174,9 +174,9 @@ DuplicateHandle = winexternal('DuplicateHandle', [HANDLE, HANDLE, HANDLE, LPHANDLE, DWORD, BOOL, DWORD], BOOL) CreateFileMapping = winexternal('CreateFileMappingA', [HANDLE, rwin32.LPSECURITY_ATTRIBUTES, DWORD, DWORD, DWORD, LPCSTR], HANDLE) MapViewOfFile = winexternal('MapViewOfFile', [HANDLE, DWORD, DWORD, DWORD, SIZE_T], LPCSTR)##!!LPVOID) - UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCVOID], BOOL, + UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCSTR], BOOL, threadsafe=False) - FlushViewOfFile = winexternal('FlushViewOfFile', [LPCVOID, SIZE_T], BOOL) + FlushViewOfFile = winexternal('FlushViewOfFile', [LPCSTR, SIZE_T], BOOL) SetFilePointer = winexternal('SetFilePointer', [HANDLE, LONG, PLONG, DWORD], DWORD) SetEndOfFile = winexternal('SetEndOfFile', [HANDLE], BOOL) VirtualAlloc = winexternal('VirtualAlloc', From noreply at buildbot.pypy.org Tue Nov 1 19:35:40 2011 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 1 Nov 2011 19:35:40 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: repr/str: add tests for numpy compliance, code cleanup Message-ID: <20111101183540.4F92E820B3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim Changeset: r48650:6eff7c357df1 Date: 2011-11-01 20:34 +0200 http://bitbucket.org/pypy/pypy/changeset/6eff7c357df1/ Log: repr/str: add tests for numpy compliance, code cleanup diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -244,12 +244,15 @@ return self.get_concrete().descr_len(space) def descr_repr(self, space): - # Simple implementation so that we can see the array. Needs work. + # Simple implementation so that we can see the array. + # Since what we want is to print a plethora of 2d views, + # use recursive calls to tostr() to do the work. concrete = self.get_concrete() - new_sig = signature.Signature.find_sig([ - NDimSlice.signature, self.signature - ]) - res = "array(" + NDimSlice(concrete, new_sig, [], self.shape[:]).tostr(True, indent=' ') + res = "array(" + res0 = NDimSlice(concrete, self.signature, [], self.shape).tostr(True, indent=' ') + if res0=="[]" and isinstance(self,NDimSlice): + res0 += ", shape=%s"%(tuple(self.shape),) + res += res0 dtype = concrete.find_dtype() if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or not self.find_size(): @@ -258,12 +261,11 @@ return space.wrap(res) def descr_str(self, space): - # Simple implementation so that we can see the array. Needs work. + # Simple implementation so that we can see the array. + # Since what we want is to print a plethora of 2d views, let + # a slice do the work for us. concrete = self.get_concrete() - new_sig = signature.Signature.find_sig([ - NDimSlice.signature, self.signature - ]) - return space.wrap(NDimSlice(concrete, new_sig, [], self.shape[:]).tostr(False)) + return space.wrap(NDimSlice(concrete, self.signature, [], self.shape).tostr(False)) def _index_of_single_item(self, space, w_idx): # we assume C ordering for now @@ -668,6 +670,9 @@ ret = '' dtype = self.find_dtype() ndims = len(self.shape)#-self.shape_reduction + if any([s==0 for s in self.shape]): + ret += '[]' + return ret if ndims>2: ret += '[' for i in range(self.shape[0]): @@ -698,7 +703,7 @@ for j in range(self.shape[0])]) ret += ']' else: - ret += '[]' + ret += dtype.str_format(self.eval(0)) return ret class NDimArray(BaseArray): def __init__(self, size, shape, dtype): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -91,6 +91,9 @@ a = array((range(5),range(5,10)), dtype="int16") b=a[1,2:] assert repr(b) == "array([7, 8, 9], dtype=int16)" + #This is the way cpython numpy does it - an empty slice prints its shape + b=a[2:1,] + assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): from numpy import array, zeros @@ -114,6 +117,9 @@ a = array((range(5),range(5,10)), dtype="int16") assert str(a) == "[[0 1 2 3 4],\n [5 6 7 8 9]]" + a = array(3,dtype=int) + assert str(a) == "3" + def test_str_slice(self): from numpy import array, zeros a = array(range(5), float) @@ -125,6 +131,8 @@ a = array((range(5),range(5,10)), dtype="int16") b=a[1,2:] assert str(b) == "[7 8 9]" + b=a[2:1,] + assert str(b) == "[]" def test_getitem(self): from numpy import array From noreply at buildbot.pypy.org Tue Nov 1 20:17:10 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 1 Nov 2011 20:17:10 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Optimize zero-extend and sign-extend in _ensure_result_bit_extension Message-ID: <20111101191710.2C089820B3@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r48651:058e97bccb87 Date: 2011-11-01 15:16 -0400 http://bitbucket.org/pypy/pypy/changeset/058e97bccb87/ Log: Optimize zero-extend and sign-extend in _ensure_result_bit_extension and add PPC64 support. diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -666,25 +666,30 @@ assert 0, "not supported location" def _ensure_result_bit_extension(self, resloc, size, signed): - if size == 4: - return if size == 1: if not signed: #unsigned char - self.mc.load_imm(r.r0, 0xFF) - self.mc.and_(resloc.value, resloc.value, r.r0.value) + if IS_PPC32: + self.mc.load_imm(r.r0, 0xFF) + self.mc.and_(resloc.value, resloc.value, r.r0.value) + else: + self.mc.rldicl(resloc.value, resloc.value, 0, 56) else: - self.mc.load_imm(r.r0, 24) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.sraw(resloc.value, resloc.value, r.r0.value) + self.mc.extsb(resloc.value, resloc.value) elif size == 2: if not signed: - self.mc.load_imm(r.r0, 16) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.srw(resloc.value, resloc.value, r.r0.value) + if IS_PPC_32: + self.mc.load_imm(r.r0, 16) + self.mc.slw(resloc.value, resloc.value, r.r0.value) + self.mc.srw(resloc.value, resloc.value, r.r0.value) + else: + self.mc.rldicl(resloc.value, resloc.value, 0, 48) else: - self.mc.load_imm(r.r0, 16) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.sraw(resloc.value, resloc.value, r.r0.value) + self.mc.extsh(resloc.value, resloc.value) + elif size == 4: + if not signed: + self.mc.rldicl(resloc.value, resloc.value, 0, 32) + else: + self.mc.extsw(resloc.value, resloc.value) def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: From noreply at buildbot.pypy.org Tue Nov 1 22:07:42 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 Nov 2011 22:07:42 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: typo Message-ID: <20111101210742.007FF82A87@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: extradoc Changeset: r3957:ab6bc92f1d6d Date: 2011-11-01 22:07 +0100 http://bitbucket.org/pypy/extradoc/changeset/ab6bc92f1d6d/ Log: typo diff --git a/talk/iwtc11/benchmarks/iter/mean1d.c b/talk/iwtc11/benchmarks/iter/mean1d.c --- a/talk/iwtc11/benchmarks/iter/mean1d.c +++ b/talk/iwtc11/benchmarks/iter/mean1d.c @@ -20,6 +20,6 @@ double data[] = {-1.0, 1.0}; for (i=0; i Author: Hakan Ardo Branch: extradoc Changeset: r3956:94871ac1f542 Date: 2011-11-01 22:06 +0100 http://bitbucket.org/pypy/extradoc/changeset/94871ac1f542/ Log: a few more becnhmarks and some results diff --git a/talk/iwtc11/benchmarks/iter/generator.py b/talk/iwtc11/benchmarks/iter/generator.py --- a/talk/iwtc11/benchmarks/iter/generator.py +++ b/talk/iwtc11/benchmarks/iter/generator.py @@ -55,6 +55,35 @@ for x, y in range2(w, h): sa += a[y*w + x] + x + y +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -87,15 +116,37 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) diff --git a/talk/iwtc11/benchmarks/iter/generator2.py b/talk/iwtc11/benchmarks/iter/generator2.py --- a/talk/iwtc11/benchmarks/iter/generator2.py +++ b/talk/iwtc11/benchmarks/iter/generator2.py @@ -30,6 +30,35 @@ for i in range1(len(a)): sa += a[i] + len(a) +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def _sum2d(a, w, h): sa = 0 for x, y in range2(w, h): @@ -87,15 +116,37 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) diff --git a/talk/iwtc11/benchmarks/iter/iterator.py b/talk/iwtc11/benchmarks/iter/iterator.py --- a/talk/iwtc11/benchmarks/iter/iterator.py +++ b/talk/iwtc11/benchmarks/iter/iterator.py @@ -82,6 +82,36 @@ for x, y in range2(w, h): sa += a[y*w + x] + x + y +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -114,18 +144,39 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - diff --git a/talk/iwtc11/benchmarks/iter/mean1d.c b/talk/iwtc11/benchmarks/iter/mean1d.c new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/mean1d.c @@ -0,0 +1,25 @@ +#include +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i a[i]) { + sa -= 1.0/(i + 1.0); + } else if (sa < a[i]) { + sa += 1.0/(i + 1.0); + } + } + return sa; +} + +#define N 100000000 + +int main(int ac, char **av) { + double *a = malloc(N*sizeof(double)); + int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in xrange(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -77,18 +107,39 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - diff --git a/talk/iwtc11/benchmarks/iter/result.txt b/talk/iwtc11/benchmarks/iter/result.txt new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/result.txt @@ -0,0 +1,84 @@ +gcc -O3 +sum1d: 1.28 +- 0.0 +sum2d: 1.282 +- 0.004472135955 +whsum2d: 1.348 +- 0.0148323969742 +wsum1d: 1.312 +- 0.00836660026534 +wsum2d: 1.296 +- 0.00894427191 +xsum1d: 2.67 +- 0.0 +xsum2d: 2.684 +- 0.00894427191 +xysum2d: 3.89 +- 0.00707106781187 +sum1d: 12.246 +- 0.0955510334847 +sum1d: 8.712 +- 0.0383405790254 +sum1d: 2.534 +- 0.0167332005307 +sum2d: 1.294 +- 0.00547722557505 + +pypy iter/generator2.py +sum1d: 23.9832116127 +- 0.614888065755 +sum2d: 25.14532938 +- 0.539002370348 +whsum2d: 25.3205077648 +- 0.95213818417 +wsum1d: 23.9423354149 +- 0.350982347591 +wsum2d: 25.5328037739 +- 0.0682052173271 +xsum1d: 23.7376705647 +- 0.25634553829 +xsum2d: 24.7689536095 +- 0.0512726458591 +xysum2d: 25.1449195862 +- 0.16430452312 +mean1d: 31.7602347374 +- 0.427882906402 +median1d: 43.1415281773 +- 0.210466180126 +ripple1d: 34.0283002853 +- 0.499598282172 +ripple2d: 38.4699347973 +- 0.0901560447042 + +pypy iter/generator.py +sum1d: 23.7244842052 +- 0.0689331205409 +sum2d: 21.658352232 +- 0.416635728484 +whsum2d: 22.5176876068 +- 0.502224419925 +wsum1d: 23.8211816788 +- 0.266302896949 +wsum2d: 21.1811442852 +- 0.0340298556226 +xsum1d: 23.5302821636 +- 0.347050395147 +xsum2d: 21.3646360397 +- 0.0404815336251 +xysum2d: 23.3054399967 +- 0.605652073438 +mean1d: 29.9068798542 +- 0.137142642142 +median1d: 47.3418916225 +- 0.745256472188 +ripple1d: 38.7682027817 +- 0.151127654833 +ripple2d: 34.50409832 +- 0.450633025924 + +pypy iter/iterator.py +sum1d: 9.11433362961 +- 0.152338942619 +sum2d: 24.8545044422 +- 0.337170412246 +whsum2d: 25.8045747757 +- 0.20809202412 +wsum1d: 9.10523662567 +- 0.0244805405482 +wsum2d: 26.1566844463 +- 0.318886535207 +xsum1d: 9.19495682716 +- 0.0873697747873 +xsum2d: 25.3517719746 +- 0.164766505808 +xysum2d: 26.6187932014 +- 0.209184440299 +mean1d: 16.4915462017 +- 0.017852602834 +median1d: 20.7653402328 +- 0.0630841106192 +ripple1d: 17.4464035511 +- 0.0158743067755 +ripple2d: 39.4511544228 +- 0.627375567049 + +pypy iter/range.py +sum1d: 4.49761414528 +- 0.0188623565601 +sum2d: 4.55957078934 +- 0.00243949374013 +whsum2d: 5.00070867538 +- 0.00618486143797 +wsum1d: 4.49047336578 +- 0.00411149414617 +wsum2d: 4.96318297386 +- 0.00222332048187 +xsum1d: 4.49802703857 +- 0.00188882921078 +xsum2d: 4.9497563839 +- 0.00264963854777 +xysum2d: 5.36755475998 +- 0.0024734467877 +mean1d: 14.0295339584 +- 0.242603017308 +median1d: 13.3812539577 +- 0.219532477212 +ripple1d: 9.65058441162 +- 0.258182544452 +ripple2d: 17.3434608459 +- 0.254643240791 + +pypy iter/while.py +sum1d: 2.96192045212 +- 0.0202773262937 +sum2d: 4.09613256454 +- 0.00233141002671 +whsum2d: 4.1995736599 +- 0.00203621363823 +wsum1d: 3.02741799355 +- 0.00262930561514 +wsum2d: 4.09814844131 +- 0.00222148567149 +xsum1d: 3.31641759872 +- 0.00301746769052 +xsum2d: 4.09652075768 +- 0.00237008101856 +xysum2d: 4.10714039803 +- 0.00191674465195 +mean1d: 13.9958492279 +- 0.244810166895 +median1d: 14.8796311855 +- 0.242170910321 +ripple1d: 7.4315820694 +- 0.24302663505 +ripple2d: 12.0281677723 +- 0.262682059117 + diff --git a/talk/iwtc11/benchmarks/iter/ripple1d.c b/talk/iwtc11/benchmarks/iter/ripple1d.c new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/ripple1d.c @@ -0,0 +1,30 @@ +#include +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i a[i]) { + sa -= 0.1; + } else if (sa < a[i]) { + sa += 0.1; + } + } + return sa; +} + +#define N 100000000 + +int main(int ac, char **av) { + double *a = malloc(N*sizeof(double)); + int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y a[y*w + x]) { + sa -= 0.1; + } else if (sa < a[y*w + x]) { + sa += 0.1; + } + } + return sa; +} + +#define W 10000 +#define H 10000 + +int main(int ac, char **av) { + double *a = malloc(W*H*sizeof(double)); + int i, n = atoi(av[1]); + for (i=0; i a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + i += 1 + +def _ripple1d(a): + sa = i = 0 + while i < len(a): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + i += 1 + +def _ripple2d(a, w, h): + sa = 0 + sa = y = 0 + while y < h: + x = 0 + while x < w: + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + x += 1 + y += 1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -97,18 +134,39 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - diff --git a/talk/iwtc11/benchmarks/runiter.sh b/talk/iwtc11/benchmarks/runiter.sh --- a/talk/iwtc11/benchmarks/runiter.sh +++ b/talk/iwtc11/benchmarks/runiter.sh @@ -1,17 +1,16 @@ #!/bin/sh -BENCHMARKS="sum1d sum2d whsum2d wsum1d wsum2d xsum1d xsum2d xysum2d" - +BENCHMARKS="sum1d sum2d whsum2d wsum1d wsum2d xsum1d xsum2d xysum2d mean1d median1d ripple1d ripple2d" echo gcc -O3 for b in $BENCHMARKS; do - echo ./runner.py -n 5 -c "gcc -O3" iter/$b.c 10 + ./runner.py -n 5 -c "gcc -O3" iter/$b.c 10 done echo for p in iter/*.py; do echo pypy $p for b in $BENCHMARKS; do - pypy ./runner.py -n 5 $p $b 10 + /tmp/pypy-trunk ./runner.py -n 5 $p $b 10 done echo done \ No newline at end of file From noreply at buildbot.pypy.org Wed Nov 2 02:10:42 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 02:10:42 +0100 (CET) Subject: [pypy-commit] pypy default: failing test Message-ID: <20111102011042.F24CC820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r48652:1dde33279496 Date: 2011-11-01 21:10 -0400 http://bitbucket.org/pypy/pypy/changeset/1dde33279496/ Log: failing test diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -273,3 +273,9 @@ assert set(g) == set([0, 1, 4, 9, 16, 25]) assert set(g) == set() assert set(i for i in range(0)) == set() + + def test_explicit_stop_iteration_unpackiterable(self): + def f(): + yield 1 + raise StopIteration + assert tuple(f()) == (1,) \ No newline at end of file From noreply at buildbot.pypy.org Wed Nov 2 02:20:40 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 02:20:40 +0100 (CET) Subject: [pypy-commit] pypy default: fix for the failing test - StopIteration raised from anywhere kills the generator Message-ID: <20111102012040.9C134820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r48653:82489bdede61 Date: 2011-11-01 21:20 -0400 http://bitbucket.org/pypy/pypy/changeset/82489bdede61/ Log: fix for the failing test - StopIteration raised from anywhere kills the generator diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -174,7 +174,12 @@ jitdriver.jit_merge_point(self=self, frame=frame, results_w=results_w, pycode=pycode) - w_result = frame.execute_frame(space.w_None) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: break From noreply at buildbot.pypy.org Wed Nov 2 08:24:55 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 2 Nov 2011 08:24:55 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: minor fixes Message-ID: <20111102072455.E8A5E820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: extradoc Changeset: r3958:6dbe2306c60f Date: 2011-11-02 08:24 +0100 http://bitbucket.org/pypy/extradoc/changeset/6dbe2306c60f/ Log: minor fixes diff --git a/talk/iwtc11/benchmarks/iter/generator.py b/talk/iwtc11/benchmarks/iter/generator.py --- a/talk/iwtc11/benchmarks/iter/generator.py +++ b/talk/iwtc11/benchmarks/iter/generator.py @@ -152,4 +152,7 @@ f(a, 10000, 10000) return "sum1d" +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/generator2.py b/talk/iwtc11/benchmarks/iter/generator2.py --- a/talk/iwtc11/benchmarks/iter/generator2.py +++ b/talk/iwtc11/benchmarks/iter/generator2.py @@ -152,4 +152,6 @@ f(a, 10000, 10000) return "sum1d" - +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/iterator.py b/talk/iwtc11/benchmarks/iter/iterator.py --- a/talk/iwtc11/benchmarks/iter/iterator.py +++ b/talk/iwtc11/benchmarks/iter/iterator.py @@ -180,3 +180,6 @@ f(a, 10000, 10000) return "sum1d" +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/range.py b/talk/iwtc11/benchmarks/iter/range.py --- a/talk/iwtc11/benchmarks/iter/range.py +++ b/talk/iwtc11/benchmarks/iter/range.py @@ -143,3 +143,6 @@ f(a, 10000, 10000) return "sum1d" +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/result.txt b/talk/iwtc11/benchmarks/iter/result.txt --- a/talk/iwtc11/benchmarks/iter/result.txt +++ b/talk/iwtc11/benchmarks/iter/result.txt @@ -10,7 +10,7 @@ mean1d: 12.246 +- 0.0955510334847 median1d: 8.712 +- 0.0383405790254 ripple1d: 2.534 +- 0.0167332005307 -ripple2d: 1.294 +- 0.00547722557505 +ripple2d: 2.644 +- 0.0219089023002 pypy iter/generator2.py sum1d: 23.9832116127 +- 0.614888065755 diff --git a/talk/iwtc11/benchmarks/iter/ripple2d.c b/talk/iwtc11/benchmarks/iter/ripple2d.c --- a/talk/iwtc11/benchmarks/iter/ripple2d.c +++ b/talk/iwtc11/benchmarks/iter/ripple2d.c @@ -23,6 +23,8 @@ int main(int ac, char **av) { double *a = malloc(W*H*sizeof(double)); int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i Author: Armin Rigo Branch: Changeset: r48654:f1e31eaa1fa3 Date: 2011-11-02 11:48 +0100 http://bitbucket.org/pypy/pypy/changeset/f1e31eaa1fa3/ Log: On Windows, the renamed binary file must end with ".exe". diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -53,6 +53,8 @@ if not pypy_c.check(): print pypy_c raise PyPyCNotFound('Please compile pypy first, using translate.py') + if sys.platform == 'win32' and not rename_pypy_c.lower().endswith('.exe'): + rename_pypy_c += '.exe' binaries = [(pypy_c, rename_pypy_c)] # if sys.platform == 'win32': From noreply at buildbot.pypy.org Wed Nov 2 13:55:13 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 13:55:13 +0100 (CET) Subject: [pypy-commit] pypy default: Windows fix. Message-ID: <20111102125513.426BF820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48655:a6d3047f241c Date: 2011-11-02 13:54 +0100 http://bitbucket.org/pypy/pypy/changeset/a6d3047f241c/ Log: Windows fix. diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress From noreply at buildbot.pypy.org Wed Nov 2 14:03:31 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 14:03:31 +0100 (CET) Subject: [pypy-commit] pypy default: Skip if we on't have curses. Message-ID: <20111102130331.CEC78820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48656:24570e79aebc Date: 2011-11-02 14:01 +0100 http://bitbucket.org/pypy/pypy/changeset/24570e79aebc/ Log: Skip if we on't have curses. diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses From noreply at buildbot.pypy.org Wed Nov 2 14:03:33 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 14:03:33 +0100 (CET) Subject: [pypy-commit] pypy default: Accept py.test.skip()'s exception as also meaning "skip this package" here. Message-ID: <20111102130333.09E38820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48657:8ed4c87f1e89 Date: 2011-11-02 14:03 +0100 http://bitbucket.org/pypy/pypy/changeset/8ed4c87f1e89/ Log: Accept py.test.skip()'s exception as also meaning "skip this package" here. diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + From noreply at buildbot.pypy.org Wed Nov 2 14:52:11 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 14:52:11 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: Merged default in, resolved merge conflicts (involved removing an optimization that had been done in a different way on default). Message-ID: <20111102135211.6C834820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48658:56c40b33f07e Date: 2011-11-02 09:51 -0400 http://bitbucket.org/pypy/pypy/changeset/56c40b33f07e/ Log: Merged default in, resolved merge conflicts (involved removing an optimization that had been done in a different way on default). diff too long, truncating to 10000 out of 19409 lines diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. + + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. + version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. + pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. + """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! + meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. + + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. + raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? + + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/pypy/annotation/classdef.py b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -72,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -91,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -112,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + @@ -127,7 +128,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. _taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -309,7 +309,6 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. _`Differences between PyPy and CPython`: cpython_differences.html diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2925,14 +2925,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2967,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,8 +3013,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): @@ -3057,14 +3054,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3104,8 +3100,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3126,8 +3121,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3157,8 +3151,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3179,8 +3172,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3197,14 +3189,13 @@ def FunctionDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3215,14 +3206,13 @@ def FunctionDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3266,8 +3256,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3284,14 +3273,13 @@ def ClassDef_get_bases(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases @@ -3302,14 +3290,13 @@ def ClassDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3320,14 +3307,13 @@ def ClassDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3372,8 +3358,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): @@ -3414,14 +3399,13 @@ def Delete_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3457,14 +3441,13 @@ def Assign_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3479,8 +3462,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): @@ -3527,8 +3509,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): @@ -3549,8 +3530,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3573,8 +3553,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): @@ -3621,8 +3600,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): @@ -3639,14 +3617,13 @@ def Print_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -3661,8 +3638,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3710,8 +3686,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): @@ -3732,8 +3707,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): @@ -3750,14 +3724,13 @@ def For_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3768,14 +3741,13 @@ def For_get_orelse(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3819,8 +3791,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): @@ -3837,14 +3808,13 @@ def While_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3855,14 +3825,13 @@ def While_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3905,8 +3874,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): @@ -3923,14 +3891,13 @@ def If_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3941,14 +3908,13 @@ def If_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3991,8 +3957,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): @@ -4013,8 +3978,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): @@ -4031,14 +3995,13 @@ def With_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4080,8 +4043,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): @@ -4102,8 +4064,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): @@ -4124,8 +4085,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): @@ -4168,14 +4128,13 @@ def TryExcept_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4186,14 +4145,13 @@ def TryExcept_get_handlers(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers @@ -4204,14 +4162,13 @@ def TryExcept_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -4251,14 +4208,13 @@ def TryFinally_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4269,14 +4225,13 @@ def TryFinally_get_finalbody(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody @@ -4318,8 +4273,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def Assert_set_test(space, w_self, w_new_value): @@ -4340,8 +4294,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): @@ -4383,14 +4336,13 @@ def Import_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4430,8 +4382,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4451,14 +4402,13 @@ def ImportFrom_get_names(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4473,8 +4423,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4522,8 +4471,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): @@ -4544,8 +4492,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): @@ -4566,8 +4513,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): @@ -4610,14 +4556,13 @@ def Global_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4657,8 +4602,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): @@ -4754,8 +4698,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4776,8 +4719,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4807,8 +4749,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4827,14 +4768,13 @@ def BoolOp_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -4875,8 +4815,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): @@ -4897,8 +4836,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4921,8 +4859,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): @@ -4969,8 +4906,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4993,8 +4929,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): @@ -5040,8 +4975,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5062,8 +4996,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): @@ -5109,8 +5042,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): @@ -5131,8 +5063,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): @@ -5153,8 +5084,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): @@ -5197,14 +5127,13 @@ def Dict_get_keys(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys @@ -5215,14 +5144,13 @@ def Dict_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -5260,14 +5188,13 @@ def Set_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -5307,8 +5234,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): @@ -5325,14 +5251,13 @@ def ListComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5373,8 +5298,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): @@ -5391,14 +5315,13 @@ def SetComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5439,8 +5362,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): @@ -5461,8 +5383,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): @@ -5479,14 +5400,13 @@ def DictComp_get_generators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5528,8 +5448,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): @@ -5546,14 +5465,13 @@ def GeneratorExp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5594,8 +5512,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): @@ -5640,8 +5557,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): @@ -5658,14 +5574,13 @@ def Compare_get_ops(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops @@ -5676,14 +5591,13 @@ def Compare_get_comparators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators @@ -5726,8 +5640,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): @@ -5744,14 +5657,13 @@ def Call_get_args(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -5762,14 +5674,13 @@ def Call_get_keywords(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords @@ -5784,8 +5695,7 @@ return w_obj if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): @@ -5806,8 +5716,7 @@ return w_obj if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): @@ -5858,8 +5767,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): @@ -5904,8 +5812,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5950,8 +5857,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5996,8 +5902,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): @@ -6018,8 +5923,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6040,8 +5944,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6090,8 +5993,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): @@ -6112,8 +6014,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): @@ -6134,8 +6035,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6184,8 +6084,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6206,8 +6105,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6251,14 +6149,13 @@ def List_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6273,8 +6170,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6319,14 +6215,13 @@ def Tuple_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6341,8 +6236,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6391,8 +6285,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6510,8 +6403,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): @@ -6532,8 +6424,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): @@ -6554,8 +6445,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): @@ -6598,14 +6488,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,8 +6534,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): @@ -6915,8 +6803,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): @@ -6937,8 +6824,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): @@ -6955,14 +6841,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7004,8 +6889,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7026,8 +6910,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7057,8 +6940,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): @@ -7079,8 +6961,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): @@ -7097,14 +6978,13 @@ def ExceptHandler_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -7142,14 +7022,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -7164,8 +7043,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'vararg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'vararg') return space.wrap(w_self.vararg) def arguments_set_vararg(space, w_self, w_new_value): @@ -7189,8 +7067,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwarg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg') return space.wrap(w_self.kwarg) def arguments_set_kwarg(space, w_self, w_new_value): @@ -7210,14 +7087,13 @@ def arguments_get_defaults(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'defaults'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults') if w_self.w_defaults is None: if w_self.defaults is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.defaults] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_defaults = w_list return w_self.w_defaults @@ -7261,8 +7137,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'arg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'arg') return space.wrap(w_self.arg) def keyword_set_arg(space, w_self, w_new_value): @@ -7283,8 +7158,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def keyword_set_value(space, w_self, w_new_value): @@ -7330,8 +7204,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def alias_set_name(space, w_self, w_new_value): @@ -7352,8 +7225,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'asname'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'asname') return space.wrap(w_self.asname) def alias_set_asname(space, w_self, w_new_value): diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -414,13 +414,12 @@ self.emit(" return w_obj", 1) self.emit("if not w_self.initialization_state & %s:" % (flag,), 1) self.emit("typename = space.type(w_self).getname(space)", 2) - self.emit("w_err = space.wrap(\"'%%s' object has no attribute '%s'\" %% typename)" % + self.emit("raise operationerrfmt(space.w_AttributeError, \"'%%s' object has no attribute '%%s'\", typename, '%s')" % (field.name,), 2) - self.emit("raise OperationError(space.w_AttributeError, w_err)", 2) if field.seq: self.emit("if w_self.w_%s is None:" % (field.name,), 1) self.emit("if w_self.%s is None:" % (field.name,), 2) - self.emit("w_list = space.newlist([])", 3) + self.emit("list_w = []", 3) self.emit("else:", 2) if field.type.value in self.data.simple_types: wrapper = "%s_to_class[node - 1]()" % (field.type,) @@ -428,7 +427,7 @@ wrapper = "space.wrap(node)" self.emit("list_w = [%s for node in w_self.%s]" % (wrapper, field.name), 3) - self.emit("w_list = space.newlist(list_w)", 3) + self.emit("w_list = space.newlist(list_w)", 2) self.emit("w_self.w_%s = w_list" % (field.name,), 2) self.emit("return w_self.w_%s" % (field.name,), 1) elif field.type.value in self.data.simple_types: @@ -540,7 +539,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -639,9 +638,7 @@ missing = required[i] if missing is not None: err = "required field \\"%s\\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) + raise operationerrfmt(space.w_TypeError, err, missing, host) raise AssertionError("should not reach here") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -175,6 +175,9 @@ self.w_tracefunc = w_func self.space.frame_trace_action.fire() + def gettrace(self): + return self.w_tracefunc + def setprofile(self, w_func): """Set the global trace function.""" if self.space.is_w(w_func, self.space.w_None): @@ -388,8 +391,11 @@ def decrement_ticker(self, by): value = self._ticker if self.has_bytecode_counter: # this 'if' is constant-folded - value -= by - self._ticker = value + if jit.isconstant(by) and by == 0: + pass # normally constant-folded too + else: + value -= by + self._ticker = value return value diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/pyparser/pytokenizer.py b/pypy/interpreter/pyparser/pytokenizer.py --- a/pypy/interpreter/pyparser/pytokenizer.py +++ b/pypy/interpreter/pyparser/pytokenizer.py @@ -226,7 +226,7 @@ parenlev = parenlev - 1 if parenlev < 0: raise TokenError("unmatched '%s'" % initial, line, - lnum-1, 0, token_list) + lnum, start + 1, token_list) if token in python_opmap: punct = python_opmap[token] else: diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -87,6 +87,10 @@ assert exc.lineno == 1 assert exc.offset == 5 assert exc.lastlineno == 5 + exc = py.test.raises(SyntaxError, parse, "abc)").value + assert exc.msg == "unmatched ')'" + assert exc.lineno == 1 + assert exc.offset == 4 def test_is(self): self.parse("x is y") diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() + +def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): + cache = gc_ll_descr._cache_interiorfield + try: + return cache[(ARRAY, FIELDTP, name)] + except KeyError: + arraydescr = get_array_descr(gc_ll_descr, ARRAY) + fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + descr = InteriorFieldDescr(arraydescr, fielddescr) + cache[(ARRAY, FIELDTP, name)] = descr + return descr # ____________________________________________________________ # CallDescrs @@ -525,7 +570,8 @@ # if TYPE is lltype.Float or is_longlong(TYPE): setattr(Descr, floatattrname, True) - elif TYPE is not lltype.Bool and rffi.cast(TYPE, -1) == -1: + elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): setattr(Descr, signedattrname, True) # _cache[nameprefix, TYPE] = Descr diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -45,6 +45,22 @@ def freeing_block(self, start, stop): pass + def get_funcptr_for_newarray(self): + return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array) + def get_funcptr_for_newstr(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str) + def get_funcptr_for_newunicode(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode) + + + def record_constptrs(self, op, gcrefs_output_list): + for i in range(op.numargs()): + v = op.getarg(i) + if isinstance(v, ConstPtr) and bool(v.value): + p = v.value + rgc._make_sure_does_not_move(p) + gcrefs_output_list.append(p) + # ____________________________________________________________ class GcLLDescr_boehm(GcLLDescription): @@ -88,6 +104,39 @@ malloc_fn_ptr = self.configure_boehm_once() self.funcptr_for_new = malloc_fn_ptr + def malloc_array(basesize, itemsize, ofs_length, num_elem): + try: + size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + except OverflowError: + return lltype.nullptr(llmemory.GCREF.TO) + res = self.funcptr_for_new(size) + if not res: + return res + rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + return res + self.malloc_array = malloc_array + self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( + [lltype.Signed] * 4, llmemory.GCREF)) + + + (str_basesize, str_itemsize, str_ofs_length + ) = symbolic.get_array_token(rstr.STR, self.translate_support_code) + (unicode_basesize, unicode_itemsize, unicode_ofs_length + ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) + def malloc_str(length): + return self.malloc_array( + str_basesize, str_itemsize, str_ofs_length, length + ) + def malloc_unicode(length): + return self.malloc_array( + unicode_basesize, unicode_itemsize, unicode_ofs_length, length + ) + self.malloc_str = malloc_str + self.malloc_unicode = malloc_unicode + self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( + [lltype.Signed], llmemory.GCREF)) + + # on some platform GC_init is required before any other # GC_* functions, call it here for the benefit of tests # XXX move this to tests @@ -108,38 +157,34 @@ ofs_length = arraydescr.get_ofs_length(self.translate_support_code) basesize = arraydescr.get_base_size(self.translate_support_code) itemsize = arraydescr.get_item_size(self.translate_support_code) - size = basesize + itemsize * num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_array(basesize, itemsize, ofs_length, num_elem) def gc_malloc_str(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.translate_support_code) - assert itemsize == 1 - size = basesize + num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_str(num_elem) def gc_malloc_unicode(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - size = basesize + num_elem * itemsize - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_unicode(num_elem) def args_for_new(self, sizedescr): assert isinstance(sizedescr, BaseSizeDescr) return [sizedescr.size] + def args_for_new_array(self, arraydescr): + ofs_length = arraydescr.get_ofs_length(self.translate_support_code) + basesize = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + return [basesize, itemsize, ofs_length] + def get_funcptr_for_new(self): return self.funcptr_for_new - get_funcptr_for_newarray = None - get_funcptr_for_newstr = None - get_funcptr_for_newunicode = None + def rewrite_assembler(self, cpu, operations, gcrefs_output_list): + # record all GCREFs too, because Boehm cannot see them and keep them + # alive if they end up as constants in the assembler + for op in operations: + self.record_constptrs(op, gcrefs_output_list) + return GcLLDescription.rewrite_assembler(self, cpu, operations, + gcrefs_output_list) # ____________________________________________________________ @@ -604,10 +649,13 @@ def malloc_basic(size, tid): type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1<' # - cache = {} descr4 = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Ptr(S)) assert 'GcPtrCallDescr' in descr4.repr_of_descr() # @@ -412,10 +413,10 @@ ARGS = [lltype.Float, lltype.Ptr(ARRAY)] RES = lltype.Float - def f(a, b): + def f2(a, b): return float(b[0]) + a - fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f) + fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f2) descr2 = get_call_descr(c0, ARGS, RES) a = lltype.malloc(ARRAY, 3) opaquea = lltype.cast_opaque_ptr(llmemory.GCREF, a) diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -247,12 +247,14 @@ self.record = [] def do_malloc_fixedsize_clear(self, RESTYPE, type_id, size, - has_finalizer, contains_weakptr): + has_finalizer, has_light_finalizer, + contains_weakptr): assert not contains_weakptr + assert not has_finalizer # in these tests + assert not has_light_finalizer # in these tests p = llmemory.raw_malloc(size) p = llmemory.cast_adr_to_ptr(p, RESTYPE) - flags = int(has_finalizer) << 16 - tid = llop.combine_ushort(lltype.Signed, type_id, flags) + tid = llop.combine_ushort(lltype.Signed, type_id, 0) self.record.append(("fixedsize", repr(size), tid, p)) return p diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -1,5 +1,5 @@ from pypy.rlib.debug import debug_start, debug_print, debug_stop -from pypy.jit.metainterp import history, compile +from pypy.jit.metainterp import history class AbstractCPU(object): @@ -213,6 +213,10 @@ def typedescrof(TYPE): raise NotImplementedError + @staticmethod + def interiorfielddescrof(A, fieldname): + raise NotImplementedError + # ---------- the backend-dependent operations ---------- # lltype specific operations diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -5,7 +5,7 @@ BoxInt, Box, BoxPtr, LoopToken, ConstInt, ConstPtr, - BoxObj, Const, + BoxObj, ConstObj, BoxFloat, ConstFloat) from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.typesystem import deref @@ -111,7 +111,7 @@ self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) res = self.cpu.get_latest_value_int(0) - assert res == 3 + assert res == 3 assert fail.identifier == 1 def test_compile_loop(self): @@ -127,7 +127,7 @@ ] inputargs = [i0] operations[2].setfailargs([i1]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -148,7 +148,7 @@ ] inputargs = [i0] operations[2].setfailargs([None, None, i1, None]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -372,7 +372,7 @@ for opnum, boxargs, retvalue in get_int_tests(): res = self.execute_operation(opnum, boxargs, 'int') assert res.value == retvalue - + def test_float_operations(self): from pypy.jit.metainterp.test.test_executor import get_float_tests for opnum, boxargs, rettype, retvalue in get_float_tests(self.cpu): @@ -438,7 +438,7 @@ def test_ovf_operations_reversed(self): self.test_ovf_operations(reversed=True) - + def test_bh_call(self): cpu = self.cpu # @@ -503,7 +503,7 @@ [funcbox, BoxInt(num), BoxInt(num)], 'int', descr=dyn_calldescr) assert res.value == 2 * num - + if cpu.supports_floats: def func(f0, f1, f2, f3, f4, f5, f6, i0, i1, f7, f8, f9): @@ -543,7 +543,7 @@ funcbox = self.get_funcbox(self.cpu, func_ptr) res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) - + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. @@ -615,7 +615,7 @@ res = self.execute_operation(rop.GETFIELD_GC, [t_box], 'int', descr=shortdescr) assert res.value == 1331 - + # u_box, U_box = self.alloc_instance(self.U) fielddescr2 = self.cpu.fielddescrof(self.S, 'next') @@ -695,7 +695,7 @@ def test_failing_guard_class(self): t_box, T_box = self.alloc_instance(self.T) - u_box, U_box = self.alloc_instance(self.U) + u_box, U_box = self.alloc_instance(self.U) null_box = self.null_instance() for opname, args in [(rop.GUARD_CLASS, [t_box, U_box]), (rop.GUARD_CLASS, [u_box, T_box]), @@ -787,7 +787,7 @@ r = self.execute_operation(rop.GETARRAYITEM_GC, [a_box, BoxInt(3)], 'int', descr=arraydescr) assert r.value == 160 - + # if isinstance(A, lltype.GcArray): A = lltype.Ptr(A) @@ -880,6 +880,73 @@ 'int', descr=arraydescr) assert r.value == 7441 + def test_array_of_structs(self): + TP = lltype.GcStruct('x') + ITEM = lltype.Struct('x', + ('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT), + ('k', lltype.Float), + ('p', lltype.Ptr(TP))) + a_box, A = self.alloc_array_of(ITEM, 15) + s_box, S = self.alloc_instance(TP) + kdescr = self.cpu.interiorfielddescrof(A, 'k') + pdescr = self.cpu.interiorfielddescrof(A, 'p') + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + boxfloat(1.5)], + 'void', descr=kdescr) + f = self.cpu.bh_getinteriorfield_gc_f(a_box.getref_base(), 3, kdescr) + assert longlong.getrealfloat(f) == 1.5 + self.cpu.bh_setinteriorfield_gc_f(a_box.getref_base(), 3, kdescr, longlong.getfloatstorage(2.5)) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'float', descr=kdescr) + assert r.getfloat() == 2.5 + # + NUMBER_FIELDS = [('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT)] + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + BoxInt(-15)], + 'void', descr=vdescr) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + i = self.cpu.bh_getinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr) + assert i == rffi.cast(lltype.Signed, rffi.cast(TYPE, -15)) + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.cpu.bh_setinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr, -25) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, + [a_box, BoxInt(3)], + 'int', descr=vdescr) + assert r.getint() == rffi.cast(lltype.Signed, rffi.cast(TYPE, -25)) + # + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(4), + s_box], + 'void', descr=pdescr) + r = self.cpu.bh_getinteriorfield_gc_r(a_box.getref_base(), 4, pdescr) + assert r == s_box.getref_base() + self.cpu.bh_setinteriorfield_gc_r(a_box.getref_base(), 3, pdescr, + s_box.getref_base()) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'ref', descr=pdescr) + assert r.getref_base() == s_box.getref_base() + def test_string_basic(self): s_box = self.alloc_string("hello\xfe") r = self.execute_operation(rop.STRLEN, [s_box], 'int') @@ -1402,7 +1469,7 @@ addr = llmemory.cast_ptr_to_adr(func_ptr) return ConstInt(heaptracker.adr2int(addr)) - + MY_VTABLE = rclass.OBJECT_VTABLE # for tests only S = lltype.GcForwardReference() @@ -1439,7 +1506,6 @@ return BoxPtr(lltype.nullptr(llmemory.GCREF.TO)) def alloc_array_of(self, ITEM, length): - cpu = self.cpu A = lltype.GcArray(ITEM) a = lltype.malloc(A, length) a_box = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, a)) @@ -1468,20 +1534,16 @@ return u''.join(u.chars) - def test_casts(self): - py.test.skip("xxx fix or kill") - from pypy.rpython.lltypesystem import lltype, llmemory - TP = lltype.GcStruct('x') - x = lltype.malloc(TP) - x = lltype.cast_opaque_ptr(llmemory.GCREF, x) + def test_cast_int_to_ptr(self): + res = self.execute_operation(rop.CAST_INT_TO_PTR, + [BoxInt(-17)], 'ref').value + assert lltype.cast_ptr_to_int(res) == -17 + + def test_cast_ptr_to_int(self): + x = lltype.cast_int_to_ptr(llmemory.GCREF, -19) res = self.execute_operation(rop.CAST_PTR_TO_INT, - [BoxPtr(x)], 'int').value - expected = self.cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(x)) - assert rffi.get_real_int(res) == rffi.get_real_int(expected) - res = self.execute_operation(rop.CAST_PTR_TO_INT, - [ConstPtr(x)], 'int').value - expected = self.cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(x)) - assert rffi.get_real_int(res) == rffi.get_real_int(expected) + [BoxPtr(x)], 'int').value + assert res == -19 def test_ooops_non_gc(self): x = lltype.malloc(lltype.Struct('x'), flavor='raw') @@ -2299,13 +2361,6 @@ # cpu.bh_strsetitem(x, 4, ord('/')) assert str.chars[4] == '/' - # -## x = cpu.bh_newstr(5) -## y = cpu.bh_cast_ptr_to_int(x) -## z = cpu.bh_cast_ptr_to_int(x) -## y = rffi.get_real_int(y) -## z = rffi.get_real_int(z) -## assert type(y) == type(z) == int and y == z def test_sorting_of_fields(self): S = self.S @@ -2329,7 +2384,7 @@ for opname, arg, res in ops: self.execute_operation(opname, [arg], 'void') assert self.guard_failed == res - + lltype.free(x, flavor='raw') def test_assembler_call(self): @@ -2409,7 +2464,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2500,7 +2555,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2951,4 +3006,4 @@ def alloc_unicode(self, unicode): py.test.skip("implement me") - + diff --git a/pypy/jit/backend/test/test_ll_random.py b/pypy/jit/backend/test/test_ll_random.py --- a/pypy/jit/backend/test/test_ll_random.py +++ b/pypy/jit/backend/test/test_ll_random.py @@ -28,16 +28,27 @@ fork.structure_types_and_vtables = self.structure_types_and_vtables return fork - def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct): + def _choose_ptr_vars(self, from_, type, array_of_structs): + ptrvars = [] + for i in range(len(from_)): + v, S = from_[i][:2] + if not isinstance(S, type): + continue + if ((isinstance(S, lltype.Array) and + isinstance(S.OF, lltype.Struct)) == array_of_structs): + ptrvars.append((v, S)) + return ptrvars + + def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct, + array_of_structs=False): while True: - ptrvars = [(v, S) for (v, S) in self.ptrvars - if isinstance(S, type)] + ptrvars = self._choose_ptr_vars(self.ptrvars, type, + array_of_structs) if ptrvars and r.random() < 0.8: v, S = r.choice(ptrvars) else: - prebuilt_ptr_consts = [(v, S) - for (v, S, _) in self.prebuilt_ptr_consts - if isinstance(S, type)] + prebuilt_ptr_consts = self._choose_ptr_vars( + self.prebuilt_ptr_consts, type, array_of_structs) if prebuilt_ptr_consts and r.random() < 0.7: v, S = r.choice(prebuilt_ptr_consts) else: @@ -48,7 +59,8 @@ has_vtable=must_have_vtable) else: # create a new constant array - p = self.get_random_array(r) + p = self.get_random_array(r, + must_be_array_of_structs=array_of_structs) S = lltype.typeOf(p).TO v = ConstPtr(lltype.cast_opaque_ptr(llmemory.GCREF, p)) self.prebuilt_ptr_consts.append((v, S, @@ -74,7 +86,8 @@ TYPE = lltype.Signed return TYPE - def get_random_structure_type(self, r, with_vtable=None, cache=True): + def get_random_structure_type(self, r, with_vtable=None, cache=True, + type=lltype.GcStruct): if cache and self.structure_types and r.random() < 0.5: return r.choice(self.structure_types) fields = [] @@ -85,7 +98,7 @@ for i in range(r.randrange(1, 5)): TYPE = self.get_random_primitive_type(r) fields.append(('f%d' % i, TYPE)) - S = lltype.GcStruct('S%d' % self.counter, *fields, **kwds) + S = type('S%d' % self.counter, *fields, **kwds) self.counter += 1 if cache: self.structure_types.append(S) @@ -125,17 +138,29 @@ setattr(p, fieldname, rffi.cast(TYPE, r.random_integer())) return p - def get_random_array_type(self, r): - TYPE = self.get_random_primitive_type(r) + def get_random_array_type(self, r, can_be_array_of_struct=False, + must_be_array_of_structs=False): + if ((can_be_array_of_struct and r.random() < 0.1) or + must_be_array_of_structs): + TYPE = self.get_random_structure_type(r, cache=False, + type=lltype.Struct) + else: + TYPE = self.get_random_primitive_type(r) return lltype.GcArray(TYPE) - def get_random_array(self, r): - A = self.get_random_array_type(r) + def get_random_array(self, r, must_be_array_of_structs=False): + A = self.get_random_array_type(r, + must_be_array_of_structs=must_be_array_of_structs) length = (r.random_integer() // 15) % 300 # length: between 0 and 299 # likely to be small p = lltype.malloc(A, length) - for i in range(length): - p[i] = rffi.cast(A.OF, r.random_integer()) + if isinstance(A.OF, lltype.Primitive): + for i in range(length): + p[i] = rffi.cast(A.OF, r.random_integer()) + else: + for i in range(length): + for fname, TP in A.OF._flds.iteritems(): + setattr(p[i], fname, rffi.cast(TP, r.random_integer())) return p def get_index(self, length, r): @@ -155,8 +180,16 @@ dic[fieldname] = getattr(p, fieldname) else: assert isinstance(S, lltype.Array) - for i in range(len(p)): - dic[i] = p[i] + if isinstance(S.OF, lltype.Struct): + for i in range(len(p)): + item = p[i] + s1 = {} + for fieldname in S.OF._names: + s1[fieldname] = getattr(item, fieldname) + dic[i] = s1 + else: + for i in range(len(p)): + dic[i] = p[i] return dic def print_loop_prebuilt(self, names, writevar, s): @@ -220,7 +253,7 @@ class GetFieldOperation(test_random.AbstractOperation): def field_descr(self, builder, r): - v, S = builder.get_structptr_var(r) + v, S = builder.get_structptr_var(r, ) names = S._names if names[0] == 'parent': names = names[1:] @@ -239,6 +272,28 @@ continue break +class GetInteriorFieldOperation(test_random.AbstractOperation): + def field_descr(self, builder, r): + v, A = builder.get_structptr_var(r, type=lltype.Array, + array_of_structs=True) + array = v.getref(lltype.Ptr(A)) + v_index = builder.get_index(len(array), r) + name = r.choice(A.OF._names) + descr = builder.cpu.interiorfielddescrof(A, name) + descr._random_info = 'cpu.interiorfielddescrof(%s, %r)' % (A.OF._name, + name) + TYPE = getattr(A.OF, name) + return v, v_index, descr, TYPE + + def produce_into(self, builder, r): + while True: + try: + v, v_index, descr, _ = self.field_descr(builder, r) + self.put(builder, [v, v_index], descr) + except lltype.UninitializedMemoryAccess: + continue + break + class SetFieldOperation(GetFieldOperation): def produce_into(self, builder, r): v, descr, TYPE = self.field_descr(builder, r) @@ -251,6 +306,18 @@ break builder.do(self.opnum, [v, w], descr) +class SetInteriorFieldOperation(GetInteriorFieldOperation): + def produce_into(self, builder, r): + v, v_index, descr, TYPE = self.field_descr(builder, r) + while True: + if r.random() < 0.3: + w = ConstInt(r.random_integer()) + else: + w = r.choice(builder.intvars) + if rffi.cast(lltype.Signed, rffi.cast(TYPE, w.value)) == w.value: + break + builder.do(self.opnum, [v, v_index, w], descr) + class NewOperation(test_random.AbstractOperation): def size_descr(self, builder, S): descr = builder.cpu.sizeof(S) @@ -306,7 +373,7 @@ class NewArrayOperation(ArrayOperation): def produce_into(self, builder, r): - A = builder.get_random_array_type(r) + A = builder.get_random_array_type(r, can_be_array_of_struct=True) v_size = builder.get_index(300, r) v_ptr = builder.do(self.opnum, [v_size], self.array_descr(builder, A)) builder.ptrvars.append((v_ptr, A)) @@ -586,7 +653,9 @@ for i in range(4): # make more common OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) + OPERATIONS.append(GetInteriorFieldOperation(rop.GETINTERIORFIELD_GC)) OPERATIONS.append(SetFieldOperation(rop.SETFIELD_GC)) + OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) OPERATIONS.append(NewOperation(rop.NEW)) OPERATIONS.append(NewOperation(rop.NEW_WITH_VTABLE)) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -595,6 +595,10 @@ for name, value in fields.items(): if isinstance(name, str): setattr(container, name, value) + elif isinstance(value, dict): + item = container.getitem(name) + for key1, value1 in value.items(): + setattr(item, key1, value1) else: container.setitem(name, value) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1,7 +1,7 @@ import sys, os from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper -from pypy.jit.metainterp.history import Const, Box, BoxInt, BoxPtr, BoxFloat +from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT, LoopToken) from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory @@ -36,7 +36,6 @@ from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout -from pypy.jit.metainterp.history import ConstInt, BoxInt from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.codewriter import longlong @@ -729,8 +728,8 @@ # Also, make sure this is consistent with FRAME_FIXED_SIZE. self.mc.PUSH_r(ebp.value) self.mc.MOV_rr(ebp.value, esp.value) - for regloc in self.cpu.CALLEE_SAVE_REGISTERS: - self.mc.PUSH_r(regloc.value) + for loc in self.cpu.CALLEE_SAVE_REGISTERS: + self.mc.PUSH_r(loc.value) gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: @@ -994,7 +993,7 @@ effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex genop_llong_list[oopspecindex](self, op, arglocs, resloc) - + def regalloc_perform_math(self, op, arglocs, resloc): effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex @@ -1277,8 +1276,8 @@ genop_int_ne = _cmpop("NE", "NE") genop_int_gt = _cmpop("G", "L") genop_int_ge = _cmpop("GE", "LE") - genop_ptr_eq = genop_int_eq - genop_ptr_ne = genop_int_ne + genop_ptr_eq = genop_instance_ptr_eq = genop_int_eq + genop_ptr_ne = genop_instance_ptr_ne = genop_int_ne genop_float_lt = _cmpop_float('B', 'A') genop_float_le = _cmpop_float('BE', 'AE') @@ -1298,8 +1297,8 @@ genop_guard_int_ne = _cmpop_guard("NE", "NE", "E", "E") genop_guard_int_gt = _cmpop_guard("G", "L", "LE", "GE") genop_guard_int_ge = _cmpop_guard("GE", "LE", "L", "G") - genop_guard_ptr_eq = genop_guard_int_eq - genop_guard_ptr_ne = genop_guard_int_ne + genop_guard_ptr_eq = genop_guard_instance_ptr_eq = genop_guard_int_eq + genop_guard_ptr_ne = genop_guard_instance_ptr_ne = genop_guard_int_ne genop_guard_uint_gt = _cmpop_guard("A", "B", "BE", "AE") genop_guard_uint_lt = _cmpop_guard("B", "A", "AE", "BE") @@ -1311,7 +1310,7 @@ genop_guard_float_eq = _cmpop_guard_float("E", "E", "NE","NE") genop_guard_float_gt = _cmpop_guard_float("A", "B", "BE","AE") genop_guard_float_ge = _cmpop_guard_float("AE","BE", "B", "A") - + def genop_math_sqrt(self, op, arglocs, resloc): self.mc.SQRTSD(arglocs[0], resloc) @@ -1387,7 +1386,8 @@ def genop_same_as(self, op, arglocs, resloc): self.mov(arglocs[0], resloc) - #genop_cast_ptr_to_int = genop_same_as + genop_cast_ptr_to_int = genop_same_as + genop_cast_int_to_ptr = genop_same_as def genop_int_mod(self, op, arglocs, resloc): if IS_X86_32: @@ -1596,12 +1596,44 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, + base_loc, ofs_loc): + assert isinstance(itemsize_loc, ImmedLoc) + if isinstance(index_loc, ImmedLoc): + temp_loc = imm(index_loc.value * itemsize_loc.value) + else: + # XXX should not use IMUL in most cases + assert isinstance(temp_loc, RegLoc) + assert isinstance(index_loc, RegLoc) + assert not temp_loc.is_xmm + self.mc.IMUL_rri(temp_loc.value, index_loc.value, + itemsize_loc.value) + assert isinstance(ofs_loc, ImmedLoc) + return AddressLoc(base_loc, temp_loc, 0, ofs_loc.value) + + def genop_getinteriorfield_gc(self, op, arglocs, resloc): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs assert isinstance(size_loc, ImmedLoc) dest_addr = AddressLoc(base_loc, ofs_loc) self.save_into_mem(dest_addr, value_loc, size_loc) + def genop_discard_setinteriorfield_gc(self, op, arglocs): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + dest_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -7,7 +7,7 @@ ResOperation, BoxPtr, ConstFloat, BoxFloat, LoopToken, INT, REF, FLOAT) from pypy.jit.backend.x86.regloc import * -from pypy.rpython.lltypesystem import lltype, ll2ctypes, rffi, rstr +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated from pypy.rlib import rgc from pypy.jit.backend.llsupport import symbolic @@ -17,11 +17,12 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr from pypy.jit.backend.llsupport.descr import BaseCallDescr, BaseSizeDescr +from pypy.jit.backend.llsupport.descr import InteriorFieldDescr from pypy.jit.backend.llsupport.regalloc import FrameManager, RegisterManager,\ TempBox from pypy.jit.backend.x86.arch import WORD, FRAME_FIXED_SIZE from pypy.jit.backend.x86.arch import IS_X86_32, IS_X86_64, MY_COPY_OF_REGS -from pypy.rlib.rarithmetic import r_longlong, r_uint +from pypy.rlib.rarithmetic import r_longlong class X86RegisterManager(RegisterManager): @@ -433,7 +434,7 @@ if self.can_merge_with_next_guard(op, i, operations): oplist_with_guard[op.getopnum()](self, op, operations[i + 1]) i += 1 - elif not we_are_translated() and op.getopnum() == -124: + elif not we_are_translated() and op.getopnum() == -124: self._consider_force_spill(op) else: oplist[op.getopnum()](self, op) @@ -650,8 +651,8 @@ consider_uint_lt = _consider_compop consider_uint_le = _consider_compop consider_uint_ge = _consider_compop - consider_ptr_eq = _consider_compop - consider_ptr_ne = _consider_compop + consider_ptr_eq = consider_instance_ptr_eq = _consider_compop + consider_ptr_ne = consider_instance_ptr_ne = _consider_compop def _consider_float_op(self, op): loc1 = self.xrm.loc(op.getarg(1)) @@ -815,7 +816,7 @@ save_all_regs = guard_not_forced_op is not None self.xrm.before_call(force_store, save_all_regs=save_all_regs) if not save_all_regs: - gcrootmap = gc_ll_descr = self.assembler.cpu.gc_ll_descr.gcrootmap + gcrootmap = self.assembler.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: save_all_regs = 2 self.rm.before_call(force_store, save_all_regs=save_all_regs) @@ -972,74 +973,27 @@ return self._call(op, arglocs) def consider_newstr(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newstr is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR, self.translate_support_code) - assert itemsize == 1 - return self._malloc_varsize(ofs_items, ofs, 0, op.getarg(0), - op.result) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_newunicode(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newunicode is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - scale = self._get_unicode_item_scale() - return self._malloc_varsize(ofs_items, ofs, scale, op.getarg(0), - op.result) - - def _malloc_varsize(self, ofs_items, ofs_length, scale, v, res_v): - # XXX kill this function at some point - if isinstance(v, Box): - loc = self.rm.make_sure_var_in_reg(v, [v]) - tempbox = TempBox() - other_loc = self.rm.force_allocate_reg(tempbox, [v]) - self.assembler.load_effective_addr(loc, ofs_items,scale, other_loc) - else: - tempbox = None - other_loc = imm(ofs_items + (v.getint() << scale)) - self._call(ResOperation(rop.NEW, [], res_v), - [other_loc], [v]) - loc = self.rm.make_sure_var_in_reg(v, [res_v]) - assert self.loc(res_v) == eax - # now we have to reload length to some reasonable place - self.rm.possibly_free_var(v) - if tempbox is not None: - self.rm.possibly_free_var(tempbox) - self.PerformDiscard(ResOperation(rop.SETFIELD_GC, [None, None], None), - [eax, imm(ofs_length), imm(WORD), loc]) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_new_array(self, op): gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newarray is not None: - # framework GC - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), - num_elem): - self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) - return - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - arglocs = [imm(x) for x in args] - arglocs.append(self.loc(box_num_elem)) - self._call(op, arglocs) - return - # boehm GC (XXX kill the following code at some point) - itemsize, basesize, ofs_length, _, _ = ( - self._unpack_arraydescr(op.getdescr())) - scale_of_field = _get_scale(itemsize) - self._malloc_varsize(basesize, ofs_length, scale_of_field, - op.getarg(0), op.result) + box_num_elem = op.getarg(0) + if isinstance(box_num_elem, ConstInt): + num_elem = box_num_elem.value + if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), + num_elem): + self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) + return + args = self.assembler.cpu.gc_ll_descr.args_for_new_array( + op.getdescr()) + arglocs = [imm(x) for x in args] + arglocs.append(self.loc(box_num_elem)) + self._call(op, arglocs) def _unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) @@ -1058,6 +1012,16 @@ sign = fielddescr.is_field_signed() return imm(ofs), imm(size), ptr, sign + def _unpack_interiorfielddescr(self, descr): + assert isinstance(descr, InteriorFieldDescr) + arraydescr = descr.arraydescr + ofs = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + fieldsize = descr.fielddescr.get_field_size(self.translate_support_code) + sign = descr.fielddescr.is_field_signed() + ofs += descr.fielddescr.offset + return imm(ofs), imm(itemsize), imm(fieldsize), sign + def consider_setfield_gc(self, op): ofs_loc, size_loc, _, _ = self._unpack_fielddescr(op.getdescr()) assert isinstance(size_loc, ImmedLoc) @@ -1074,6 +1038,35 @@ consider_setfield_raw = consider_setfield_gc + def consider_setinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, _ = t + args = op.getarglist() + if fieldsize.value == 1: + need_lower_byte = True + else: + need_lower_byte = False + box_base, box_index, box_value = args + base_loc = self.rm.make_sure_var_in_reg(box_base, args) + index_loc = self.rm.make_sure_var_in_reg(box_index, args) + value_loc = self.make_sure_var_in_reg(box_value, args, + need_lower_byte=need_lower_byte) + # If 'index_loc' is not an immediate, then we need a 'temp_loc' that + # is a register whose value will be destroyed. It's fine to destroy + # the same register as 'index_loc', but not the other ones. + self.rm.possibly_free_var(box_index) + if not isinstance(index_loc, ImmedLoc): + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [box_base, + box_value]) + self.rm.possibly_free_var(tempvar) + else: + temp_loc = None + self.rm.possibly_free_var(box_base) + self.possibly_free_var(box_value) + self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, value_loc]) + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1135,6 +1128,36 @@ consider_getarrayitem_raw = consider_getarrayitem_gc consider_getarrayitem_gc_pure = consider_getarrayitem_gc + def consider_getinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + if sign: + sign_loc = imm1 + else: + sign_loc = imm0 + args = op.getarglist() + base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) + index_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) + # 'base' and 'index' are put in two registers (or one if 'index' + # is an immediate). 'result' can be in the same register as + # 'index' but must be in a different register than 'base'. + self.rm.possibly_free_var(op.getarg(1)) + result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + assert isinstance(result_loc, RegLoc) + # two cases: 1) if result_loc is a normal register, use it as temp_loc + if not result_loc.is_xmm: + temp_loc = result_loc + else: + # 2) if result_loc is an xmm register, we (likely) need another + # temp_loc that is a normal register. It can be in the same + # register as 'index' but not 'base'. + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [op.getarg(0)]) + self.rm.possibly_free_var(tempvar) + self.rm.possibly_free_var(op.getarg(0)) + self.Perform(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, sign_loc], result_loc) + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1152,7 +1175,8 @@ self.possibly_free_var(op.getarg(0)) resloc = self.force_allocate_reg(op.result) self.Perform(op, [argloc], resloc) - #consider_cast_ptr_to_int = consider_same_as + consider_cast_ptr_to_int = consider_same_as + consider_cast_int_to_ptr = consider_same_as def consider_strlen(self, op): args = op.getarglist() @@ -1240,7 +1264,6 @@ self.rm.possibly_free_var(srcaddr_box) def _gen_address_inside_string(self, baseloc, ofsloc, resloc, is_unicode): - cpu = self.assembler.cpu if is_unicode: ofs_items, _, _ = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) @@ -1299,7 +1322,7 @@ tmpreg = X86RegisterManager.all_regs[0] tmploc = self.rm.force_allocate_reg(box, selected_reg=tmpreg) xmmtmp = X86XMMRegisterManager.all_regs[0] - xmmtmploc = self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) + self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) # Part about non-floats # XXX we don't need a copy, we only just the original list src_locations1 = [self.loc(op.getarg(i)) for i in range(op.numargs()) @@ -1379,7 +1402,7 @@ return lambda self, op: fn(self, op, None) def is_comparison_or_ovf_op(opnum): - from pypy.jit.metainterp.resoperation import opclasses, AbstractResOp + from pypy.jit.metainterp.resoperation import opclasses cls = opclasses[opnum] # hack hack: in theory they are instance method, but they don't use # any instance field, we can use a fake object diff --git a/pypy/jit/backend/x86/test/test_del.py b/pypy/jit/backend/x86/test/test_del.py --- a/pypy/jit/backend/x86/test/test_del.py +++ b/pypy/jit/backend/x86/test/test_del.py @@ -1,5 +1,4 @@ -import py from pypy.jit.backend.x86.test.test_basic import Jit386Mixin from pypy.jit.metainterp.test.test_del import DelTests diff --git a/pypy/jit/backend/x86/test/test_dict.py b/pypy/jit/backend/x86/test/test_dict.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_dict.py @@ -0,0 +1,9 @@ + +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin +from pypy.jit.metainterp.test.test_dict import DictTests + + +class TestDict(Jit386Mixin, DictTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_dict.py + pass diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -31,7 +31,7 @@ # for the individual tests see # ====> ../../test/runner_test.py - + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -69,22 +69,16 @@ def test_allocations(self): from pypy.rpython.lltypesystem import rstr - + allocs = [None] all = [] + orig_new = self.cpu.gc_ll_descr.funcptr_for_new def f(size): allocs.insert(0, size) - buf = ctypes.create_string_buffer(size) - all.append(buf) - return ctypes.cast(buf, ctypes.c_void_p).value - func = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(f) - addr = ctypes.cast(func, ctypes.c_void_p).value - # ctypes produces an unsigned value. We need it to be signed for, eg, - # relative addressing to work properly. - addr = rffi.cast(lltype.Signed, addr) - + return orig_new(size) + self.cpu.assembler.setup_once() - self.cpu.assembler.malloc_func_addr = addr + self.cpu.gc_ll_descr.funcptr_for_new = f ofs = symbolic.get_field_token(rstr.STR, 'chars', False)[0] res = self.execute_operation(rop.NEWSTR, [ConstInt(7)], 'ref') @@ -108,7 +102,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [ConstInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 # ------------------------------------------------------------ @@ -116,7 +110,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [BoxInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 def test_stringitems(self): @@ -146,7 +140,7 @@ ConstInt(2), BoxInt(38)], 'void', descr) assert resbuf[itemsofs/WORD + 2] == 38 - + self.execute_operation(rop.SETARRAYITEM_GC, [res, BoxInt(3), BoxInt(42)], 'void', descr) @@ -167,7 +161,7 @@ BoxInt(2)], 'int', descr) assert r.value == 38 - + r = self.execute_operation(rop.GETARRAYITEM_GC, [res, BoxInt(3)], 'int', descr) assert r.value == 42 @@ -226,7 +220,7 @@ self.execute_operation(rop.SETFIELD_GC, [res, BoxInt(1234)], 'void', ofs_i) i = self.execute_operation(rop.GETFIELD_GC, [res], 'int', ofs_i) assert i.value == 1234 - + #u = self.execute_operation(rop.GETFIELD_GC, [res, ofs_u], 'int') #assert u.value == 5 self.execute_operation(rop.SETFIELD_GC, [res, ConstInt(1)], 'void', @@ -299,7 +293,7 @@ else: assert result != execute(self.cpu, None, op, None, b).value - + def test_stuff_followed_by_guard(self): boxes = [(BoxInt(1), BoxInt(0)), @@ -523,7 +517,7 @@ def test_debugger_on(self): from pypy.tool.logparser import parse_log_file, extract_category from pypy.rlib import debug - + loop = """ [i0] debug_merge_point('xyz', 0) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -9,7 +9,12 @@ """ import autopath -import operator, sys, os, re, py, new +import new +import operator +import py +import re +import sys +import subprocess from bisect import bisect_left # don't use pypy.tool.udir here to avoid removing old usessions which @@ -44,14 +49,16 @@ f = open(tmpfile, 'wb') f.write(data) f.close() - g = os.popen(objdump % { + p = subprocess.Popen(objdump % { 'file': tmpfile, 'origin': originaddr, 'backend': objdump_backend_option[backend_name], - }, 'r') - result = g.readlines() - g.close() - lines = result[6:] # drop some objdump cruft + }, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + stdout, stderr = p.communicate() + assert not p.returncode, ('Encountered an error running objdump: %s' % + stderr) + # drop some objdump cruft + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -85,8 +92,12 @@ # print 'loading symbols from %s...' % (filename,) symbols = {} - g = os.popen(symbollister % filename, "r") - for line in g: + p = subprocess.Popen(symbollister % filename, shell=True, + stdout=subprocess.PIPE, stderr=subprocess.PIPE) + stdout, stderr = p.communicate() + assert not p.returncode, ('Encountered an error running nm: %s' % + stderr) + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) @@ -94,7 +105,6 @@ if name.startswith('pypy_g_'): name = '\xb7' + name[7:] symbols[addr] = name - g.close() print '%d symbols found' % (len(symbols),) return symbols diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -52,9 +52,11 @@ newoperations = [] # def do_rename(var, var_or_const): + if var.concretetype is lltype.Void: + renamings[var] = Constant(None, lltype.Void) + return renamings[var] = var_or_const - if (isinstance(var_or_const, Constant) - and var.concretetype != lltype.Void): + if isinstance(var_or_const, Constant): value = var_or_const.value value = lltype._cast_whatever(var.concretetype, value) renamings_constants[var] = Constant(value, var.concretetype) @@ -441,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -455,6 +459,23 @@ # the special return value None forces op.result to be considered # equal to op.args[0] return [op0, op1, None] + if (hints.get('promote_string') and + op.args[0].concretetype is not lltype.Void): + S = lltype.Ptr(rstr.STR) + assert op.args[0].concretetype == S + self._register_extra_helper(EffectInfo.OS_STREQ_NONNULL, + "str.eq_nonnull", + [S, S], + lltype.Signed, + EffectInfo.EF_ELIDABLE_CANNOT_RAISE) + descr, p = self.callcontrol.callinfocollection.callinfo_for_oopspec( + EffectInfo.OS_STREQ_NONNULL) + # XXX this is fairly ugly way of creating a constant, + # however, callinfocollection has no better interface + c = Constant(p.adr.ptr, lltype.typeOf(p.adr.ptr)) + op1 = SpaceOperation('str_guard_value', [op.args[0], c, descr], + op.result) + return [SpaceOperation('-live-', [], None), op1, None] else: log.WARNING('ignoring hint %r at %r' % (hints, self.graph)) @@ -718,29 +739,54 @@ return SpaceOperation(opname, [op.args[0]], op.result) def rewrite_op_getinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 3 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strgetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodegetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodegetitem" - return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + v_inst, v_index, c_field = op.args + if op.result.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + args = [v_inst, v_index, descr] + kind = getkind(op.result.concretetype)[0] + return SpaceOperation('getinteriorfield_gc_%s' % kind, args, + op.result) def rewrite_op_setinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 4 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strsetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodesetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodesetitem" - return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], - op.result) + v_inst, v_index, c_field, v_value = op.args + if v_value.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + kind = getkind(v_value.concretetype)[0] + args = [v_inst, v_index, v_value, descr] + return SpaceOperation('setinteriorfield_gc_%s' % kind, args, + op.result) def _rewrite_equality(self, op, opname): arg0, arg1 = op.args @@ -754,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -771,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -783,8 +842,11 @@ def rewrite_op_cast_ptr_to_int(self, op): if self._is_gc(op.args[0]): - #return op - raise NotImplementedError("cast_ptr_to_int") + return op + + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] def rewrite_op_force_cast(self, op): v_arg = op.args[0] @@ -805,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -837,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1054,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), @@ -1526,6 +1622,10 @@ def rewrite_op_jit_force_virtual(self, op): return self._do_builtin_call(op) + def rewrite_op_jit_is_virtual(self, op): + raise Exception, ( + "'vref.virtual' should not be used from jit-visible code") + def rewrite_op_jit_force_virtualizable(self, op): # this one is for virtualizables vinfo = self.get_vinfo(op.args[0]) diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -13,7 +13,6 @@ from pypy.translator.simplify import get_funcobj from pypy.translator.unsimplify import split_block from pypy.objspace.flow.model import Constant -from pypy import conftest from pypy.translator.translator import TranslationContext from pypy.annotation.policy import AnnotatorPolicy from pypy.annotation import model as annmodel @@ -38,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -48,15 +49,13 @@ a.build_types(func, argtypes, main_entry_point=True) rtyper = t.buildrtyper(type_system = type_system) rtyper.specialize() - if inline: - auto_inlining(t, threshold=inline) + #if inline: + # auto_inlining(t, threshold=inline) if backendoptimize: from pypy.translator.backendopt.all import backend_optimizations backend_optimizations(t, inline_threshold=inline or 0, remove_asserts=True, really_remove_asserts=True) - #if conftest.option.view: - # t.view() return rtyper def getgraph(func, values): @@ -232,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ @@ -456,6 +466,8 @@ return LLtypeHelpers._dictnext_items(lltype.Ptr(RES), iter) _ll_1_dictiter_nextitems.need_result_type = True + _ll_1_dict_resize = ll_rdict.ll_dict_resize + # ---------- strings and unicode ---------- _ll_1_str_str2unicode = ll_rstr.LLHelpers.ll_str2unicode diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import lltype, rclass, rstr from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -900,6 +901,67 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): from pypy.rpython.lltypesystem import rffi diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -1,14 +1,27 @@ -import py import random +try: + from itertools import product +except ImportError: + # Python 2.5, this is taken from the CPython docs, but simplified. + def product(*args): + # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy + # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111 + pools = map(tuple, args) + result = [[]] + for pool in pools: + result = [x+[y] for x in result for y in pool] + for prod in result: + yield tuple(prod) + from pypy.objspace.flow.model import FunctionGraph, Block, Link from pypy.objspace.flow.model import SpaceOperation, Variable, Constant -from pypy.jit.codewriter.jtransform import Transformer -from pypy.jit.metainterp.history import getkind -from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr, rlist +from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr from pypy.rpython.lltypesystem.module import ll_math from pypy.translator.unsimplify import varoftype from pypy.jit.codewriter import heaptracker, effectinfo from pypy.jit.codewriter.flatten import ListOfKind +from pypy.jit.codewriter.jtransform import Transformer +from pypy.jit.metainterp.history import getkind def const(x): return Constant(x, lltype.typeOf(x)) @@ -23,6 +36,8 @@ return ('calldescr', FUNC, ARGS, RESULT) def fielddescrof(self, STRUCT, name): return ('fielddescr', STRUCT, name) + def interiorfielddescrof(self, ARRAY, name): + return ('interiorfielddescr', ARRAY, name) def arraydescrof(self, ARRAY): return FakeDescr(('arraydescr', ARRAY)) def sizeof(self, STRUCT): @@ -85,6 +100,12 @@ if i == oopspecindex: return True return False + def callinfo_for_oopspec(self, oopspecindex): + assert oopspecindex == effectinfo.EffectInfo.OS_STREQ_NONNULL + class c: + class adr: + ptr = 1 + return ('calldescr', c) class FakeBuiltinCallControl: def __init__(self): @@ -105,6 +126,7 @@ EI.OS_STR2UNICODE:([PSTR], PUNICODE), EI.OS_STR_CONCAT: ([PSTR, PSTR], PSTR), EI.OS_STR_SLICE: ([PSTR, INT, INT], PSTR), + EI.OS_STREQ_NONNULL: ([PSTR, PSTR], INT), EI.OS_UNI_CONCAT: ([PUNICODE, PUNICODE], PUNICODE), EI.OS_UNI_SLICE: ([PUNICODE, INT, INT], PUNICODE), EI.OS_UNI_EQUAL: ([PUNICODE, PUNICODE], lltype.Bool), @@ -254,26 +276,35 @@ assert op1.result is None def test_calls(): - for RESTYPE in [lltype.Signed, rclass.OBJECTPTR, - lltype.Float, lltype.Void]: - for with_void in [False, True]: - for with_i in [False, True]: - for with_r in [False, True]: - for with_f in [False, True]: - ARGS = [] - if with_void: ARGS += [lltype.Void, lltype.Void] - if with_i: ARGS += [lltype.Signed, lltype.Char] - if with_r: ARGS += [rclass.OBJECTPTR, lltype.Ptr(rstr.STR)] - if with_f: ARGS += [lltype.Float, lltype.Float] - random.shuffle(ARGS) - if RESTYPE == lltype.Float: with_f = True - if with_f: expectedkind = 'irf' # all kinds - elif with_i: expectedkind = 'ir' # integers and references - else: expectedkind = 'r' # only references - yield residual_call_test, ARGS, RESTYPE, expectedkind - yield direct_call_test, ARGS, RESTYPE, expectedkind - yield indirect_residual_call_test, ARGS, RESTYPE, expectedkind - yield indirect_regular_call_test, ARGS, RESTYPE, expectedkind + for RESTYPE, with_void, with_i, with_r, with_f in product( + [lltype.Signed, rclass.OBJECTPTR, lltype.Float, lltype.Void], + [False, True], + [False, True], + [False, True], + [False, True], + ): + ARGS = [] + if with_void: + ARGS += [lltype.Void, lltype.Void] + if with_i: + ARGS += [lltype.Signed, lltype.Char] + if with_r: + ARGS += [rclass.OBJECTPTR, lltype.Ptr(rstr.STR)] + if with_f: + ARGS += [lltype.Float, lltype.Float] + random.shuffle(ARGS) + if RESTYPE == lltype.Float: + with_f = True + if with_f: + expectedkind = 'irf' # all kinds + elif with_i: + expectedkind = 'ir' # integers and references + else: + expectedkind = 'r' # only references + yield residual_call_test, ARGS, RESTYPE, expectedkind + yield direct_call_test, ARGS, RESTYPE, expectedkind + yield indirect_residual_call_test, ARGS, RESTYPE, expectedkind + yield indirect_regular_call_test, ARGS, RESTYPE, expectedkind def get_direct_call_op(argtypes, restype): FUNC = lltype.FuncType(argtypes, restype) @@ -509,7 +540,7 @@ def test_rename_on_links(): v1 = Variable() - v2 = Variable() + v2 = Variable(); v2.concretetype = llmemory.Address v3 = Variable() block = Block([v1]) block.operations = [SpaceOperation('cast_pointer', [v1], v2)] @@ -545,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -567,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -646,6 +702,22 @@ assert op1.args == [v, v_index] assert op1.result == v_result +def test_dict_getinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_result = varoftype(lltype.Signed) + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + v_result) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'getinteriorfield_gc_i' + assert op1.args == [v, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + Constant(None, lltype.Void)) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1 is None + def test_str_setinteriorfield(): v = varoftype(lltype.Ptr(rstr.STR)) v_index = varoftype(lltype.Signed) @@ -672,6 +744,23 @@ assert op1.args == [v, v_index, v_newchr] assert op1.result == v_void +def test_dict_setinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_void = varoftype(lltype.Void) + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + i], + v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'setinteriorfield_gc_i' + assert op1.args == [v, i, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + v_void], v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert not op1 + def test_promote_1(): v1 = varoftype(lltype.Signed) v2 = varoftype(lltype.Signed) @@ -821,6 +910,21 @@ assert op1.args[2] == ListOfKind('ref', [v1, v2]) assert op1.result == v3 +def test_str_promote(): + PSTR = lltype.Ptr(rstr.STR) + v1 = varoftype(PSTR) + v2 = varoftype(PSTR) + op = SpaceOperation('hint', + [v1, Constant({'promote_string': True}, lltype.Void)], + v2) + tr = Transformer(FakeCPU(), FakeBuiltinCallControl()) + op0, op1, _ = tr.rewrite_operation(op) + assert op1.opname == 'str_guard_value' + assert op1.args[0] == v1 + assert op1.args[2] == 'calldescr' + assert op1.result == v2 + assert op0.opname == '-live-' + def test_unicode_concat(): # test that the oopspec is present and correctly transformed PSTR = lltype.Ptr(rstr.UNICODE) @@ -1024,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -2,11 +2,10 @@ from pypy.rlib.rtimer import read_timestamp from pypy.rlib.rarithmetic import intmask, LONG_BIT, r_uint, ovfcheck from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.debug import debug_start, debug_stop +from pypy.rlib.debug import debug_start, debug_stop, ll_assert from pypy.rlib.debug import make_sure_not_resized from pypy.rpython.lltypesystem import lltype, llmemory, rclass from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rpython.llinterp import LLException from pypy.jit.codewriter.jitcode import JitCode, SwitchDictDescr from pypy.jit.codewriter import heaptracker, longlong from pypy.jit.metainterp.jitexc import JitException, get_llexception, reraise @@ -500,9 +499,25 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b + @arguments("r", returns="i") + def bhimpl_cast_ptr_to_int(a): + i = lltype.cast_ptr_to_int(a) + ll_assert((i & 1) == 1, "bhimpl_cast_ptr_to_int: not an odd int") + return i + @arguments("i", returns="r") + def bhimpl_cast_int_to_ptr(i): + ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") + return lltype.cast_int_to_ptr(llmemory.GCREF, i) + + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass @arguments("i", returns="i") def bhimpl_int_copy(a): @@ -523,6 +538,9 @@ @arguments("f") def bhimpl_float_guard_value(a): pass + @arguments("r", "i", "d", returns="r") + def bhimpl_str_guard_value(a, i, d): + return a @arguments("self", "i") def bhimpl_int_push(self, a): @@ -619,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") @@ -1142,6 +1163,26 @@ array = cpu.bh_getfield_gc_r(vable, fdescr) return cpu.bh_arraylen_gc(adescr, array) + @arguments("cpu", "r", "i", "d", returns="i") + def bhimpl_getinteriorfield_gc_i(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_i(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="r") + def bhimpl_getinteriorfield_gc_r(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_r(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="f") + def bhimpl_getinteriorfield_gc_f(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_f(array, index, descr) + + @arguments("cpu", "r", "i", "d", "i") + def bhimpl_setinteriorfield_gc_i(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_i(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "r") + def bhimpl_setinteriorfield_gc_r(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_r(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "f") + def bhimpl_setinteriorfield_gc_f(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_f(array, index, descr, value) + @arguments("cpu", "r", "d", returns="i") def bhimpl_getfield_gc_i(cpu, struct, fielddescr): return cpu.bh_getfield_gc_i(struct, fielddescr) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -1,11 +1,8 @@ """This implements pyjitpl's execution of operations. """ -import py -from pypy.rpython.lltypesystem import lltype, llmemory, rstr -from pypy.rpython.ootypesystem import ootype -from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rlib.rarithmetic import ovfcheck, r_uint, intmask, r_longlong +from pypy.rpython.lltypesystem import lltype, rstr +from pypy.rlib.rarithmetic import ovfcheck, r_longlong from pypy.rlib.rtimer import read_timestamp from pypy.rlib.unroll import unrolling_iterable from pypy.jit.metainterp.history import BoxInt, BoxPtr, BoxFloat, check_descr @@ -123,6 +120,29 @@ else: cpu.bh_setarrayitem_raw_i(arraydescr, array, index, itembox.getint()) +def do_getinteriorfield_gc(cpu, _, arraybox, indexbox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + return BoxPtr(cpu.bh_getinteriorfield_gc_r(array, index, descr)) + elif descr.is_float_field(): + return BoxFloat(cpu.bh_getinteriorfield_gc_f(array, index, descr)) + else: + return BoxInt(cpu.bh_getinteriorfield_gc_i(array, index, descr)) + +def do_setinteriorfield_gc(cpu, _, arraybox, indexbox, valuebox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + cpu.bh_setinteriorfield_gc_r(array, index, descr, + valuebox.getref_base()) + elif descr.is_float_field(): + cpu.bh_setinteriorfield_gc_f(array, index, descr, + valuebox.getfloatstorage()) + else: + cpu.bh_setinteriorfield_gc_i(array, index, descr, + valuebox.getint()) + def do_getfield_gc(cpu, _, structbox, fielddescr): struct = structbox.getref_base() if fielddescr.is_pointer_field(): diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,8 +12,8 @@ def get_display_text(self): return None -def display_loops(loops, errmsg=None, highlight_loops=()): - graphs = [(loop, loop in highlight_loops) for loop in loops] +def display_loops(loops, errmsg=None, highlight_loops={}): + graphs = [(loop, highlight_loops.get(loop, 0)) for loop in loops] for graph, highlight in graphs: for op in graph.get_operations(): if is_interesting_guard(op): @@ -65,8 +65,7 @@ def add_graph(self, graph, highlight=False): graphindex = len(self.graphs) self.graphs.append(graph) - if highlight: - self.highlight_graphs[graph] = True + self.highlight_graphs[graph] = highlight for i, op in enumerate(graph.get_operations()): self.all_operations[op] = graphindex, i @@ -126,10 +125,13 @@ self.dotgen.emit('subgraph cluster%d {' % graphindex) label = graph.get_display_text() if label is not None: - if self.highlight_graphs.get(graph): - fillcolor = '#f084c2' + colorindex = self.highlight_graphs.get(graph, 0) + if colorindex == 1: + fillcolor = '#f084c2' # highlighted graph + elif colorindex == 2: + fillcolor = '#808080' # invalidated graph else: - fillcolor = '#84f0c2' + fillcolor = '#84f0c2' # normal color self.dotgen.emit_node(graphname, shape="octagon", label=label, fillcolor=fillcolor) self.pendingedges.append((graphname, diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -29,29 +29,11 @@ # cache the length of arrays self.length_cache = {} - # equivalences between boxes - self.equivalent = {} - - def get_repr(self, box): - res = self.equivalent.get(box, box) - if res is not box: - res2 = self.get_repr(res) - # path compression - if res2 is not res: - self.equivalent[box] = res2 - res = res2 - return res - - def same_boxes(self, box1, box2): - assert box1 not in self.equivalent - self.equivalent[box1] = self.get_repr(box2) - def invalidate_caches(self, opnum, descr, argboxes): self.mark_escaped(opnum, argboxes) self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -59,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -71,18 +65,19 @@ if box in self.new_boxes: self.new_boxes[box] = False if box in self.dependencies: - for dep in self.dependencies[box]: + deps = self.dependencies[box] + del self.dependencies[box] + for dep in deps: self._escape(dep) - del self.dependencies[box] def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -91,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. @@ -115,23 +110,18 @@ self.heap_array_cache.clear() def is_class_known(self, box): - box = self.get_repr(box) return box in self.known_class_boxes def class_now_known(self, box): - box = self.get_repr(box) self.known_class_boxes[box] = None def is_nonstandard_virtualizable(self, box): - box = self.get_repr(box) return box in self.nonstandard_virtualizables def nonstandard_virtualizables_now_known(self, box): - box = self.get_repr(box) self.nonstandard_virtualizables[box] = None def is_unescaped(self, box): - box = self.get_repr(box) return self.new_boxes.get(box, False) def new(self, box): @@ -142,7 +132,6 @@ self.arraylen_now_known(box, lengthbox) def getfield(self, box, descr): - box = self.get_repr(box) d = self.heap_cache.get(descr, None) if d: tobox = d.get(box, None) @@ -151,11 +140,9 @@ return None def getfield_now_known(self, box, descr, fieldbox): - box = self.get_repr(box) self.heap_cache.setdefault(descr, {})[box] = fieldbox def setfield(self, box, descr, fieldbox): - box = self.get_repr(box) d = self.heap_cache.get(descr, None) new_d = self._do_write_with_aliasing(d, box, fieldbox) self.heap_cache[descr] = new_d @@ -179,7 +166,6 @@ return new_d def getarrayitem(self, box, descr, indexbox): - box = self.get_repr(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -190,7 +176,6 @@ return indexcache.get(box, None) def getarrayitem_now_known(self, box, descr, indexbox, valuebox): - box = self.get_repr(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -202,7 +187,6 @@ cache[index] = {box: valuebox} def setarrayitem(self, box, descr, indexbox, valuebox): - box = self.get_repr(box) if not isinstance(indexbox, ConstInt): cache = self.heap_array_cache.get(descr, None) if cache is not None: @@ -214,11 +198,9 @@ cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) def arraylen(self, box): - box = self.get_repr(box) return self.length_cache.get(box, None) def arraylen_now_known(self, box, lengthbox): - box = self.get_repr(box) self.length_cache[box] = lengthbox def _replace_box(self, d, oldbox, newbox): @@ -232,7 +214,6 @@ return new_d def replace_box(self, oldbox, newbox): - oldbox = self.get_repr(oldbox) for descr, d in self.heap_cache.iteritems(): self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) for descr, d in self.heap_array_cache.iteritems(): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -9,12 +9,14 @@ from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.codewriter import heaptracker, longlong +from pypy.rlib.objectmodel import compute_identity_hash # ____________________________________________________________ INT = 'i' REF = 'r' FLOAT = 'f' +STRUCT = 's' HOLE = '_' VOID = 'v' @@ -104,7 +106,7 @@ getref._annspecialcase_ = 'specialize:arg(1)' def _get_hash_(self): - raise NotImplementedError + return compute_identity_hash(self) def clonebox(self): raise NotImplementedError @@ -133,6 +135,9 @@ def _get_str(self): raise NotImplementedError + def same_box(self, other): + return self is other + class AbstractDescr(AbstractValue): __slots__ = () @@ -168,6 +173,11 @@ """ raise NotImplementedError + def is_array_of_structs(self): + """ Implement for array descr + """ + raise NotImplementedError + def is_pointer_field(self): """ Implement for field descr """ @@ -241,32 +251,15 @@ def constbox(self): return self + def same_box(self, other): + return self.same_constant(other) + def same_constant(self, other): raise NotImplementedError def __repr__(self): return 'Const(%s)' % self._getrepr_() - def __eq__(self, other): - "NOT_RPYTHON" - # Remember that you should not compare Consts with '==' in RPython. - # Consts have no special __hash__, in order to force different Consts - # from being considered as different keys when stored in dicts - # (as they always are after translation). Use a dict_equal_consts() - # to get the other behavior (i.e. using this __eq__). - if self.__class__ is not other.__class__: - return False - try: - return self.value == other.value - except TypeError: - if (isinstance(self.value, Symbolic) and - isinstance(other.value, Symbolic)): - return self.value is other.value - raise - - def __ne__(self, other): - return not (self == other) - class ConstInt(Const): type = INT @@ -688,33 +681,6 @@ # ____________________________________________________________ -def dict_equal_consts(): - "NOT_RPYTHON" - # Returns a dict in which Consts that compare as equal - # are identified when used as keys. - return r_dict(dc_eq, dc_hash) - -def dc_eq(c1, c2): - return c1 == c2 - -def dc_hash(c): - "NOT_RPYTHON" - # This is called during translation only. Avoid using identityhash(), - # to avoid forcing a hash, at least on lltype objects. - if not isinstance(c, Const): - return hash(c) - if isinstance(c.value, Symbolic): - return id(c.value) - try: - if isinstance(c, ConstPtr): - p = lltype.normalizeptr(c.value) - if p is not None: - return hash(p._obj) - else: - return 0 - return c._get_hash_() - except lltype.DelayedPointer: - return -2 # xxx risk of changing hash... def make_hashable_int(i): from pypy.rpython.lltypesystem.ll2ctypes import NotCtypesAllocatedStructure @@ -772,6 +738,7 @@ failed_states = None retraced_count = 0 terminating = False # see TerminatingLoopToken in compile.py + invalidated = False outermost_jitdriver_sd = None # and more data specified by the backend when the loop is compiled number = -1 @@ -962,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -974,6 +944,16 @@ self.loops = [] self.locations = [] self.aborted_keys = [] + self.invalidated_token_numbers = set() + + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 def set_history(self, history): self.operations = history.operations @@ -1052,7 +1032,12 @@ if loop in loops: loops.remove(loop) loops.append(loop) - display_loops(loops, errmsg, extraloops) + highlight_loops = dict.fromkeys(extraloops, 1) + for loop in loops: + if hasattr(loop, '_looptoken_number') and ( + loop._looptoken_number in self.invalidated_token_numbers): + highlight_loops.setdefault(loop, 2) + display_loops(loops, errmsg, highlight_loops) # ---------------------------------------------------------------- diff --git a/pypy/jit/metainterp/memmgr.py b/pypy/jit/metainterp/memmgr.py --- a/pypy/jit/metainterp/memmgr.py +++ b/pypy/jit/metainterp/memmgr.py @@ -68,7 +68,8 @@ debug_print("Loop tokens before:", oldtotal) max_generation = self.current_generation - (self.max_age-1) for looptoken in self.alive_loops.keys(): - if 0 <= looptoken.generation < max_generation: + if (0 <= looptoken.generation < max_generation or + looptoken.invalidated): del self.alive_loops[looptoken] newtotal = len(self.alive_loops) debug_print("Loop tokens freed: ", oldtotal - newtotal) diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): @@ -126,14 +128,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) From noreply at buildbot.pypy.org Wed Nov 2 15:51:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 15:51:12 +0100 (CET) Subject: [pypy-commit] benchmarks default: Essential fix. Message-ID: <20111102145112.7DE56820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r153:2bd97aedb97e Date: 2011-11-02 15:51 +0100 http://bitbucket.org/pypy/benchmarks/changeset/2bd97aedb97e/ Log: Essential fix. diff --git a/own/json_bench.py b/own/json_bench.py --- a/own/json_bench.py +++ b/own/json_bench.py @@ -26,7 +26,7 @@ import util, optparse parser = optparse.OptionParser( usage="%prog [options]", - description="Test the performance of the Go benchmark") + description="Test the performance of the JSON benchmark") util.add_standard_options_to(parser) options, args = parser.parse_args() From noreply at buildbot.pypy.org Wed Nov 2 15:58:00 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 15:58:00 +0100 (CET) Subject: [pypy-commit] pypy default: Try never to crash when inspect.getsource() fails. Message-ID: <20111102145800.19767820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48659:0dedcb956aa5 Date: 2011-11-02 15:57 +0100 http://bitbucket.org/pypy/pypy/changeset/0dedcb956aa5/ Log: Try never to crash when inspect.getsource() fails. diff --git a/pypy/tool/sourcetools.py b/pypy/tool/sourcetools.py --- a/pypy/tool/sourcetools.py +++ b/pypy/tool/sourcetools.py @@ -107,10 +107,8 @@ else: try: src = inspect.getsource(object) - except IOError: - return None - except IndentationError: - return None + except Exception: # catch IOError, IndentationError, and also rarely + return None # some other exceptions like IndexError if hasattr(name, "__sourceargs__"): return src % name.__sourceargs__ return src From noreply at buildbot.pypy.org Wed Nov 2 16:08:15 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Wed, 2 Nov 2011 16:08:15 +0100 (CET) Subject: [pypy-commit] pypy default: remove some C-isms Message-ID: <20111102150815.2F5C2820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48660:2ac3b6128b0f Date: 2011-11-02 16:07 +0100 http://bitbucket.org/pypy/pypy/changeset/2ac3b6128b0f/ Log: remove some C-isms diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -921,7 +921,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -975,26 +975,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1085,7 +1080,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse From noreply at buildbot.pypy.org Wed Nov 2 16:14:06 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 16:14:06 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: Create 2 versions of GeneratorIterator.unpack_into, one which takes a W_ListObject and one which takes an RPython list. This is to allow unwrapped lists to be built from generators with intermediaries (and to fix a translation error). Message-ID: <20111102151406.3ECF4820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48661:10e8201adabd Date: 2011-11-02 11:13 -0400 http://bitbucket.org/pypy/pypy/changeset/10e8201adabd/ Log: Create 2 versions of GeneratorIterator.unpack_into, one which takes a W_ListObject and one which takes an RPython list. This is to allow unwrapped lists to be built from generators with intermediaries (and to fix a translation error). diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -1,8 +1,9 @@ +from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError -from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import NoneNotWrapped +from pypy.interpreter.pyopcode import LoopBlock from pypy.rlib import jit -from pypy.interpreter.pyopcode import LoopBlock +from pypy.rlib.objectmodel import specialize class GeneratorIterator(Wrappable): @@ -156,38 +157,43 @@ break block = block.previous - def unpack_into(self, results_w): - """This is a hack for performance: runs the generator and collects - all produced items in a list.""" - # XXX copied and simplified version of send_ex() - space = self.space - if self.running: - raise OperationError(space.w_ValueError, - space.wrap('generator already executing')) - frame = self.frame - if frame is None: # already finished - return - self.running = True - try: - pycode = self.pycode - while True: - jitdriver.jit_merge_point(self=self, frame=frame, - results_w=results_w, - pycode=pycode) - try: - w_result = frame.execute_frame(space.w_None) - except OperationError, e: - if not e.match(space, space.w_StopIteration): - raise - break - # if the frame is now marked as finished, it was RETURNed from - if frame.frame_finished_execution: - break - results_w.append(w_result) # YIELDed - finally: - frame.f_backref = jit.vref_None - self.running = False - self.frame = None - -jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results_w']) + # Results can be either an RPython list of W_Root, or it can be an + # app-level W_ListObject, which also has an append() method, that's why we + # generate 2 versions of the function and 2 jit drivers. + def _create_unpack_into(): + jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results']) + def unpack_into(self, results): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results=results, pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + return unpack_into + unpack_into = _create_unpack_into() + unpack_into_w = _create_unpack_into() \ No newline at end of file diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -943,7 +943,7 @@ # xxx special hack for speed from pypy.interpreter.generator import GeneratorIterator if isinstance(w_iterable, GeneratorIterator): - w_iterable.unpack_into(items_w) + w_iterable.unpack_into_w(w_list) return # /xxx w_iterator = space.iter(w_iterable) From noreply at buildbot.pypy.org Wed Nov 2 17:00:56 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 17:00:56 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111102160056.8AAF782A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48663:df3d6ccd5f93 Date: 2011-11-02 17:00 +0100 http://bitbucket.org/pypy/pypy/changeset/df3d6ccd5f93/ Log: merge heads diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -921,7 +921,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -975,26 +975,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1085,7 +1080,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse From noreply at buildbot.pypy.org Wed Nov 2 17:00:55 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 17:00:55 +0100 (CET) Subject: [pypy-commit] pypy default: Clean up: min() is now RPython. Message-ID: <20111102160055.5AD55820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48662:6cb0e75f39f9 Date: 2011-11-02 17:00 +0100 http://bitbucket.org/pypy/pypy/changeset/6cb0e75f39f9/ Log: Clean up: min() is now RPython. diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -108,15 +108,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -127,7 +122,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): From noreply at buildbot.pypy.org Wed Nov 2 17:18:16 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 17:18:16 +0100 (CET) Subject: [pypy-commit] pypy default: Do the imports only if the config option is set. Message-ID: <20111102161816.C69EA820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48664:d2cbfaf3d3a1 Date: 2011-11-02 17:04 +0100 http://bitbucket.org/pypy/pypy/changeset/d2cbfaf3d3a1/ Log: Do the imports only if the config option is set. diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: From noreply at buildbot.pypy.org Wed Nov 2 17:18:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 17:18:17 +0100 (CET) Subject: [pypy-commit] pypy default: Move these imports to a place where they will only be triggered Message-ID: <20111102161817.F0210820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48665:3c3328236908 Date: 2011-11-02 17:18 +0100 http://bitbucket.org/pypy/pypy/changeset/3c3328236908/ Log: Move these imports to a place where they will only be triggered if we are configured to use them. diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -69,19 +69,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -89,7 +81,6 @@ from pypy.objspace.std import iterobject from pypy.objspace.std import unicodeobject from pypy.objspace.std import dictproxyobject - from pypy.objspace.std import rangeobject from pypy.objspace.std import proxyobject from pypy.objspace.std import fake import pypy.objspace.std.default # register a few catch-all multimethods @@ -141,7 +132,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -167,6 +163,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -189,6 +186,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -220,7 +218,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -230,6 +230,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -237,6 +238,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -244,6 +246,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -251,11 +254,13 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withrangelist: + from pypy.objspace.std import rangeobject self.typeorder[rangeobject.W_RangeListObject] += [ (listobject.W_ListObject, rangeobject.delegate_range2list), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] From noreply at buildbot.pypy.org Wed Nov 2 17:51:57 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 2 Nov 2011 17:51:57 +0100 (CET) Subject: [pypy-commit] pypy default: Found the cause of the failure of test_nongc_attached_to_gc in Message-ID: <20111102165157.C690D820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48666:548c842da8b9 Date: 2011-11-02 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/548c842da8b9/ Log: Found the cause of the failure of test_nongc_attached_to_gc in test_newgc: we forgot to add the surviving objects from young_objects_with_light_finalizers to the old version of that list. Fix the test and re-enable light finalizers with minimark. diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -468,7 +468,7 @@ # # If the object needs a finalizer, ask for a rawmalloc. # The following check should be constant-folded. - if needs_finalizer: ## and not is_finalizer_light: + if needs_finalizer and not is_finalizer_light: ll_assert(not contains_weakptr, "'needs_finalizer' and 'contains_weakptr' both specified") obj = self.external_malloc(typeid, 0, can_make_young=False) @@ -1850,6 +1850,9 @@ finalizer = self.getlightfinalizer(self.get_type_id(obj)) ll_assert(bool(finalizer), "no light finalizer found") finalizer(obj, llmemory.NULL) + else: + obj = self.get_forwarding_address(obj) + self.old_objects_with_light_finalizers.append(obj) def deal_with_old_objects_with_finalizers(self): """ This is a much simpler version of dealing with finalizers From noreply at buildbot.pypy.org Wed Nov 2 18:19:35 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Wed, 2 Nov 2011 18:19:35 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: implement a fast path for list.pop() (without arguments) Message-ID: <20111102171935.9EBBC820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: list-strategies Changeset: r48667:caa63b86e8cf Date: 2011-11-02 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/caa63b86e8cf/ Log: implement a fast path for list.pop() (without arguments) diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -189,6 +189,10 @@ May raise IndexError.""" return self.strategy.pop(self, index) + def pop_end(self): + """ Pop the last element from the list.""" + return self.strategy.pop_end(self) + def setitem(self, index, w_item): """Inserts a wrapped item at the given (unwrapped) index. May raise IndexError.""" @@ -282,6 +286,9 @@ def pop(self, w_list, index): raise NotImplementedError + def pop_end(self, w_list): + return self.pop(w_list, self.length(w_list) - 1) + def setitem(self, w_list, index, w_item): raise NotImplementedError @@ -372,7 +379,7 @@ pass def pop(self, w_list, index): - # will not be called becuase IndexError was already raised in + # will not be called because IndexError was already raised in # list_pop__List_ANY raise IndexError @@ -527,21 +534,25 @@ self.switch_to_integer_strategy(w_list) w_list.deleteslice(start, step, slicelength) + def pop_end(self, w_list): + start, step, length = self.unerase(w_list.lstorage) + w_result = self.wrap(start + (length - 1) * step) + new = self.erase((start, step, length - 1)) + w_list.lstorage = new + return w_result + def pop(self, w_list, index): l = self.unerase(w_list.lstorage) start = l[0] step = l[1] length = l[2] if index == 0: - r = self.getitem(w_list, index) + w_result = self.wrap(start) new = self.erase((start + step, step, length - 1)) w_list.lstorage = new - return r + return w_result elif index == length - 1: - r = self.getitem(w_list, index) - new = self.erase((start, step, length - 1)) - w_list.lstorage = new - return r + return self.pop_end(w_list) else: self.switch_to_integer_strategy(w_list) return w_list.pop(index) @@ -812,6 +823,10 @@ assert start >= 0 # annotator hint del items[start:] + def pop_end(self, w_list): + l = self.unerase(w_list.lstorage) + return self.wrap(l.pop()) + def pop(self, w_list, index): l = self.unerase(w_list.lstorage) # not sure if RPython raises IndexError on pop @@ -1196,12 +1211,15 @@ w_list.extend(w_other) return space.w_None -# note that the default value will come back wrapped!!! -def list_pop__List_ANY(space, w_list, w_idx=-1): +# default of w_idx is space.w_None (see listtype.py) +def list_pop__List_ANY(space, w_list, w_idx): length = w_list.length() if length == 0: raise OperationError(space.w_IndexError, space.wrap("pop from empty list")) + # clearly differentiate between list.pop() and list.pop(index) + if space.is_w(w_idx, space.w_None): + return w_list.pop_end() # cannot raise because list is not empty if space.isinstance_w(w_idx, space.w_float): raise OperationError(space.w_TypeError, space.wrap("integer argument expected, got float") diff --git a/pypy/objspace/std/listtype.py b/pypy/objspace/std/listtype.py --- a/pypy/objspace/std/listtype.py +++ b/pypy/objspace/std/listtype.py @@ -11,7 +11,7 @@ list_extend = SMM('extend', 2, doc='L.extend(iterable) -- extend list by appending' ' elements from the iterable') -list_pop = SMM('pop', 2, defaults=(-1,), +list_pop = SMM('pop', 2, defaults=(None,), doc='L.pop([index]) -> item -- remove and return item at' ' index (default last)') list_remove = SMM('remove', 2, diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py --- a/pypy/objspace/std/test/test_liststrategies.py +++ b/pypy/objspace/std/test/test_liststrategies.py @@ -1,4 +1,5 @@ from pypy.objspace.std.listobject import W_ListObject, EmptyListStrategy, ObjectListStrategy, IntegerListStrategy, StringListStrategy, RangeListStrategy, make_range_list +from pypy.objspace.std import listobject from pypy.objspace.std.test.test_listobject import TestW_ListObject from pypy.conftest import gettestobjspace @@ -237,6 +238,18 @@ l = make_range_list(self.space, 1,3,7) assert isinstance(l.strategy, RangeListStrategy) + v = l.pop(0) + assert self.space.eq_w(v, self.space.wrap(1)) + assert isinstance(l.strategy, RangeListStrategy) + v = l.pop(l.length() - 1) + assert self.space.eq_w(v, self.space.wrap(19)) + assert isinstance(l.strategy, RangeListStrategy) + v = l.pop_end() + assert self.space.eq_w(v, self.space.wrap(16)) + assert isinstance(l.strategy, RangeListStrategy) + + l = make_range_list(self.space, 1,3,7) + assert isinstance(l.strategy, RangeListStrategy) l.append(self.space.wrap("string")) assert isinstance(l.strategy, ObjectListStrategy) @@ -379,6 +392,13 @@ assert space.listview_str(w_l) == ["a", "b", "c"] assert space.listview_str(w_l2) == ["a", "b", "c"] + def test_pop_without_argument_is_fast(self): + space = self.space + w_l = W_ListObject(space, [space.wrap(1), space.wrap(2), space.wrap(3)]) + w_l.pop = None + w_res = listobject.list_pop__List_ANY(space, w_l, space.w_None) # does not crash + assert space.unwrap(w_res) == 3 + class TestW_ListStrategiesDisabled: def setup_class(cls): From noreply at buildbot.pypy.org Wed Nov 2 18:26:05 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Wed, 2 Nov 2011 18:26:05 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: don't encode exact offsets Message-ID: <20111102172605.4EFEA820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: list-strategies Changeset: r48668:1159092f9ab4 Date: 2011-11-02 18:25 +0100 http://bitbucket.org/pypy/pypy/changeset/1159092f9ab4/ Log: don't encode exact offsets diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -371,7 +371,7 @@ p24 = new_array(1, descr=) p26 = new_with_vtable(ConstClass(W_ListObject)) setfield_gc(p0, i20, descr=) - setfield_gc(p26, ConstPtr(ptr22), descr=) + setfield_gc(p26, ConstPtr(ptr22), descr=) setarrayitem_gc(p24, 0, p26, descr=) setfield_gc(p22, p24, descr=) p32 = call_may_force(11376960, p18, p22, descr=) @@ -484,4 +484,4 @@ i4 = int_add(i0, 1) --TICK-- jump(..., descr=...) - """) \ No newline at end of file + """) diff --git a/pypy/module/pypyjit/test_pypy_c/test_misc.py b/pypy/module/pypyjit/test_pypy_c/test_misc.py --- a/pypy/module/pypyjit/test_pypy_c/test_misc.py +++ b/pypy/module/pypyjit/test_pypy_c/test_misc.py @@ -201,10 +201,10 @@ assert log.result == 1000000 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" - i14 = getfield_gc(p12, descr=) + i14 = getfield_gc(p12, descr=) i16 = uint_ge(i12, i14) guard_false(i16, descr=...) - p16 = getfield_gc(p12, descr=) + p16 = getfield_gc(p12, descr=) p17 = getarrayitem_gc(p16, i12, descr=) i19 = int_add(i12, 1) setfield_gc(p9, i19, descr=) From noreply at buildbot.pypy.org Wed Nov 2 18:53:41 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 2 Nov 2011 18:53:41 +0100 (CET) Subject: [pypy-commit] pypy default: failing test Message-ID: <20111102175341.DFECF820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48669:a6f23c0ae3e6 Date: 2011-11-02 18:49 +0100 http://bitbucket.org/pypy/pypy/changeset/a6f23c0ae3e6/ Log: failing test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7307,6 +7307,26 @@ """ self.optimize_loop(ops, expected) + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass From noreply at buildbot.pypy.org Wed Nov 2 18:53:43 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 2 Nov 2011 18:53:43 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20111102175343.8D102820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48670:3594446a82fc Date: 2011-11-02 18:50 +0100 http://bitbucket.org/pypy/pypy/changeset/3594446a82fc/ Log: hg merge diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -281,6 +281,9 @@ def is_float_field(self): return self.fielddescr.is_float_field() + def sort_key(self): + return self.fielddescr.sort_key() + def repr_of_descr(self): return '' % self.fielddescr.repr_of_descr() diff --git a/pypy/jit/backend/test/test_ll_random.py b/pypy/jit/backend/test/test_ll_random.py --- a/pypy/jit/backend/test/test_ll_random.py +++ b/pypy/jit/backend/test/test_ll_random.py @@ -34,8 +34,8 @@ v, S = from_[i][:2] if not isinstance(S, type): continue - if (isinstance(S, lltype.Array) and - isinstance(S.OF, lltype.Struct) == array_of_structs): + if ((isinstance(S, lltype.Array) and + isinstance(S.OF, lltype.Struct)) == array_of_structs): ptrvars.append((v, S)) return ptrvars @@ -180,8 +180,16 @@ dic[fieldname] = getattr(p, fieldname) else: assert isinstance(S, lltype.Array) - for i in range(len(p)): - dic[i] = p[i] + if isinstance(S.OF, lltype.Struct): + for i in range(len(p)): + item = p[i] + s1 = {} + for fieldname in S.OF._names: + s1[fieldname] = getattr(item, fieldname) + dic[i] = s1 + else: + for i in range(len(p)): + dic[i] = p[i] return dic def print_loop_prebuilt(self, names, writevar, s): @@ -270,10 +278,7 @@ array_of_structs=True) array = v.getref(lltype.Ptr(A)) v_index = builder.get_index(len(array), r) - names = A.OF._names - if names[0] == 'parent': - names = names[1:] - name = r.choice(names) + name = r.choice(A.OF._names) descr = builder.cpu.interiorfielddescrof(A, name) descr._random_info = 'cpu.interiorfielddescrof(%s, %r)' % (A.OF._name, name) @@ -301,11 +306,9 @@ break builder.do(self.opnum, [v, w], descr) -class SetInteriorFieldOperation(GetFieldOperation): +class SetInteriorFieldOperation(GetInteriorFieldOperation): def produce_into(self, builder, r): - import pdb - pdb.set_trace() - v, descr, TYPE = self.field_descr(builder, r) + v, v_index, descr, TYPE = self.field_descr(builder, r) while True: if r.random() < 0.3: w = ConstInt(r.random_integer()) @@ -313,7 +316,7 @@ w = r.choice(builder.intvars) if rffi.cast(lltype.Signed, rffi.cast(TYPE, w.value)) == w.value: break - builder.do(self.opnum, [v, w], descr) + builder.do(self.opnum, [v, v_index, w], descr) class NewOperation(test_random.AbstractOperation): def size_descr(self, builder, S): @@ -652,7 +655,7 @@ OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) OPERATIONS.append(GetInteriorFieldOperation(rop.GETINTERIORFIELD_GC)) OPERATIONS.append(SetFieldOperation(rop.SETFIELD_GC)) - #OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) + OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) OPERATIONS.append(NewOperation(rop.NEW)) OPERATIONS.append(NewOperation(rop.NEW_WITH_VTABLE)) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -595,6 +595,10 @@ for name, value in fields.items(): if isinstance(name, str): setattr(container, name, value) + elif isinstance(value, dict): + item = container.getitem(name) + for key1, value1 in value.items(): + setattr(item, key1, value1) else: container.setitem(name, value) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1596,11 +1596,26 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, + base_loc, ofs_loc): + assert isinstance(itemsize_loc, ImmedLoc) + if isinstance(index_loc, ImmedLoc): + temp_loc = imm(index_loc.value * itemsize_loc.value) + else: + # XXX should not use IMUL in most cases + assert isinstance(temp_loc, RegLoc) + assert isinstance(index_loc, RegLoc) + self.mc.IMUL_rri(temp_loc.value, index_loc.value, + itemsize_loc.value) + assert isinstance(ofs_loc, ImmedLoc) + return AddressLoc(base_loc, temp_loc, 0, ofs_loc.value) + def genop_getinteriorfield_gc(self, op, arglocs, resloc): - base_loc, ofs_loc, itemsize_loc, fieldsize_loc, index_loc, sign_loc = arglocs - # XXX should not use IMUL in most cases - self.mc.IMUL(index_loc, itemsize_loc) - src_addr = AddressLoc(base_loc, index_loc, 0, ofs_loc.value) + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(resloc, index_loc, + itemsize_loc, base_loc, + ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) @@ -1611,10 +1626,11 @@ self.save_into_mem(dest_addr, value_loc, size_loc) def genop_discard_setinteriorfield_gc(self, op, arglocs): - base_loc, ofs_loc, itemsize_loc, fieldsize_loc, index_loc, value_loc = arglocs - # XXX should not use IMUL in most cases - self.mc.IMUL(index_loc, itemsize_loc) - dest_addr = AddressLoc(base_loc, index_loc, 0, ofs_loc.value) + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + dest_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) def genop_discard_setarrayitem_gc(self, op, arglocs): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1042,16 +1042,30 @@ t = self._unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, _ = t args = op.getarglist() - tmpvar = TempBox() - base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) - index_loc = self.rm.force_result_in_reg(tmpvar, op.getarg(1), - args) - # we're free to modify index now - value_loc = self.make_sure_var_in_reg(op.getarg(2), args) - self.possibly_free_vars(args) - self.rm.possibly_free_var(tmpvar) + if fieldsize.value == 1: + need_lower_byte = True + else: + need_lower_byte = False + box_base, box_index, box_value = args + base_loc = self.rm.make_sure_var_in_reg(box_base, args) + index_loc = self.rm.make_sure_var_in_reg(box_index, args) + value_loc = self.make_sure_var_in_reg(box_value, args, + need_lower_byte=need_lower_byte) + # If 'index_loc' is not an immediate, then we need a 'temp_loc' that + # is a register whose value will be destroyed. It's fine to destroy + # the same register as 'index_loc', but not the other ones. + self.rm.possibly_free_var(box_index) + if not isinstance(index_loc, ImmedLoc): + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [box_base, + box_value]) + self.rm.possibly_free_var(tempvar) + else: + temp_loc = None + self.rm.possibly_free_var(box_base) + self.possibly_free_var(box_value) self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, - index_loc, value_loc]) + index_loc, temp_loc, value_loc]) def consider_strsetitem(self, op): args = op.getarglist() @@ -1122,13 +1136,14 @@ else: sign_loc = imm0 args = op.getarglist() - tmpvar = TempBox() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) - index_loc = self.rm.force_result_in_reg(tmpvar, op.getarg(1), - args) - self.rm.possibly_free_vars_for_op(op) - self.rm.possibly_free_var(tmpvar) - result_loc = self.force_allocate_reg(op.result) + index_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) + # 'base' and 'index' are put in two registers (or one if 'index' + # is an immediate). 'result' can be in the same register as + # 'index' but must be in a different register than 'base'. + self.rm.possibly_free_var(op.getarg(1)) + result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + self.rm.possibly_free_var(op.getarg(0)) self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, sign_loc], result_loc) diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -844,6 +844,10 @@ if self._is_gc(op.args[0]): return op + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] + def rewrite_op_force_cast(self, op): v_arg = op.args[0] v_result = op.result diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -1128,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -505,9 +505,6 @@ @arguments("r", "r", returns="i") def bhimpl_instance_ptr_ne(a, b): return a != b - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a @arguments("r", returns="i") def bhimpl_cast_ptr_to_int(a): i = lltype.cast_ptr_to_int(a) @@ -518,6 +515,10 @@ ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") return lltype.cast_int_to_ptr(llmemory.GCREF, i) + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass + @arguments("i", returns="i") def bhimpl_int_copy(a): return a diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -929,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -943,6 +946,15 @@ self.aborted_keys = [] self.invalidated_token_numbers = set() + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -209,13 +209,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -283,11 +289,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -524,7 +530,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -465,10 +465,9 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -935,7 +935,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +950,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -4181,10 +4229,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -5800,10 +5800,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -185,6 +185,18 @@ EffectInfo([], [arraydescr], [], [arraydescr], oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -240,7 +252,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -271,6 +271,69 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + for descr, value in self._items[index].iteritems(): + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." @@ -283,8 +346,11 @@ return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -386,8 +452,7 @@ def optimize_NEW_ARRAY(self, op): sizebox = self.get_constant_box(op.getarg(0)) - # For now we can't make arrays of structs virtual. - if sizebox is not None and not op.getdescr().is_array_of_structs(): + if sizebox is not None: # if the original 'op' did not have a ConstInt as argument, # build a new one with the ConstInt argument if not isinstance(op.getarg(0), ConstInt): @@ -432,6 +497,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -501,12 +573,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +600,12 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +642,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +660,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -163,17 +163,6 @@ for value in self._chars: value.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -226,18 +215,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +261,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -240,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -439,7 +439,6 @@ 'PTR_NE/2b', 'INSTANCE_PTR_EQ/2b', 'INSTANCE_PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -471,6 +470,7 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -139,7 +139,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +273,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +405,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -455,7 +458,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +540,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -884,6 +910,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1201,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3494,16 +3495,70 @@ d = None while n > 0: myjitdriver.jit_merge_point(n=n, d=d) - d = {} + d = {"q": 1} if n % 2: d["k"] = n else: d["z"] = n - n -= len(d) + n -= len(d) - d["q"] return n res = self.meta_interp(f, [10]) assert res == 0 + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + n = g({"key": n}) + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3561,8 +3616,7 @@ res = self.meta_interp(main, [False, 100, True], taggedpointers=True) def test_rerased(self): - from pypy.rlib.rerased import erase_int, unerase_int, new_erasing_pair - eraseX, uneraseX = new_erasing_pair("X") + eraseX, uneraseX = rerased.new_erasing_pair("X") # class X: def __init__(self, a, b): @@ -3575,14 +3629,14 @@ e = eraseX(X(i, j)) else: try: - e = erase_int(i) + e = rerased.erase_int(i) except OverflowError: return -42 if j & 1: x = uneraseX(e) return x.a - x.b else: - return unerase_int(e) + return rerased.unerase_int(e) # x = self.interp_operations(f, [-128, 0], taggedpointers=True) assert x == -128 diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -371,3 +371,17 @@ assert h.is_unescaped(box1) h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) assert not h.is_unescaped(box1) + + h = HeapCache() + h.new_array(box1, lengthbox1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, lengthbox2, box2]) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), [box1] + ) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck +from pypy.rlib.rstring import StringBuilder import py @@ -590,4 +591,14 @@ assert res == 4 self.check_operations_history(int_add_ovf=0) res = self.interp_operations(fn, [sys.maxint]) - assert res == 12 \ No newline at end of file + assert res == 12 + + def test_copy_str_content(self): + def fn(n): + a = StringBuilder() + x = [1] + a.append("hello world") + return x[0] + res = self.interp_operations(fn, [0]) + assert res == 1 + self.check_operations_history(getarrayitem_gc=0, getarrayitem_gc_pure=0 ) \ No newline at end of file diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -62,7 +62,7 @@ clear_tcache() return jittify_and_run(interp, graph, args, backendopt=backendopt, **kwds) -def jittify_and_run(interp, graph, args, repeat=1, +def jittify_and_run(interp, graph, args, repeat=1, graph_and_interp_only=False, backendopt=False, trace_limit=sys.maxint, inline=False, loop_longevity=0, retrace_limit=5, function_threshold=4, @@ -93,6 +93,8 @@ jd.warmstate.set_param_max_retrace_guards(max_retrace_guards) jd.warmstate.set_param_enable_opts(enable_opts) warmrunnerdesc.finish() + if graph_and_interp_only: + return interp, graph res = interp.eval_graph(graph, args) if not kwds.get('translate_support_code', False): warmrunnerdesc.metainterp_sd.profiler.finish() @@ -157,6 +159,9 @@ def get_stats(): return pyjitpl._warmrunnerdesc.stats +def reset_stats(): + pyjitpl._warmrunnerdesc.stats.clear() + def get_translator(): return pyjitpl._warmrunnerdesc.translator diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -13,6 +13,9 @@ 'empty': 'interp_numarray.zeros', 'ones': 'interp_numarray.ones', 'fromstring': 'interp_support.fromstring', + + 'True_': 'space.w_True', + 'False_': 'space.w_False', } # ufuncs diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -4,30 +4,52 @@ """ from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root -from pypy.module.micronumpy.interp_dtype import W_Float64Dtype -from pypy.module.micronumpy.interp_numarray import Scalar, SingleDimArray, BaseArray +from pypy.module.micronumpy.interp_dtype import W_Float64Dtype, W_BoolDtype +from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, + descr_new_array, scalar_w, SingleDimArray) +from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize class BogusBytecode(Exception): pass -def create_array(dtype, size): - a = SingleDimArray(size, dtype=dtype) - for i in range(size): - dtype.setitem(a.storage, i, dtype.box(float(i % 10))) - return a +class ArgumentMismatch(Exception): + pass + +class ArgumentNotAnArray(Exception): + pass + +class WrongFunctionName(Exception): + pass + +SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] class FakeSpace(object): w_ValueError = None w_TypeError = None + w_None = None + + w_bool = "bool" + w_int = "int" + w_float = "float" + w_list = "list" + w_long = "long" + w_tuple = 'tuple' def __init__(self): """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild + self.w_float64dtype = W_Float64Dtype(self) def issequence_w(self, w_obj): - return True + return isinstance(w_obj, ListObject) or isinstance(w_obj, SingleDimArray) + + def isinstance_w(self, w_obj, w_tp): + return False + + def decode_index4(self, w_idx, size): + return (self.int_w(w_idx), 0, 0, 1) @specialize.argtype(1) def wrap(self, obj): @@ -39,72 +61,382 @@ return IntObject(obj) raise Exception + def newlist(self, items): + return ListObject(items) + + def listview(self, obj): + assert isinstance(obj, ListObject) + return obj.items + def float(self, w_obj): assert isinstance(w_obj, FloatObject) return w_obj def float_w(self, w_obj): + assert isinstance(w_obj, FloatObject) return w_obj.floatval + def int_w(self, w_obj): + if isinstance(w_obj, IntObject): + return w_obj.intval + elif isinstance(w_obj, FloatObject): + return int(w_obj.floatval) + raise NotImplementedError + + def int(self, w_obj): + return w_obj + + def is_true(self, w_obj): + assert isinstance(w_obj, BoolObject) + return w_obj.boolval + + def is_w(self, w_obj, w_what): + return w_obj is w_what + + def type(self, w_obj): + return w_obj.tp + + def gettypefor(self, w_obj): + return None + + def call_function(self, tp, w_dtype): + return w_dtype + + @specialize.arg(1) + def interp_w(self, tp, what): + assert isinstance(what, tp) + return what class FloatObject(W_Root): + tp = FakeSpace.w_float def __init__(self, floatval): self.floatval = floatval class BoolObject(W_Root): + tp = FakeSpace.w_bool def __init__(self, boolval): self.boolval = boolval class IntObject(W_Root): + tp = FakeSpace.w_int def __init__(self, intval): self.intval = intval +class ListObject(W_Root): + tp = FakeSpace.w_list + def __init__(self, items): + self.items = items -space = FakeSpace() +class InterpreterState(object): + def __init__(self, code): + self.code = code + self.variables = {} + self.results = [] -def numpy_compile(bytecode, array_size): - stack = [] - i = 0 - dtype = space.fromcache(W_Float64Dtype) - for b in bytecode: - if b == 'a': - stack.append(create_array(dtype, array_size)) - i += 1 - elif b == 'f': - stack.append(Scalar(dtype, dtype.box(1.2))) - elif b == '+': - right = stack.pop() - res = stack.pop().descr_add(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '-': - right = stack.pop() - res = stack.pop().descr_sub(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '*': - right = stack.pop() - res = stack.pop().descr_mul(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '/': - right = stack.pop() - res = stack.pop().descr_div(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '%': - right = stack.pop() - res = stack.pop().descr_mod(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '|': - res = stack.pop().descr_abs(space) - assert isinstance(res, BaseArray) - stack.append(res) + def run(self, space): + self.space = space + for stmt in self.code.statements: + stmt.execute(self) + +class Node(object): + def __eq__(self, other): + return (self.__class__ == other.__class__ and + self.__dict__ == other.__dict__) + + def __ne__(self, other): + return not self == other + + def wrap(self, space): + raise NotImplementedError + + def execute(self, interp): + raise NotImplementedError + +class Assignment(Node): + def __init__(self, name, expr): + self.name = name + self.expr = expr + + def execute(self, interp): + interp.variables[self.name] = self.expr.execute(interp) + + def __repr__(self): + return "%% = %r" % (self.name, self.expr) + +class ArrayAssignment(Node): + def __init__(self, name, index, expr): + self.name = name + self.index = index + self.expr = expr + + def execute(self, interp): + arr = interp.variables[self.name] + w_index = self.index.execute(interp).eval(0).wrap(interp.space) + w_val = self.expr.execute(interp).eval(0).wrap(interp.space) + arr.descr_setitem(interp.space, w_index, w_val) + + def __repr__(self): + return "%s[%r] = %r" % (self.name, self.index, self.expr) + +class Variable(Node): + def __init__(self, name): + self.name = name + + def execute(self, interp): + return interp.variables[self.name] + + def __repr__(self): + return 'v(%s)' % self.name + +class Operator(Node): + def __init__(self, lhs, name, rhs): + self.name = name + self.lhs = lhs + self.rhs = rhs + + def execute(self, interp): + w_lhs = self.lhs.execute(interp) + assert isinstance(w_lhs, BaseArray) + if isinstance(self.rhs, SliceConstant): + # XXX interface has changed on multidim branch + raise NotImplementedError + w_rhs = self.rhs.execute(interp) + if self.name == '+': + w_res = w_lhs.descr_add(interp.space, w_rhs) + elif self.name == '*': + w_res = w_lhs.descr_mul(interp.space, w_rhs) + elif self.name == '-': + w_res = w_lhs.descr_sub(interp.space, w_rhs) + elif self.name == '->': + if isinstance(w_rhs, Scalar): + index = int(interp.space.float_w( + w_rhs.value.wrap(interp.space))) + dtype = interp.space.fromcache(W_Float64Dtype) + return Scalar(dtype, w_lhs.get_concrete().eval(index)) + else: + raise NotImplementedError else: - print "Unknown opcode: %s" % b - raise BogusBytecode() - if len(stack) != 1: - print "Bogus bytecode, uneven stack length" - raise BogusBytecode() - return stack[0] + raise NotImplementedError + if not isinstance(w_res, BaseArray): + dtype = interp.space.fromcache(W_Float64Dtype) + w_res = scalar_w(interp.space, dtype, w_res) + return w_res + + def __repr__(self): + return '(%r %s %r)' % (self.lhs, self.name, self.rhs) + +class FloatConstant(Node): + def __init__(self, v): + self.v = float(v) + + def __repr__(self): + return "Const(%s)" % self.v + + def wrap(self, space): + return space.wrap(self.v) + + def execute(self, interp): + dtype = interp.space.fromcache(W_Float64Dtype) + assert isinstance(dtype, W_Float64Dtype) + return Scalar(dtype, dtype.box(self.v)) + +class RangeConstant(Node): + def __init__(self, v): + self.v = int(v) + + def execute(self, interp): + w_list = interp.space.newlist( + [interp.space.wrap(float(i)) for i in range(self.v)]) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return 'Range(%s)' % self.v + +class Code(Node): + def __init__(self, statements): + self.statements = statements + + def __repr__(self): + return "\n".join([repr(i) for i in self.statements]) + +class ArrayConstant(Node): + def __init__(self, items): + self.items = items + + def wrap(self, space): + return space.newlist([item.wrap(space) for item in self.items]) + + def execute(self, interp): + w_list = self.wrap(interp.space) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return "[" + ", ".join([repr(item) for item in self.items]) + "]" + +class SliceConstant(Node): + def __init__(self): + pass + + def __repr__(self): + return 'slice()' + +class Execute(Node): + def __init__(self, expr): + self.expr = expr + + def __repr__(self): + return repr(self.expr) + + def execute(self, interp): + interp.results.append(self.expr.execute(interp)) + +class FunctionCall(Node): + def __init__(self, name, args): + self.name = name + self.args = args + + def __repr__(self): + return "%s(%s)" % (self.name, ", ".join([repr(arg) + for arg in self.args])) + + def execute(self, interp): + if self.name in SINGLE_ARG_FUNCTIONS: + if len(self.args) != 1: + raise ArgumentMismatch + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray + if self.name == "sum": + w_res = arr.descr_sum(interp.space) + elif self.name == "prod": + w_res = arr.descr_prod(interp.space) + elif self.name == "max": + w_res = arr.descr_max(interp.space) + elif self.name == "min": + w_res = arr.descr_min(interp.space) + elif self.name == "any": + w_res = arr.descr_any(interp.space) + elif self.name == "all": + w_res = arr.descr_all(interp.space) + elif self.name == "unegative": + neg = interp_ufuncs.get(interp.space).negative + w_res = neg.call(interp.space, [arr]) + else: + assert False # unreachable code + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = interp.space.fromcache(W_Float64Dtype) + elif isinstance(w_res, BoolObject): + dtype = interp.space.fromcache(W_BoolDtype) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) + else: + raise WrongFunctionName + +class Parser(object): + def parse_identifier(self, id): + id = id.strip(" ") + #assert id.isalpha() + return Variable(id) + + def parse_expression(self, expr): + tokens = [i for i in expr.split(" ") if i] + if len(tokens) == 1: + return self.parse_constant_or_identifier(tokens[0]) + stack = [] + tokens.reverse() + while tokens: + token = tokens.pop() + if token == ')': + raise NotImplementedError + elif self.is_identifier_or_const(token): + if stack: + name = stack.pop().name + lhs = stack.pop() + rhs = self.parse_constant_or_identifier(token) + stack.append(Operator(lhs, name, rhs)) + else: + stack.append(self.parse_constant_or_identifier(token)) + else: + stack.append(Variable(token)) + assert len(stack) == 1 + return stack[-1] + + def parse_constant(self, v): + lgt = len(v)-1 + assert lgt >= 0 + if ':' in v: + # a slice + assert v == ':' + return SliceConstant() + if v[0] == '[': + return ArrayConstant([self.parse_constant(elem) + for elem in v[1:lgt].split(",")]) + if v[0] == '|': + return RangeConstant(v[1:lgt]) + return FloatConstant(v) + + def is_identifier_or_const(self, v): + c = v[0] + if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or + (c >= '0' and c <= '9') or c in '-.[|:'): + if v == '-' or v == "->": + return False + return True + return False + + def parse_function_call(self, v): + l = v.split('(') + assert len(l) == 2 + name = l[0] + cut = len(l[1]) - 1 + assert cut >= 0 + args = [self.parse_constant_or_identifier(id) + for id in l[1][:cut].split(",")] + return FunctionCall(name, args) + + def parse_constant_or_identifier(self, v): + c = v[0] + if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): + if '(' in v: + return self.parse_function_call(v) + return self.parse_identifier(v) + return self.parse_constant(v) + + def parse_array_subscript(self, v): + v = v.strip(" ") + l = v.split("[") + lgt = len(l[1]) - 1 + assert lgt >= 0 + rhs = self.parse_constant_or_identifier(l[1][:lgt]) + return l[0], rhs + + def parse_statement(self, line): + if '=' in line: + lhs, rhs = line.split("=") + lhs = lhs.strip(" ") + if '[' in lhs: + name, index = self.parse_array_subscript(lhs) + return ArrayAssignment(name, index, self.parse_expression(rhs)) + else: + return Assignment(lhs, self.parse_expression(rhs)) + else: + return Execute(self.parse_expression(line)) + + def parse(self, code): + statements = [] + for line in code.split("\n"): + if '#' in line: + line = line.split('#', 1)[0] + line = line.strip(" ") + if line: + statements.append(self.parse_statement(line)) + return Code(statements) + +def numpy_compile(code): + parser = Parser() + return InterpreterState(parser.parse(code)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -108,6 +108,12 @@ def setitem_w(self, space, storage, i, w_item): self.setitem(storage, i, self.unwrap(space, w_item)) + def fill(self, storage, item, start, stop): + storage = self.unerase(storage) + item = self.unbox(item) + for i in xrange(start, stop): + storage[i] = item + @specialize.argtype(1) def adapt_val(self, val): return self.box(rffi.cast(TP.TO.OF, val)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,6 +14,27 @@ any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'j', 'step', 'stop', 'source', 'dest']) +def descr_new_array(space, w_subtype, w_size_or_iterable, w_dtype=None): + l = space.listview(w_size_or_iterable) + if space.is_w(w_dtype, space.w_None): + w_dtype = None + for w_item in l: + w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) + if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + break + if w_dtype is None: + w_dtype = space.w_None + + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) + ) + arr = SingleDimArray(len(l), dtype=dtype) + i = 0 + for w_elem in l: + dtype.setitem_w(space, arr.storage, i, w_elem) + i += 1 + return arr + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature"] @@ -32,27 +53,6 @@ def add_invalidates(self, other): self.invalidates.append(other) - def descr__new__(space, w_subtype, w_size_or_iterable, w_dtype=None): - l = space.listview(w_size_or_iterable) - if space.is_w(w_dtype, space.w_None): - w_dtype = None - for w_item in l: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): - break - if w_dtype is None: - w_dtype = space.w_None - - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = SingleDimArray(len(l), dtype=dtype) - i = 0 - for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) - i += 1 - return arr - def _unaryop_impl(ufunc_name): def impl(self, space): return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) @@ -565,13 +565,12 @@ arr = SingleDimArray(size, dtype=dtype) one = dtype.adapt_val(1) - for i in xrange(size): - arr.dtype.setitem(arr.storage, i, one) + arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) BaseArray.typedef = TypeDef( 'numarray', - __new__ = interp2app(BaseArray.descr__new__.im_func), + __new__ = interp2app(descr_new_array), __len__ = interp2app(BaseArray.descr_len), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -32,11 +32,17 @@ return self.identity.wrap(space) def descr_call(self, space, __args__): - try: - args_w = __args__.fixedunpack(self.argcount) - except ValueError, e: - raise OperationError(space.w_TypeError, space.wrap(str(e))) - return self.call(space, args_w) + if __args__.keywords or len(__args__.arguments_w) < self.argcount: + raise OperationError(space.w_ValueError, + space.wrap("invalid number of arguments") + ) + elif len(__args__.arguments_w) > self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar @@ -236,22 +242,20 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - w_type = space.type(w_obj) - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) long_dtype = space.fromcache(interp_dtype.W_LongDtype) int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - if space.is_w(w_type, space.w_bool): + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype return current_guess - elif space.is_w(w_type, space.w_int): + elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype): return long_dtype return current_guess - elif space.is_w(w_type, space.w_long): + elif space.isinstance_w(w_obj, space.w_long): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_compile.py @@ -0,0 +1,170 @@ + +import py +from pypy.module.micronumpy.compile import * + +class TestCompiler(object): + def compile(self, code): + return numpy_compile(code) + + def test_vars(self): + code = """ + a = 2 + b = 3 + """ + interp = self.compile(code) + assert isinstance(interp.code.statements[0], Assignment) + assert interp.code.statements[0].name == 'a' + assert interp.code.statements[0].expr.v == 2 + assert interp.code.statements[1].name == 'b' + assert interp.code.statements[1].expr.v == 3 + + def test_array_literal(self): + code = "a = [1,2,3]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [FloatConstant(1), FloatConstant(2), + FloatConstant(3)] + + def test_array_literal2(self): + code = "a = [[1],[2],[3]]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [ArrayConstant([FloatConstant(1)]), + ArrayConstant([FloatConstant(2)]), + ArrayConstant([FloatConstant(3)])] + + def test_expr_1(self): + code = "b = a + 1" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Variable("a"), "+", FloatConstant(1))) + + def test_expr_2(self): + code = "b = a + b - 3" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Operator(Variable("a"), "+", Variable("b")), "-", + FloatConstant(3))) + + def test_expr_3(self): + # an equivalent of range + code = "a = |20|" + interp = self.compile(code) + assert interp.code.statements[0].expr == RangeConstant(20) + + def test_expr_only(self): + code = "3 + a" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(FloatConstant(3), "+", Variable("a"))) + + def test_array_access(self): + code = "a -> 3" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(Variable("a"), "->", FloatConstant(3))) + + def test_function_call(self): + code = "sum(a)" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + FunctionCall("sum", [Variable("a")])) + + def test_comment(self): + code = """ + # some comment + a = b + 3 # another comment + """ + interp = self.compile(code) + assert interp.code.statements[0] == Assignment( + 'a', Operator(Variable('b'), "+", FloatConstant(3))) + +class TestRunner(object): + def run(self, code): + interp = numpy_compile(code) + space = FakeSpace() + interp.run(space) + return interp + + def test_one(self): + code = """ + a = 3 + b = 4 + a + b + """ + interp = self.run(code) + assert sorted(interp.variables.keys()) == ['a', 'b'] + assert interp.results[0] + + def test_array_add(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b + """ + interp = self.run(code) + assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + + def test_array_getitem(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 3 + 6 + + def test_range_getitem(self): + code = """ + r = |20| + 3 + r -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 6 + + def test_sum(self): + code = """ + a = [1,2,3,4,5] + r = sum(a) + r + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_array_write(self): + code = """ + a = [1,2,3,4,5] + a[3] = 15 + a -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_min(self): + interp = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert interp.results[0].value.val == -24 + + def test_max(self): + interp = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert interp.results[0].value.val == 256 + + def test_slice(self): + py.test.skip("in progress") + interp = self.run(""" + a = [1,2,3,4] + b = a -> : + b -> 3 + """) + assert interp.results[0].value.val == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -36,37 +36,40 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + import numpy - a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + a = numpy.array([0, 1, 2, 2.5], dtype='?') + assert a[0] is numpy.False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is numpy.True_ def test_copy_array_with_dtype(self): - from numpy import array - a = array([0, 1, 2, 3], dtype=long) + import numpy + + a = numpy.array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + a = numpy.array([0, 1, 2, 3], dtype=bool) + assert a[0] is numpy.False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is numpy.False_ def test_zeros_bool(self): - from numpy import zeros - a = zeros(10, dtype=bool) + import numpy + + a = numpy.zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is numpy.False_ def test_ones_bool(self): - from numpy import ones - a = ones(10, dtype=bool) + import numpy + + a = numpy.ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is numpy.True_ def test_zeros_long(self): from numpy import zeros @@ -77,7 +80,7 @@ def test_ones_long(self): from numpy import ones - a = ones(10, dtype=bool) + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 @@ -96,8 +99,9 @@ def test_bool_binop_types(self): from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -214,7 +214,7 @@ def test_add_other(self): from numpy import array a = array(range(5)) - b = array(reversed(range(5))) + b = array(range(4, -1, -1)) c = a + b for i in range(5): assert c[i] == 4 @@ -264,18 +264,19 @@ assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpy + + a = numpy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpy.dtype(bool) + assert b[0] is numpy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpy.True_ def test_mul_constant(self): from numpy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -24,10 +24,10 @@ def test_wrong_arguments(self): from numpy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): from numpy import negative, sign, minimum @@ -82,6 +82,8 @@ b = negative(a) a[0] = 5.0 assert b[0] == 5.0 + a = array(range(30)) + assert negative(a + a)[3] == -6 def test_abs(self): from numpy import array, absolute @@ -355,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,253 +1,195 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature -from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject) -from pypy.module.micronumpy.interp_dtype import W_Int32Dtype, W_Float64Dtype, W_Int64Dtype, W_UInt64Dtype -from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, - SingleDimSlice, scalar_w) +from pypy.module.micronumpy.compile import (FakeSpace, + FloatObject, IntObject, numpy_compile, BoolObject) +from pypy.module.micronumpy.interp_numarray import (SingleDimArray, + SingleDimSlice) from pypy.rlib.nonconst import NonConstant -from pypy.rpython.annlowlevel import llstr -from pypy.rpython.test.test_llinterp import interpret +from pypy.rpython.annlowlevel import llstr, hlstr +from pypy.jit.metainterp.warmspot import reset_stats +from pypy.jit.metainterp import pyjitpl import py class TestNumpyJIt(LLJitMixin): - def setup_class(cls): - cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - cls.int64_dtype = cls.space.fromcache(W_Int64Dtype) - cls.uint64_dtype = cls.space.fromcache(W_UInt64Dtype) - cls.int32_dtype = cls.space.fromcache(W_Int32Dtype) + graph = None + interp = None + + def run(self, code): + space = FakeSpace() + + def f(code): + interp = numpy_compile(hlstr(code)) + interp.run(space) + res = interp.results[-1] + w_res = res.eval(0).wrap(interp.space) + if isinstance(w_res, BoolObject): + return float(w_res.boolval) + elif isinstance(w_res, FloatObject): + return w_res.floatval + elif isinstance(w_res, IntObject): + return w_res.intval + else: + return -42. + + if self.graph is None: + interp, graph = self.meta_interp(f, [llstr(code)], + listops=True, + backendopt=True, + graph_and_interp_only=True) + self.__class__.interp = interp + self.__class__.graph = graph + + reset_stats() + pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() + return self.interp.eval_graph(self.graph, [llstr(code)]) def test_add(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ar, ar]) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + b -> 3 + """) self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) - assert result == f(5) + assert result == 3 + 3 def test_floatadd(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ - ar, - scalar_w(self.space, self.float64_dtype, self.space.wrap(4.5)) - ], - ) - assert isinstance(v, BaseArray) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + 3 + a -> 3 + """) + assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_sum(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + sum(b) + """) + assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_prod(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_prod(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + prod(b) + """) + expected = 1 + for i in range(30): + expected *= i * 2 + assert result == expected self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_max(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_max(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert result == 256 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_gt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, - "guard_false": 1, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_min(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_min(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert result == -24 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_argmin(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - return ar.descr_add(space, ar).descr_argmin(space).intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_all(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(1.0)) - j += 1 - return ar.descr_add(space, ar).descr_all(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, - "int_lt": 1, "guard_true": 2, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_any(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - return ar.descr_add(space, ar).descr_any(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = [0,0,0,0,0,0,0,0,0,0,0] + a[8] = -12 + b = a + a + any(b) + """) + assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, "guard_false": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) + "float_ne": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1, + "guard_false": 1}) def test_already_forced(self): - space = self.space - - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - assert isinstance(v1, BaseArray) - v2 = interp_ufuncs.get(self.space).multiply.call(space, [v1, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - v1.force_if_needed() - assert isinstance(v2, BaseArray) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + 4.5 + b -> 5 # forces + c = b * 8 + c -> 5 + """) + assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - assert result == f(5) def test_ufunc(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + """) + assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - assert result == f(5) - def test_appropriate_specialization(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - for i in xrange(5): - v1 = interp_ufuncs.get(self.space).multiply.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - self.meta_interp(f, [5], listops=True, backendopt=True) + def test_specialization(self): + self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + """) # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) + +class TestNumpyOld(LLJitMixin): + def setup_class(cls): + from pypy.module.micronumpy.compile import FakeSpace + from pypy.module.micronumpy.interp_dtype import W_Float64Dtype + + cls.space = FakeSpace() + cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) + def test_slice(self): def f(i): step = 3 @@ -332,17 +274,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) -class TestTranslation(object): - def test_compile(self): - x = numpy_compile('aa+f*f/a-', 10) - x = x.compute() - assert isinstance(x, SingleDimArray) - assert x.size == 10 - assert x.eval(0).val == 0 - assert x.eval(1).val == ((1 + 1) * 1.2) / 1.2 - 1 - - def test_translation(self): - # we import main to check if the target compiles - from pypy.translator.goal.targetnumpystandalone import main - - interpret(main, [llstr('af+'), 100]) diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -465,3 +465,25 @@ setfield_gc(p4, p22, descr=) jump(p0, p1, p2, p3, p4, p7, p22, p7, descr=) """) + + def test_kwargs_virtual(self): + def main(n): + def g(**kwargs): + return kwargs["x"] + 1 + + i = 0 + while i < n: + i = g(x=i) + return i + + log = self.run(main, [500]) + assert log.result == 500 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i2 = int_lt(i0, i1) + guard_true(i2, descr=...) + i3 = force_token() + i4 = int_add(i0, 1) + --TICK-- + jump(..., descr=...) + """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -44,7 +44,7 @@ # gc_id call is hoisted out of the loop, the id of a value obviously # can't change ;) assert loop.match_by_id("getitem", """ - i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_objectPtr_Signed), p18, p6, i25, descr=...) + i26 = call(ConstClass(ll_dict_lookup), p18, p6, i25, descr=...) ... p33 = getinteriorfield_gc(p31, i26, descr=>) ... @@ -69,4 +69,51 @@ i9 = int_add(i5, 1) --TICK-- jump(..., descr=...) + """) + + def test_non_virtual_dict(self): + def main(n): + i = 0 + while i < n: + d = {str(i): i} + i += d[str(i)] - i + 1 + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i8 = int_lt(i5, i7) + guard_true(i8, descr=...) + guard_not_invalidated(descr=...) + p10 = call(ConstClass(ll_int_str), i5, descr=) + guard_no_exception(descr=...) + i12 = call(ConstClass(ll_strhash), p10, descr=) + p13 = new(descr=...) + p15 = new_array(8, descr=) + setfield_gc(p13, p15, descr=) + i17 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + setfield_gc(p13, 16, descr=) + guard_no_exception(descr=...) + p20 = new_with_vtable(ConstClass(W_IntObject)) + call(ConstClass(_ll_dict_setitem_lookup_done_trampoline), p13, p10, p20, i12, i17, descr=) + setfield_gc(p20, i5, descr=) + guard_no_exception(descr=...) + i23 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + guard_no_exception(descr=...) + i26 = int_and(i23, .*) + i27 = int_is_true(i26) + guard_false(i27, descr=...) + p28 = getfield_gc(p13, descr=) + p29 = getinteriorfield_gc(p28, i23, descr=>) + guard_nonnull_class(p29, ConstClass(W_IntObject), descr=...) + i31 = getfield_gc_pure(p29, descr=) + i32 = int_sub_ovf(i31, i5) + guard_no_overflow(descr=...) + i34 = int_add_ovf(i32, 1) + guard_no_overflow(descr=...) + i35 = int_add_ovf(i5, i34) + guard_no_overflow(descr=...) + --TICK-- + jump(p0, p1, p2, p3, p4, i35, p13, i7, descr=) """) \ No newline at end of file diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -246,8 +246,8 @@ @unwrap_spec(secs=float) def sleep(space, secs): if secs < 0: - raise space.OperationError(space.w_IOError, - space.wrap("Invalid argument: negative time in sleep")) + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) pytime.sleep(secs) else: from pypy.rlib import rwin32 @@ -269,8 +269,8 @@ @unwrap_spec(secs=float) def sleep(space, secs): if secs < 0: - raise space.OperationError(space.w_IOError, - space.wrap("Invalid argument: negative time in sleep")) + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) # as decreed by Guido, only the main thread can be # interrupted. main_thread = space.fromcache(State).main_thread diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -20,7 +20,7 @@ import sys import os raises(TypeError, rctime.sleep, "foo") - rctime.sleep(1.2345) + rctime.sleep(0.12345) raises(IOError, rctime.sleep, -1.0) def test_clock(self): diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -546,6 +546,12 @@ # Try to return int. return space.newtuple([space.int(w_num), space.int(w_den)]) +def float_is_integer__Float(space, w_float): + v = w_float.floatval + if not rfloat.isfinite(v): + return space.w_False + return space.wrap(math.floor(v) == v) + from pypy.objspace.std import floattype register_all(vars(), floattype) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -12,6 +12,7 @@ float_as_integer_ratio = SMM("as_integer_ratio", 1) +float_is_integer = SMM("is_integer", 1) float_hex = SMM("hex", 1) def descr_conjugate(space, w_float): diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,11 +83,12 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) + interplevel_classes = {} for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: + if len(classes) >= 3: # XXX what does this 3 mean??! # W_Root, AnyXxx and actual object - self.gettypefor(type).interplevel_cls = classes[0][0] - + interplevel_classes[self.gettypefor(type)] = classes[0][0] + self._interplevel_classes = interplevel_classes def get_builtin_types(self): return self.builtin_types @@ -579,7 +580,7 @@ raise OperationError(self.w_TypeError, self.wrap("need type object")) if is_annotation_constant(w_type): - cls = w_type.interplevel_cls + cls = self._get_interplevel_cls(w_type) if cls is not None: assert w_inst is not None if isinstance(w_inst, cls): @@ -589,3 +590,9 @@ @specialize.arg_or_var(2) def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + + @specialize.memo() + def _get_interplevel_cls(self, w_type): + if not hasattr(self, "_interplevel_classes"): + return None # before running initialize + return self._interplevel_classes.get(w_type, None) diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -12,6 +12,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rarithmetic import r_uint from pypy.tool.sourcetools import func_with_new_name +from pypy.objspace.std.inttype import wrapint class W_SmallIntObject(W_Object, UnboxedValue): __slots__ = 'intval' @@ -48,14 +49,36 @@ def delegate_SmallInt2Complex(space, w_small): return space.newcomplex(float(w_small.intval), 0.0) +def add__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval + w_b.intval) # cannot overflow + +def sub__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval - w_b.intval) # cannot overflow + +def floordiv__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval // w_b.intval) # cannot overflow + +div__SmallInt_SmallInt = floordiv__SmallInt_SmallInt + +def mod__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval % w_b.intval) # cannot overflow + +def divmod__SmallInt_SmallInt(space, w_a, w_b): + w = wrapint(space, w_a.intval // w_b.intval) # cannot overflow + z = wrapint(space, w_a.intval % w_b.intval) + return space.newtuple([w, z]) + def copy_multimethods(ns): """Copy integer multimethods for small int.""" for name, func in intobject.__dict__.iteritems(): if "__Int" in name: new_name = name.replace("Int", "SmallInt") - # Copy the function, so the annotator specializes it for - # W_SmallIntObject. - ns[new_name] = func_with_new_name(func, new_name) + if new_name not in ns: + # Copy the function, so the annotator specializes it for + # W_SmallIntObject. + ns[new_name] = func = func_with_new_name(func, new_name, globals=ns) + else: + ns[name] = func ns["get_integer"] = ns["pos__SmallInt"] = ns["int__SmallInt"] ns["get_negint"] = ns["neg__SmallInt"] diff --git a/pypy/objspace/std/test/test_floatobject.py b/pypy/objspace/std/test/test_floatobject.py --- a/pypy/objspace/std/test/test_floatobject.py +++ b/pypy/objspace/std/test/test_floatobject.py @@ -63,6 +63,12 @@ def setup_class(cls): cls.w_py26 = cls.space.wrap(sys.version_info >= (2, 6)) + def test_isinteger(self): + assert (1.).is_integer() + assert not (1.1).is_integer() + assert not float("inf").is_integer() + assert not float("nan").is_integer() + def test_conjugate(self): assert (1.).conjugate() == 1. assert (-1.).conjugate() == -1. @@ -782,4 +788,4 @@ # divide by 0 raises(ZeroDivisionError, lambda: inf % 0) raises(ZeroDivisionError, lambda: inf // 0) - raises(ZeroDivisionError, divmod, inf, 0) \ No newline at end of file + raises(ZeroDivisionError, divmod, inf, 0) diff --git a/pypy/objspace/std/test/test_obj.py b/pypy/objspace/std/test/test_obj.py --- a/pypy/objspace/std/test/test_obj.py +++ b/pypy/objspace/std/test/test_obj.py @@ -102,3 +102,11 @@ def __repr__(self): return 123456 assert A().__str__() == 123456 + +def test_isinstance_shortcut(): + from pypy.objspace.std import objspace + space = objspace.StdObjSpace() + w_a = space.wrap("a") + space.type = None + space.isinstance_w(w_a, space.w_str) # does not crash + diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -14,11 +14,11 @@ def test_int_w_non_int(self): raises(OperationError,self.space.int_w,self.space.wrap(None)) - raises(OperationError,self.space.int_w,self.space.wrap("")) + raises(OperationError,self.space.int_w,self.space.wrap("")) def test_uint_w_non_int(self): raises(OperationError,self.space.uint_w,self.space.wrap(None)) - raises(OperationError,self.space.uint_w,self.space.wrap("")) + raises(OperationError,self.space.uint_w,self.space.wrap("")) def test_multimethods_defined_on(self): from pypy.objspace.std.stdtypedef import multimethods_defined_on @@ -49,14 +49,14 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject - + space = self.space - assert space.w_str.interplevel_cls is W_StringObject - assert space.w_int.interplevel_cls is W_IntObject + assert space._get_interplevel_cls(space.w_str) is W_StringObject + assert space._get_interplevel_cls(space.w_int) is W_IntObject class X(W_StringObject): def __init__(self): pass - + typedef = None assert space.isinstance_w(X(), space.w_str) diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -102,7 +102,6 @@ 'instancetypedef', 'terminator', '_version_tag?', - 'interplevel_cls', ] # for config.objspace.std.getattributeshortcut @@ -117,9 +116,6 @@ # of the __new__ is an instance of the type w_bltin_new = None - interplevel_cls = None # not None for prebuilt instances of - # interpreter-level types - @dont_look_inside def __init__(w_self, space, name, bases_w, dict_w, overridetypedef=None): diff --git a/pypy/rlib/rsre/rsre_core.py b/pypy/rlib/rsre/rsre_core.py --- a/pypy/rlib/rsre/rsre_core.py +++ b/pypy/rlib/rsre/rsre_core.py @@ -391,6 +391,8 @@ if self.num_pending >= min: while enum is not None and ptr == ctx.match_end: enum = enum.move_to_next_result(ctx) + # matched marks for zero-width assertions + marks = ctx.match_marks # if enum is not None: # matched one more 'item'. record it and continue. diff --git a/pypy/rlib/rsre/test/test_re.py b/pypy/rlib/rsre/test/test_re.py --- a/pypy/rlib/rsre/test/test_re.py +++ b/pypy/rlib/rsre/test/test_re.py @@ -226,6 +226,13 @@ (None, 'b', None)) assert pat.match('ac').group(1, 'b2', 3) == ('a', None, 'c') + def test_bug_923(self): + # Issue923: grouping inside optional lookahead problem + assert re.match(r'a(?=(b))?', "ab").groups() == ("b",) + assert re.match(r'(a(?=(b))?)', "ab").groups() == ('a', 'b') + assert re.match(r'(a)(?=(b))?', "ab").groups() == ('a', 'b') + assert re.match(r'(?Pa)(?=(?Pb))?', "ab").groupdict() == {'g1': 'a', 'g2': 'b'} + def test_re_groupref_exists(self): assert re.match('^(\()?([^()]+)(?(1)\))$', '(a)').groups() == ( ('(', 'a')) diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1713,6 +1713,7 @@ return v def setitem(self, index, value): + assert typeOf(value) == self._TYPE.OF self.items[index] = value assert not '__dict__' in dir(_array) diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -445,9 +445,9 @@ i = ll_dict_lookup(d, key, hash) return _ll_dict_setitem_lookup_done(d, key, value, hash, i) -# Leaving as dont_look_inside ATM, it has a few branches which could lead to -# many bridges if we don't consider their possible frequency. - at jit.dont_look_inside +# It may be safe to look inside always, it has a few branches though, and their +# frequencies needs to be investigated. + at jit.look_inside_iff(lambda d, key, value, hash, i: jit.isvirtual(d) and jit.isconstant(key)) def _ll_dict_setitem_lookup_done(d, key, value, hash, i): valid = (i & HIGHEST_BIT) == 0 i = i & MASK @@ -533,7 +533,7 @@ # ------- a port of CPython's dictobject.c's lookdict implementation ------- PERTURB_SHIFT = 5 - at jit.dont_look_inside + at jit.look_inside_iff(lambda d, key, hash: jit.isvirtual(d) and jit.isconstant(key)) def ll_dict_lookup(d, key, hash): entries = d.entries ENTRIES = lltype.typeOf(entries).TO diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -20,6 +20,7 @@ from pypy.rpython.rmodel import Repr from pypy.rpython.lltypesystem import llmemory from pypy.tool.sourcetools import func_with_new_name +from pypy.rpython.lltypesystem.lloperation import llop # ____________________________________________________________ # @@ -364,8 +365,10 @@ while lpos < rpos and s.chars[lpos] == ch: lpos += 1 if right: - while lpos < rpos and s.chars[rpos] == ch: + while lpos < rpos + 1 and s.chars[rpos] == ch: rpos -= 1 + if rpos < lpos: + return s.empty() r_len = rpos - lpos + 1 result = s.malloc(r_len) s.copy_contents(s, result, lpos, 0, r_len) diff --git a/pypy/rpython/test/test_rstr.py b/pypy/rpython/test/test_rstr.py --- a/pypy/rpython/test/test_rstr.py +++ b/pypy/rpython/test/test_rstr.py @@ -372,12 +372,20 @@ return const('!ab!').lstrip(const('!')) def right(): return const('!ab!').rstrip(const('!')) + def empty(): + return const(' ').strip(' ') + def left2(): + return const('a ').strip(' ') res = self.interpret(both, []) assert self.ll_to_string(res) == const('ab') res = self.interpret(left, []) assert self.ll_to_string(res) == const('ab!') res = self.interpret(right, []) assert self.ll_to_string(res) == const('!ab') + res = self.interpret(empty, []) + assert self.ll_to_string(res) == const('') + res = self.interpret(left2, []) + assert self.ll_to_string(res) == const('a') def test_upper(self): const = self.const diff --git a/pypy/tool/sourcetools.py b/pypy/tool/sourcetools.py --- a/pypy/tool/sourcetools.py +++ b/pypy/tool/sourcetools.py @@ -216,9 +216,11 @@ # ____________________________________________________________ -def func_with_new_name(func, newname): +def func_with_new_name(func, newname, globals=None): """Make a renamed copy of a function.""" - f = new.function(func.func_code, func.func_globals, + if globals is None: + globals = func.func_globals + f = new.function(func.func_code, globals, newname, func.func_defaults, func.func_closure) if func.func_dict: diff --git a/pypy/translator/c/primitive.py b/pypy/translator/c/primitive.py --- a/pypy/translator/c/primitive.py +++ b/pypy/translator/c/primitive.py @@ -144,14 +144,19 @@ obj = value._obj if isinstance(obj, int): # a tagged pointer - assert obj & 1 == 1 - return '((%s) %d)' % (cdecl("void*", ''), obj) + return _name_tagged(obj, db) realobj = obj.container + if isinstance(realobj, int): + return _name_tagged(realobj, db) realvalue = cast_opaque_ptr(Ptr(typeOf(realobj)), value) return db.get(realvalue) else: return 'NULL' +def _name_tagged(obj, db): + assert obj & 1 == 1 + return '((%s) %d)' % (cdecl("void*", ''), obj) + def name_small_integer(value, db): """Works for integers of size at most INT or UINT.""" if isinstance(value, Symbolic): diff --git a/pypy/translator/c/test/test_rtagged.py b/pypy/translator/c/test/test_rtagged.py --- a/pypy/translator/c/test/test_rtagged.py +++ b/pypy/translator/c/test/test_rtagged.py @@ -77,3 +77,12 @@ data = g.read() g.close() assert data.rstrip().endswith('ALL OK') + +def test_name_gcref(): + from pypy.rpython.lltypesystem import lltype, llmemory, rclass + from pypy.translator.c import primitive + from pypy.translator.c.database import LowLevelDatabase + x = lltype.cast_int_to_ptr(rclass.OBJECTPTR, 19) + y = lltype.cast_opaque_ptr(llmemory.GCREF, x) + db = LowLevelDatabase() + assert primitive.name_gcref(y, db) == "((void*) 19)" From noreply at buildbot.pypy.org Wed Nov 2 18:53:45 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 2 Nov 2011 18:53:45 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20111102175345.0D2D1820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48671:aafe34e70361 Date: 2011-11-02 18:53 +0100 http://bitbucket.org/pypy/pypy/changeset/aafe34e70361/ Log: hg merge diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith(" nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,9 +170,14 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -4714,11 +4715,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4726,21 +4727,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4749,6 +4770,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -4783,6 +4783,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -294,7 +294,12 @@ optforce.emit_operation(self.source_op) self.box = box = self.source_op.result for index in range(len(self._items)): - for descr, value in self._items[index].iteritems(): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: subbox = value.force_box(optforce) op = ResOperation(rop.SETINTERIORFIELD_GC, [box, ConstInt(index), subbox], None, descr=descr diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3513,7 +3513,9 @@ def f(n): while n > 0: myjitdriver.jit_merge_point(n=n) - n = g({"key": n}) + x = {"key": n} + n = g(x) + del x["key"] return n res = self.meta_interp(f, [10]) @@ -3559,6 +3561,34 @@ assert res == 0 self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3613,7 +3643,9 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): eraseX, uneraseX = rerased.new_erasing_pair("X") @@ -3638,10 +3670,11 @@ else: return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -835,7 +835,7 @@ a.append(3.0) r = weakref.ref(a, lambda a: l.append(a())) del a - gc.collect() + gc.collect(); gc.collect() # XXX needs two of them right now... assert l assert l[0] is None or len(l[0]) == 0 diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -54,7 +54,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -69,19 +69,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -89,7 +81,6 @@ from pypy.objspace.std import iterobject from pypy.objspace.std import unicodeobject from pypy.objspace.std import dictproxyobject - from pypy.objspace.std import rangeobject from pypy.objspace.std import proxyobject from pypy.objspace.std import fake import pypy.objspace.std.default # register a few catch-all multimethods @@ -141,7 +132,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -167,6 +163,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -189,6 +186,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -220,7 +218,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -230,6 +230,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -237,6 +238,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -244,6 +246,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -251,11 +254,13 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withrangelist: + from pypy.objspace.std import rangeobject self.typeorder[rangeobject.W_RangeListObject] += [ (listobject.W_ListObject, rangeobject.delegate_range2list), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -414,7 +414,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -422,7 +422,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -801,6 +801,20 @@ l.__delslice__(0, 2) assert l == [3, 4] + def test_list_from_set(self): + l = ['a'] + l.__init__(set('b')) + assert l == ['b'] + + def test_list_from_generator(self): + l = ['a'] + g = (i*i for i in range(5)) + l.__init__(g) + assert l == [0, 1, 4, 9, 16] + l.__init__(g) + assert l == [] + assert list(g) == [] + class AppTestListFastSubscr: diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -108,15 +108,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -127,7 +122,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -921,7 +921,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -975,26 +975,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1085,7 +1080,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -78,7 +78,7 @@ from pypy.rlib.rwin32 import HANDLE, LPHANDLE from pypy.rlib.rwin32 import NULL_HANDLE, INVALID_HANDLE_VALUE from pypy.rlib.rwin32 import DWORD, WORD, DWORD_PTR, LPDWORD - from pypy.rlib.rwin32 import BOOL, LPVOID, LPCVOID, LPCSTR, SIZE_T + from pypy.rlib.rwin32 import BOOL, LPVOID, LPCSTR, SIZE_T from pypy.rlib.rwin32 import INT, LONG, PLONG # export the constants inside and outside. see __init__.py @@ -174,9 +174,9 @@ DuplicateHandle = winexternal('DuplicateHandle', [HANDLE, HANDLE, HANDLE, LPHANDLE, DWORD, BOOL, DWORD], BOOL) CreateFileMapping = winexternal('CreateFileMappingA', [HANDLE, rwin32.LPSECURITY_ATTRIBUTES, DWORD, DWORD, DWORD, LPCSTR], HANDLE) MapViewOfFile = winexternal('MapViewOfFile', [HANDLE, DWORD, DWORD, DWORD, SIZE_T], LPCSTR)##!!LPVOID) - UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCVOID], BOOL, + UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCSTR], BOOL, threadsafe=False) - FlushViewOfFile = winexternal('FlushViewOfFile', [LPCVOID, SIZE_T], BOOL) + FlushViewOfFile = winexternal('FlushViewOfFile', [LPCSTR, SIZE_T], BOOL) SetFilePointer = winexternal('SetFilePointer', [HANDLE, LONG, PLONG, DWORD], DWORD) SetEndOfFile = winexternal('SetEndOfFile', [HANDLE], BOOL) VirtualAlloc = winexternal('VirtualAlloc', diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -492,8 +492,8 @@ _ll_dict_del(d, i) # XXX: Move the size checking and resize into a single call which is opauqe to -# the JIT to avoid extra branches. - at jit.dont_look_inside +# the JIT when the dict isn't virtual, to avoid extra branches. + at jit.look_inside_iff(lambda d, i: jit.isvirtual(d) and jit.isconstant(i)) def _ll_dict_del(d, i): d.entries.mark_deleted(i) d.num_items -= 1 diff --git a/pypy/rpython/lltypesystem/rpbc.py b/pypy/rpython/lltypesystem/rpbc.py --- a/pypy/rpython/lltypesystem/rpbc.py +++ b/pypy/rpython/lltypesystem/rpbc.py @@ -116,7 +116,7 @@ fields.append((row.attrname, row.fntype)) kwds = {'hints': {'immutable': True}} return Ptr(Struct('specfunc', *fields, **kwds)) - + def create_specfunc(self): return malloc(self.lowleveltype.TO, immortal=True) @@ -149,7 +149,8 @@ self.descriptions = list(self.s_pbc.descriptions) if self.s_pbc.can_be_None: self.descriptions.insert(0, None) - POINTER_TABLE = Array(self.pointer_repr.lowleveltype) + POINTER_TABLE = Array(self.pointer_repr.lowleveltype, + hints={'nolength': True}) pointer_table = malloc(POINTER_TABLE, len(self.descriptions), immortal=True) for i, desc in enumerate(self.descriptions): @@ -302,7 +303,8 @@ if r_to in r_from._conversion_tables: return r_from._conversion_tables[r_to] else: - t = malloc(Array(Char), len(r_from.descriptions), immortal=True) + t = malloc(Array(Char, hints={'nolength': True}), + len(r_from.descriptions), immortal=True) l = [] for i, d in enumerate(r_from.descriptions): if d in r_to.descriptions: @@ -314,7 +316,7 @@ if l == range(len(r_from.descriptions)): r = None else: - r = inputconst(Ptr(Array(Char)), t) + r = inputconst(Ptr(Array(Char, hints={'nolength': True})), t) r_from._conversion_tables[r_to] = r return r @@ -402,12 +404,12 @@ # ____________________________________________________________ -##def rtype_call_memo(hop): +##def rtype_call_memo(hop): ## memo_table = hop.args_v[0].value ## if memo_table.s_result.is_constant(): ## return hop.inputconst(hop.r_result, memo_table.s_result.const) -## fieldname = memo_table.fieldname -## assert hop.nb_args == 2, "XXX" +## fieldname = memo_table.fieldname +## assert hop.nb_args == 2, "XXX" ## r_pbc = hop.args_r[1] ## assert isinstance(r_pbc, (MultipleFrozenPBCRepr, ClassesPBCRepr)) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -1850,6 +1850,9 @@ finalizer = self.getlightfinalizer(self.get_type_id(obj)) ll_assert(bool(finalizer), "no light finalizer found") finalizer(obj, llmemory.NULL) + else: + obj = self.get_forwarding_address(obj) + self.old_objects_with_light_finalizers.append(obj) def deal_with_old_objects_with_finalizers(self): """ This is a much simpler version of dealing with finalizers diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -105,9 +105,10 @@ llarena.arena_reserve(result, totalsize) self.init_gc_object(result, typeid16) self.free = result + totalsize - if is_finalizer_light: - self.objects_with_light_finalizers.append(result + size_gc_header) - elif has_finalizer: + #if is_finalizer_light: + # self.objects_with_light_finalizers.append(result + size_gc_header) + #else: + if has_finalizer: self.objects_with_finalizers.append(result + size_gc_header) if contains_weakptr: self.objects_with_weakrefs.append(result + size_gc_header) diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -53,6 +53,8 @@ if not pypy_c.check(): print pypy_c raise PyPyCNotFound('Please compile pypy first, using translate.py') + if sys.platform == 'win32' and not rename_pypy_c.lower().endswith('.exe'): + rename_pypy_c += '.exe' binaries = [(pypy_c, rename_pypy_c)] # if sys.platform == 'win32': diff --git a/pypy/tool/sourcetools.py b/pypy/tool/sourcetools.py --- a/pypy/tool/sourcetools.py +++ b/pypy/tool/sourcetools.py @@ -107,10 +107,8 @@ else: try: src = inspect.getsource(object) - except IOError: - return None - except IndentationError: - return None + except Exception: # catch IOError, IndentationError, and also rarely + return None # some other exceptions like IndexError if hasattr(name, "__sourceargs__"): return src % name.__sourceargs__ return src diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -755,7 +755,7 @@ int end = str.length(); if (left) { - while (start <= str.length() && str.charAt(start) == ch) start++; + while (start < str.length() && str.charAt(start) == ch) start++; } if (right) { From noreply at buildbot.pypy.org Wed Nov 2 19:00:14 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 2 Nov 2011 19:00:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use rlwinm for PPC32 zero-extend Message-ID: <20111102180014.E4D87820B3@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r48672:21ca802d37d8 Date: 2011-11-02 14:00 -0400 http://bitbucket.org/pypy/pypy/changeset/21ca802d37d8/ Log: Use rlwinm for PPC32 zero-extend diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -669,8 +669,7 @@ if size == 1: if not signed: #unsigned char if IS_PPC32: - self.mc.load_imm(r.r0, 0xFF) - self.mc.and_(resloc.value, resloc.value, r.r0.value) + self.mc.rlwinm(resloc.value, resloc.value, 0, 24, 31) else: self.mc.rldicl(resloc.value, resloc.value, 0, 56) else: @@ -678,9 +677,7 @@ elif size == 2: if not signed: if IS_PPC_32: - self.mc.load_imm(r.r0, 16) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.srw(resloc.value, resloc.value, r.r0.value) + self.mc.rlwinm(resloc.value, resloc.value, 0, 16, 31) else: self.mc.rldicl(resloc.value, resloc.value, 0, 48) else: From noreply at buildbot.pypy.org Wed Nov 2 19:12:27 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 19:12:27 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: When using a string list-strategy have the same behavior on str.join with one element lists. Message-ID: <20111102181227.BFBCA820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48673:b387640aa6ba Date: 2011-11-02 14:12 -0400 http://bitbucket.org/pypy/pypy/changeset/b387640aa6ba/ Log: When using a string list-strategy have the same behavior on str.join with one element lists. diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -344,6 +344,8 @@ def str_join__String_ANY(space, w_self, w_list): l = space.listview_str(w_list) if l is not None: + if len(l) == 1: + return space.wrap(l[0]) return space.wrap(w_self._value.join(l)) list_w = space.listview(w_list) size = len(list_w) diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py --- a/pypy/objspace/std/test/test_liststrategies.py +++ b/pypy/objspace/std/test/test_liststrategies.py @@ -367,12 +367,19 @@ w_l = self.space.newlist([self.space.wrap('a'), self.space.wrap('b')]) assert space.listview_str(w_l) == ["a", "b"] - def test_string_uses_listview_str(self): + def test_string_join_uses_listview_str(self): space = self.space w_l = self.space.newlist([self.space.wrap('a'), self.space.wrap('b')]) w_l.getitems = None assert space.str_w(space.call_method(space.wrap("c"), "join", w_l)) == "acb" + def test_string_join_returns_same_instance(self): + space = self.space + w_text = space.wrap("text") + w_l = self.space.newlist([w_text]) + w_l.getitems = None + assert space.is_w(space.call_method(space.wrap(" -- "), "join", w_l), w_text) + def test_newlist_str(self): space = self.space l = ['a', 'b'] diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -496,6 +496,7 @@ assert "-".join(['a', 'b']) == 'a-b' text = 'text' assert "".join([text]) == text + assert " -- ".join([text]) is text raises(TypeError, ''.join, 1) raises(TypeError, ''.join, [1]) raises(TypeError, ''.join, [[1]]) From noreply at buildbot.pypy.org Wed Nov 2 22:14:29 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 22:14:29 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: don't use the JIT strslice optimization if some of the characters are in an unknown state with regards to whether they're initialized Message-ID: <20111102211429.8D199820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48674:30af37ca7941 Date: 2011-11-02 17:14 -0400 http://bitbucket.org/pypy/pypy/changeset/30af37ca7941/ Log: don't use the JIT strslice optimization if some of the characters are in an unknown state with regards to whether they're initialized diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4225,6 +4225,27 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -505,11 +505,17 @@ # if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + # slicing with constant bounds of a VStringPlainValue, if any of + # the characters is unitialized we don't do this special slice, we + # do the regular copy contents. + for i in range(vstart.box.getint(), vstop.box.getint()): + if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: + break + else: + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), From noreply at buildbot.pypy.org Wed Nov 2 22:15:41 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 2 Nov 2011 22:15:41 +0100 (CET) Subject: [pypy-commit] pypy default: don't use the JIT strslice optimization if some of the characters are in an unknown state with regards to whether they're initialized Message-ID: <20111102211541.6B8E7820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r48675:3150cc438a42 Date: 2011-11-02 17:14 -0400 http://bitbucket.org/pypy/pypy/changeset/3150cc438a42/ Log: don't use the JIT strslice optimization if some of the characters are in an unknown state with regards to whether they're initialized diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4225,6 +4225,27 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -505,11 +505,17 @@ # if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + # slicing with constant bounds of a VStringPlainValue, if any of + # the characters is unitialized we don't do this special slice, we + # do the regular copy contents. + for i in range(vstart.box.getint(), vstop.box.getint()): + if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: + break + else: + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), From noreply at buildbot.pypy.org Wed Nov 2 23:32:30 2011 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 2 Nov 2011 23:32:30 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: fixes for translation Message-ID: <20111102223230.ED4AA820B3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim Changeset: r48676:2b3481fe7090 Date: 2011-11-03 00:31 +0200 http://bitbucket.org/pypy/pypy/changeset/2b3481fe7090/ Log: fixes for translation diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -250,8 +250,12 @@ concrete = self.get_concrete() res = "array(" res0 = NDimSlice(concrete, self.signature, [], self.shape).tostr(True, indent=' ') + #This is for numpy compliance: an empty slice reports its shape if res0=="[]" and isinstance(self,NDimSlice): - res0 += ", shape=%s"%(tuple(self.shape),) + res0 += ", shape=" + res1 = str(self.shape) + assert len(res1)>1 + res0 += '('+ res1[1:max(len(res1)-1,1)]+')' res += res0 dtype = concrete.find_dtype() if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and @@ -409,6 +413,7 @@ return scalar_w(space, dtype, w_obj) def scalar_w(space, dtype, w_obj): + assert isinstance(dtype, interp_dtype.W_Dtype) return Scalar(dtype, dtype.unwrap(space, w_obj)) class Scalar(BaseArray): @@ -670,9 +675,10 @@ ret = '' dtype = self.find_dtype() ndims = len(self.shape)#-self.shape_reduction - if any([s==0 for s in self.shape]): - ret += '[]' - return ret + for s in self.shape: + if s==0: + ret += '[]' + return ret if ndims>2: ret += '[' for i in range(self.shape[0]): From noreply at buildbot.pypy.org Thu Nov 3 10:24:39 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 10:24:39 +0100 (CET) Subject: [pypy-commit] pypy default: - add in the backend, for binary instructions, a memo function Message-ID: <20111103092439.9DD65820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48677:eb27c44ca6ad Date: 2011-11-03 08:21 +0100 http://bitbucket.org/pypy/pypy/changeset/eb27c44ca6ad/ Log: - add in the backend, for binary instructions, a memo function that returns True if there is any NAME_xy that could match. If it returns False we know the whole subcase can be omitted from translated code. Without this hack, the size of most _binaryop INSN functions ends up quite large in C code. - found out that a lot of instructions have a missing case on 64 bits, because INSN_m used to fall back to INSN_a if the constant offset doesn't fit in 32 bits --- but most instructions that have an 'm' form don't have an 'a' form. Fixed by generating an extra LEA and not falling back to the 'a' form. - location_code() is an indirect method call for no really good reason. Turn it into a monomorphic method that always read a _location_code attribute. diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -310,6 +305,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +337,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +349,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +449,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -217,8 +219,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +304,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) From noreply at buildbot.pypy.org Thu Nov 3 10:24:40 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 10:24:40 +0100 (CET) Subject: [pypy-commit] pypy default: Add tests for two special cases of "MOV" in INSN(). Message-ID: <20111103092440.CDF6B82A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48678:5478d1f631fa Date: 2011-11-03 08:36 +0100 http://bitbucket.org/pypy/pypy/changeset/5478d1f631fa/ Log: Add tests for two special cases of "MOV" in INSN(). diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -298,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -176,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 From noreply at buildbot.pypy.org Thu Nov 3 10:24:42 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 10:24:42 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111103092442.1B08F82A88@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48679:665b14e5263a Date: 2011-11-03 10:24 +0100 http://bitbucket.org/pypy/pypy/changeset/665b14e5263a/ Log: merge heads diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4225,6 +4225,27 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7355,6 +7355,26 @@ """ self.optimize_loop(ops, expected) + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -505,11 +505,17 @@ # if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + # slicing with constant bounds of a VStringPlainValue, if any of + # the characters is unitialized we don't do this special slice, we + # do the regular copy contents. + for i in range(vstart.box.getint(), vstop.box.getint()): + if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: + break + else: + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), From noreply at buildbot.pypy.org Thu Nov 3 10:38:54 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 10:38:54 +0100 (CET) Subject: [pypy-commit] pypy stm: Improve targetdemo. Message-ID: <20111103093854.BB0CA820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48680:755507f9382b Date: 2011-11-03 10:38 +0100 http://bitbucket.org/pypy/pypy/changeset/755507f9382b/ Log: Improve targetdemo. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -306,6 +306,7 @@ AroundFnPtr = lltype.Ptr(lltype.FuncType([], lltype.Void)) class AroundState: + _alloc_flavor_ = "raw" def _freeze_(self): self.before = None # or a regular RPython function self.after = None # or a regular RPython function diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -140,9 +140,17 @@ } } +static _Bool is_inevitable_or_inactive(struct tx_descriptor *d) +{ + return d->setjmp_buf == NULL; +} + static _Bool is_inevitable(struct tx_descriptor *d) { - return d->setjmp_buf == NULL; +#ifdef RPY_STM_ASSERT + assert(d->transaction_active); +#endif + return is_inevitable_or_inactive(d); } /*** run the redo log to commit a transaction, and release the locks */ @@ -249,6 +257,7 @@ assert(d->transaction_active); d->transaction_active = 0; #endif + d->setjmp_buf = NULL; } static void tx_cleanup(struct tx_descriptor *d) @@ -261,9 +270,10 @@ static void tx_restart(struct tx_descriptor *d) { + jmp_buf *env = d->setjmp_buf; tx_cleanup(d); tx_spinloop(0); - longjmp(*d->setjmp_buf, 1); + longjmp(*env, 1); } /*** increase the abort count and restart the transaction */ @@ -335,6 +345,30 @@ #ifdef USE_PTHREAD_MUTEX /* mutex: only to avoid busy-looping too much in tx_spinloop() below */ static pthread_mutex_t mutex_inevitable = PTHREAD_MUTEX_INITIALIZER; +# ifdef RPY_STM_ASSERT +void mutex_lock(void) +{ + unsigned long pself = (unsigned long)pthread_self(); + if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, + "%lx: mutex inev locking...\n", pself); + pthread_mutex_lock(&mutex_inevitable); + if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, + "%lx: mutex inev locked\n", pself); +} +void mutex_unlock(void) +{ + unsigned long pself = (unsigned long)pthread_self(); + pthread_mutex_unlock(&mutex_inevitable); + if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, + "%lx: mutex inev unlocked\n", pself); +} +# else +# define mutex_lock() pthread_mutex_lock(&mutex_inevitable) +# define mutex_unlock() pthread_mutex_unlock(&mutex_inevitable) +# endif +#else +# define mutex_lock() /* nothing */ +# define mutex_unlock() /* nothing */ #endif #ifdef COMMIT_OTHER_INEV @@ -436,10 +470,8 @@ d->start_time = curts - 1; } tx_spinloop(4); -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_lock(&mutex_inevitable); - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_lock(); + mutex_unlock(); } acquireLocks(d); } @@ -465,15 +497,16 @@ // run the redo log, and release the locks tx_redo(d); -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_unlock(); } /* lazy/lazy read instrumentation */ long stm_read_word(long* addr) { struct tx_descriptor *d = thread_descriptor; +#ifdef RPY_STM_ASSERT + assert(d->transaction_active); +#endif // check writeset first wlog_t* found; @@ -535,6 +568,9 @@ void stm_write_word(long* addr, long val) { struct tx_descriptor *d = thread_descriptor; +#ifdef RPY_STM_ASSERT + assert(d->transaction_active); +#endif redolog_insert(&d->redolog, addr, val); } @@ -647,9 +683,7 @@ unsigned long ts = get_global_timestamp(d); assert(ts & 1); set_global_timestamp(d, ts - 1); -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_unlock(); } d->num_commits++; common_cleanup(d); @@ -723,17 +757,17 @@ if (PYPY_HAVE_DEBUG_PRINTS) { fprintf(PYPY_DEBUG_FILE, "%s%s\n", why, + (!d->transaction_active) ? " (inactive)" : is_inevitable(d) ? " (already inevitable)" : ""); } - assert(d->transaction_active); #endif - if (is_inevitable(d)) + if (is_inevitable_or_inactive(d)) { #ifdef RPY_STM_ASSERT PYPY_DEBUG_STOP("stm-inevitable"); #endif - return; /* I am already inevitable */ + return; /* I am already inevitable, or not in a transaction at all */ } while (1) @@ -744,26 +778,20 @@ validate_fast(d, 2); d->start_time = curtime & ~1; } -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_lock(&mutex_inevitable); -#endif + mutex_lock(); if (curtime & 1) /* there is, or was, already an inevitable thread */ { /* should we spinloop here, or abort (and likely come back in try_inevitable() very soon)? unclear. For now let's try to spinloop, after the waiting done by acquiring the mutex */ -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_unlock(); tx_spinloop(6); continue; } if (change_global_timestamp(d, curtime, curtime + 1)) break; -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_unlock(); } d->setjmp_buf = NULL; /* inevitable from now on */ #ifdef COMMIT_OTHER_INEV @@ -789,18 +817,14 @@ unsigned long curtime; retry: -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_lock(&mutex_inevitable); /* possibly waiting here */ -#endif + mutex_lock(); /* possibly waiting here */ while (1) { curtime = global_timestamp; if (curtime & 1) { -#ifdef USE_PTHREAD_MUTEX - pthread_mutex_unlock(&mutex_inevitable); -#endif + mutex_unlock(); tx_spinloop(5); goto retry; } diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -32,13 +32,34 @@ print "thread done." + +# __________ temp, move me somewhere else __________ + +from pypy.rlib.objectmodel import invoke_around_extcall + +def before_external_call(): + # this function must not raise, in such a way that the exception + # transformer knows that it cannot raise! + rstm.commit_transaction() +before_external_call._gctransformer_hint_cannot_collect_ = True +before_external_call._dont_reach_me_in_del_ = True + +def after_external_call(): + rstm.begin_inevitable_transaction() +after_external_call._gctransformer_hint_cannot_collect_ = True +after_external_call._dont_reach_me_in_del_ = True + + # __________ Entry point __________ def entry_point(argv): + invoke_around_extcall(before_external_call, after_external_call) print "hello world" for i in range(NUM_THREADS): ll_thread.start_new_thread(run_me, ()) + print "sleeping..." time.sleep(10) + print "done sleeping." return 0 # _____ Define and setup target ___ From noreply at buildbot.pypy.org Thu Nov 3 11:03:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:25 +0100 (CET) Subject: [pypy-commit] pypy default: interning ints aswell Message-ID: <20111103100325.C4212820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48681:d0466dedbb14 Date: 2011-11-03 07:34 +0100 http://bitbucket.org/pypy/pypy/changeset/d0466dedbb14/ Log: interning ints aswell diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,6 +1,6 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT @@ -326,6 +326,7 @@ self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} @@ -398,6 +399,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + elif constbox.type == INT: + value = constbox.getint() + return self.interned_ints.setdefault(value, box) else: return box From noreply at buildbot.pypy.org Thu Nov 3 11:03:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:26 +0100 (CET) Subject: [pypy-commit] pypy default: test short preamble and non constant case aswell Message-ID: <20111103100326.F3FEC82A87@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48682:9f81b789732c Date: 2011-11-03 07:50 +0100 http://bitbucket.org/pypy/pypy/changeset/9f81b789732c/ Log: test short preamble and non constant case aswell diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7355,7 +7355,7 @@ """ self.optimize_loop(ops, expected) - def test_repeated_setfield_mixed_with_guard(self): + def test_repeated_constant_setfield_mixed_with_guard(self): ops = """ [p22, p18] setfield_gc(p22, 2, descr=valuedescr) @@ -7369,11 +7369,48 @@ guard_nonnull_class(p18, ConstClass(node_vtable)) [] jump(p22, p18) """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ expected = """ [p22, p18] jump(p22, p18) """ - self.optimize_loop(ops, expected, preamble) + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) class TestLLtype(OptimizeOptTest, LLtypeMixin): pass From noreply at buildbot.pypy.org Thu Nov 3 11:03:28 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:28 +0100 (CET) Subject: [pypy-commit] pypy default: corner case not handled very well Message-ID: <20111103100328.2A39A820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48683:ce8c2eb5ccba Date: 2011-11-03 08:49 +0100 http://bitbucket.org/pypy/pypy/changeset/ce8c2eb5ccba/ Log: corner case not handled very well diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) From noreply at buildbot.pypy.org Thu Nov 3 11:03:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:29 +0100 (CET) Subject: [pypy-commit] pypy default: dissable for now, it makes test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRep fail Message-ID: <20111103100329.537B6820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48684:5fb2ee9b17b4 Date: 2011-11-03 09:01 +0100 http://bitbucket.org/pypy/pypy/changeset/5fb2ee9b17b4/ Log: dissable for now, it makes test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRep fail diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -403,9 +403,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) - elif constbox.type == INT: - value = constbox.getint() - return self.interned_ints.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box From noreply at buildbot.pypy.org Thu Nov 3 11:03:30 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:30 +0100 (CET) Subject: [pypy-commit] pypy default: alternative fix that does not rely on interning ints Message-ID: <20111103100330.84A89820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48685:569f16f25b1b Date: 2011-11-03 09:10 +0100 http://bitbucket.org/pypy/pypy/changeset/569f16f25b1b/ Log: alternative fix that does not rely on interning ints diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -145,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) From noreply at buildbot.pypy.org Thu Nov 3 11:03:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:31 +0100 (CET) Subject: [pypy-commit] pypy default: allow setarrayitem to update the cache exported from the preamble to the loop the same way setfield does Message-ID: <20111103100331.B5E35820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48686:8b75e3ece413 Date: 2011-11-03 11:02 +0100 http://bitbucket.org/pypy/pypy/changeset/8b75e3ece413/ Log: allow setarrayitem to update the cache exported from the preamble to the loop the same way setfield does diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -140,6 +140,17 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + if result is op.getarg(0): # FIXME: Unsupported corner case?? + continue + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7412,6 +7412,44 @@ """ self.optimize_loop(ops, expected, preamble, expected_short=short) + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass From noreply at buildbot.pypy.org Thu Nov 3 11:03:32 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:03:32 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20111103100332.E4C3F820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48687:9aea5197a2d6 Date: 2011-11-03 11:03 +0100 http://bitbucket.org/pypy/pypy/changeset/9aea5197a2d6/ Log: hg merge diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) From noreply at buildbot.pypy.org Thu Nov 3 11:23:51 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 11:23:51 +0100 (CET) Subject: [pypy-commit] pypy stm: Tweaks. Message-ID: <20111103102351.083CF820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48688:65eb6e47e3b4 Date: 2011-11-03 11:23 +0100 http://bitbucket.org/pypy/pypy/changeset/65eb6e47e3b4/ Log: Tweaks. diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -111,6 +111,7 @@ def op_stm(funcgen, op): - assert funcgen.db.translator.stm_transformation_applied + if not getattr(funcgen.db.translator, 'stm_transformation_applied', None): + raise AssertionError("STM transformation not applied. You need '--stm'") func = globals()[op.opname] return func(funcgen, op) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -22,7 +22,8 @@ #include "src_stm/et.h" #include "src_stm/atomic_ops.h" -#ifdef RPY_STM_ASSERT +#ifdef PYPY_STANDALONE /* obscure: cannot include debug_print.h if compiled */ +# define RPY_STM_DEBUG_PRINT /* via ll2ctypes; only include it in normal builds */ # include "src/debug_print.h" #endif @@ -346,18 +347,22 @@ /* mutex: only to avoid busy-looping too much in tx_spinloop() below */ static pthread_mutex_t mutex_inevitable = PTHREAD_MUTEX_INITIALIZER; # ifdef RPY_STM_ASSERT +unsigned long locked_by = 0; void mutex_lock(void) { unsigned long pself = (unsigned long)pthread_self(); if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev locking...\n", pself); + assert(locked_by != pself); pthread_mutex_lock(&mutex_inevitable); + locked_by = pself; if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev locked\n", pself); } void mutex_unlock(void) { unsigned long pself = (unsigned long)pthread_self(); + locked_by = 0; pthread_mutex_unlock(&mutex_inevitable); if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev unlocked\n", pself); @@ -577,9 +582,6 @@ void stm_descriptor_init(void) { -#ifdef RPY_STM_ASSERT - PYPY_DEBUG_START("stm-init"); -#endif if (thread_descriptor != NULL) thread_descriptor->init_counter++; else @@ -587,6 +589,10 @@ struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor)); memset(d, 0, sizeof(struct tx_descriptor)); +#ifdef RPY_STM_DEBUG_PRINT + PYPY_DEBUG_START("stm-init"); +#endif + /* initialize 'my_lock_word' to be a unique negative number */ d->my_lock_word = (owner_version_t)d; if (!IS_LOCKED(d->my_lock_word)) @@ -596,10 +602,13 @@ d->init_counter = 1; thread_descriptor = d; + +#ifdef RPY_STM_DEBUG_PRINT + if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "thread %lx starting\n", + d->my_lock_word); + PYPY_DEBUG_STOP("stm-init"); +#endif } -#ifdef RPY_STM_ASSERT - PYPY_DEBUG_STOP("stm-init"); -#endif } void stm_descriptor_done(void) @@ -611,7 +620,7 @@ thread_descriptor = NULL; -#ifdef RPY_STM_ASSERT +#ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-done"); if (PYPY_HAVE_DEBUG_PRINTS) { int num_aborts = 0, num_spinloops = 0; @@ -816,6 +825,10 @@ struct tx_descriptor *d = thread_descriptor; unsigned long curtime; +#ifdef RPY_STM_ASSERT + assert(!d->transaction_active); +#endif + retry: mutex_lock(); /* possibly waiting here */ diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -4,7 +4,7 @@ NUM_THREADS = 4 -LENGTH = 1000 +LENGTH = 10000 class Node: diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -40,7 +40,7 @@ self.add_stm_declare_variable(graph) if self.seen_gc_stack_bottom: self.add_descriptor_init_stuff(graph) - self.add_descriptor_init_stuff(entrypointgraph) + self.add_descriptor_init_stuff(entrypointgraph, main=True) self.translator.stm_transformation_applied = True def transform_block(self, block): @@ -71,9 +71,13 @@ for block in graph.iterblocks(): self.transform_block(block) - def add_descriptor_init_stuff(self, graph): - f_init = _rffi_stm.descriptor_init_and_being_inevitable_transaction - f_done = _rffi_stm.commit_transaction_and_descriptor_done + def add_descriptor_init_stuff(self, graph, main=False): + if main: + f_init = _rffi_stm.descriptor_init_and_being_inevitable_transaction + f_done = _rffi_stm.commit_transaction_and_descriptor_done + else: + f_init = _rffi_stm.descriptor_init + f_done = _rffi_stm.descriptor_done c_init = Constant(f_init, lltype.typeOf(f_init)) c_done = Constant(f_done, lltype.typeOf(f_done)) # @@ -108,7 +112,7 @@ if STRUCT._immutable_field(op.args[1].value): op1 = op elif STRUCT._gckind == 'raw': - turn_inevitable(newoperations, "getfield_raw") + turn_inevitable(newoperations, "getfield-raw") op1 = op else: op1 = SpaceOperation('stm_getfield', op.args, op.result) @@ -119,7 +123,7 @@ if STRUCT._immutable_field(op.args[1].value): op1 = op elif STRUCT._gckind == 'raw': - turn_inevitable(newoperations, "setfield_raw") + turn_inevitable(newoperations, "setfield-raw") op1 = op else: op1 = SpaceOperation('stm_setfield', op.args, op.result) From noreply at buildbot.pypy.org Thu Nov 3 11:30:49 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 11:30:49 +0100 (CET) Subject: [pypy-commit] pypy stm: Yay, the first example of RPython program that runs successfully Message-ID: <20111103103049.38F14820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48689:0524190818dc Date: 2011-11-03 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/0524190818dc/ Log: Yay, the first example of RPython program that runs successfully on multiple threads. diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -4,7 +4,7 @@ NUM_THREADS = 4 -LENGTH = 10000 +LENGTH = 5000 class Node: diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -3,23 +3,17 @@ from pypy.annotation import model as annmodel from pypy.translator.stm import _rffi_stm from pypy.translator.unsimplify import varoftype, copyvar -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, lloperation ALWAYS_ALLOW_OPERATIONS = set([ - 'int_*', 'uint_*', 'llong_*', 'ullong_*', 'float_*', - 'same_as', 'cast_*', 'direct_call', 'debug_print', 'debug_assert', ]) +ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_foldable_ops()) def op_in_set(opname, set): - if opname in set: - return True - for i in range(len(opname)-1, -1, -1): - if (opname[:i] + '*') in set: - return True - return False + return opname in set # ____________________________________________________________ From noreply at buildbot.pypy.org Thu Nov 3 11:41:04 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 11:41:04 +0100 (CET) Subject: [pypy-commit] pypy stm: A poor man's lock: just use a regular counter and check it every second. Message-ID: <20111103104104.7D934820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48690:65545adde075 Date: 2011-11-03 11:40 +0100 http://bitbucket.org/pypy/pypy/changeset/65545adde075/ Log: A poor man's lock: just use a regular counter and check it every second. As it's all protected by STM it works nicely. diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -30,6 +30,7 @@ add_at_end_of_chained_list(glob.anchor, i) rstm.transaction_boundary() print "thread done." + glob.done += 1 @@ -55,10 +56,12 @@ def entry_point(argv): invoke_around_extcall(before_external_call, after_external_call) print "hello world" + glob.done = 0 for i in range(NUM_THREADS): ll_thread.start_new_thread(run_me, ()) print "sleeping..." - time.sleep(10) + while glob.done < NUM_THREADS: # poor man's lock + time.sleep(1) print "done sleeping." return 0 From noreply at buildbot.pypy.org Thu Nov 3 11:47:18 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 11:47:18 +0100 (CET) Subject: [pypy-commit] pypy default: break up circular dependencies among short_boxes and give up Message-ID: <20111103104718.CD255820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48691:d9708bf78c40 Date: 2011-11-03 11:46 +0100 http://bitbucket.org/pypy/pypy/changeset/d9708bf78c40/ Log: break up circular dependencies among short_boxes and give up diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -146,8 +146,6 @@ newresult = result.clonebox() optimizer.make_constant(newresult, result) result = newresult - if result is op.getarg(0): # FIXME: Unsupported corner case?? - continue getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7450,6 +7450,55 @@ """ self.optimize_loop(ops, expected) + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -551,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -606,6 +607,10 @@ return if isinstance(box, Const): return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False From noreply at buildbot.pypy.org Thu Nov 3 11:50:43 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 11:50:43 +0100 (CET) Subject: [pypy-commit] pypy stm: kill these two C functions. Message-ID: <20111103105043.71524820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48692:b30caa32b11c Date: 2011-11-03 11:47 +0100 http://bitbucket.org/pypy/pypy/changeset/b30caa32b11c/ Log: kill these two C functions. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -875,21 +875,6 @@ #endif } -void stm_descriptor_init_and_being_inevitable_transaction(void) -{ - int was_not_started = (thread_descriptor == NULL); - stm_descriptor_init(); - if (was_not_started) - stm_begin_inevitable_transaction(); -} - -void stm_commit_transaction_and_descriptor_done(void) -{ - if (thread_descriptor->init_counter == 1) - stm_commit_transaction(); - stm_descriptor_done(); -} - // XXX little-endian only! void stm_write_partial_word(int fieldsize, char *base, long offset, unsigned long nval) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -67,11 +67,14 @@ def add_descriptor_init_stuff(self, graph, main=False): if main: - f_init = _rffi_stm.descriptor_init_and_being_inevitable_transaction - f_done = _rffi_stm.commit_transaction_and_descriptor_done - else: - f_init = _rffi_stm.descriptor_init - f_done = _rffi_stm.descriptor_done + self._add_calls_around(graph, + _rffi_stm.begin_inevitable_transaction, + _rffi_stm.commit_transaction) + self._add_calls_around(graph, + _rffi_stm.descriptor_init, + _rffi_stm.descriptor_done) + + def _add_calls_around(self, graph, f_init, f_done): c_init = Constant(f_init, lltype.typeOf(f_init)) c_done = Constant(f_done, lltype.typeOf(f_done)) # From noreply at buildbot.pypy.org Thu Nov 3 11:50:44 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 11:50:44 +0100 (CET) Subject: [pypy-commit] pypy stm: Break a line that is definitely too long in the log. Message-ID: <20111103105044.9E07D820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48693:b4212b951b97 Date: 2011-11-03 11:50 +0100 http://bitbucket.org/pypy/pypy/changeset/b4212b951b97/ Log: Break a line that is definitely too long in the log. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -632,7 +632,7 @@ for (i=0; inum_spinloops[i]; - p += sprintf(p, "thread %lx: %d commits, %d aborts ", + p += sprintf(p, "thread %lx: %d commits, %d aborts\n", d->my_lock_word, d->num_commits, num_aborts); From noreply at buildbot.pypy.org Thu Nov 3 13:11:51 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 13:11:51 +0100 (CET) Subject: [pypy-commit] pypy default: only use a single counter in xrange iterators (should save a setitem) Message-ID: <20111103121151.1BCC4820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48694:a27a481ec877 Date: 2011-11-03 13:11 +0100 http://bitbucket.org/pypy/pypy/changeset/a27a481ec877/ Log: only use a single counter in xrange iterators (should save a setitem) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,10 +312,11 @@ class W_XRange(Wrappable): - def __init__(self, space, start, len, step): + def __init__(self, space, start, stop, step): self.space = space self.start = start - self.len = len + self.stop = stop + self.len = get_len_of_range(space, start, stop, step) self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -325,9 +326,8 @@ start, stop = 0, start else: stop = _toint(space, w_stop) - howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, howmany, step) + W_XRange.__init__(obj, space, start, stop, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.len, -self.step)) + self.start - 1, -self.step)) def descr_reduce(self): space = self.space @@ -389,25 +389,24 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, current, remaining, step): + def __init__(self, space, start, stop, step): self.space = space - self.current = current - self.remaining = remaining + self.current = start + self.stop = stop self.step = step def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.remaining > 0: + if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): item = self.current self.current = item + self.step - self.remaining -= 1 return self.space.wrap(item) raise OperationError(self.space.w_StopIteration, self.space.w_None) - def descr_len(self): - return self.space.wrap(self.remaining) + #def descr_len(self): + # return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -418,7 +417,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.remaining), w(self.step)] + tup = [w(self.current), w(self.stop), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, remaining=int, step=int) -def xrangeiter_new(space, current, remaining, step): + at unwrap_spec(current=int, stop=int, step=int) +def xrangeiter_new(space, current, stop, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, remaining, step) + new_iter = W_XRangeIterator(space, current, stop, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) From noreply at buildbot.pypy.org Thu Nov 3 13:39:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 13:39:20 +0100 (CET) Subject: [pypy-commit] pypy stm: Baaaaah. setjmp() cannot be called on a jmp_buf that belongs to a parent Message-ID: <20111103123920.DC8F0820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48695:e76f2b79fd27 Date: 2011-11-03 13:38 +0100 http://bitbucket.org/pypy/pypy/changeset/e76f2b79fd27/ Log: Baaaaah. setjmp() cannot be called on a jmp_buf that belongs to a parent frame, because then longjmp()ing to it will not automatically recreate the subframe... Need to fix it by introducing macros instead of a function call. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -862,19 +862,6 @@ tx_abort(7); /* manual abort */ } -void stm_transaction_boundary(jmp_buf* buf) -{ -#ifdef RPY_STM_ASSERT - PYPY_DEBUG_START("stm-transaction-boundary"); -#endif - stm_commit_transaction(); - setjmp(*buf); - stm_begin_transaction(buf); -#ifdef RPY_STM_ASSERT - PYPY_DEBUG_STOP("stm-transaction-boundary"); -#endif -} - // XXX little-endian only! void stm_write_partial_word(int fieldsize, char *base, long offset, unsigned long nval) diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -35,7 +35,6 @@ void stm_try_inevitable_if(jmp_buf* buf STM_CCHARP(why)); void stm_begin_inevitable_transaction(void); void stm_abort_and_retry(void); -void stm_transaction_boundary(jmp_buf* buf); void stm_descriptor_init_and_being_inevitable_transaction(void); void stm_commit_transaction_and_descriptor_done(void); @@ -48,7 +47,11 @@ #define STM_DECLARE_VARIABLE() ; jmp_buf jmpbuf #define STM_MAKE_INEVITABLE() stm_try_inevitable_if(&jmpbuf \ STM_EXPLAIN("return")) -#define STM_TRANSACTION_BOUNDARY() stm_transaction_boundary(&jmpbuf) +#define STM_TRANSACTION_BOUNDARY() \ + stm_commit_transaction(); \ + setjmp(jmpbuf); \ + stm_begin_transaction(&jmpbuf); + // XXX little-endian only! #define STM_read_partial_word(T, base, offset) \ From noreply at buildbot.pypy.org Thu Nov 3 13:40:58 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 13:40:58 +0100 (CET) Subject: [pypy-commit] pypy step-one-xrange: special case xrange without any step specified Message-ID: <20111103124058.0397B820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: step-one-xrange Changeset: r48696:6cf1ae5ff5d6 Date: 2011-11-03 13:39 +0100 http://bitbucket.org/pypy/pypy/changeset/6cf1ae5ff5d6/ Log: special case xrange without any step specified diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,22 +312,28 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, stop, step, promote_step=False): self.space = space self.start = start self.stop = stop self.len = get_len_of_range(space, start, stop, step) self.step = step + self.promote_step = promote_step - def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): + def descr_new(space, w_subtype, w_start, w_stop=None, w_step=None): start = _toint(space, w_start) - step = _toint(space, w_step) + if space.is_w(w_step, space.w_None): # no step argument provided + step = 1 + promote_step = True + else: + step = _toint(space, w_step) + promote_step = False if space.is_w(w_stop, space.w_None): # only 1 argument provided start, stop = 0, start else: stop = _toint(space, w_stop) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, stop, step, promote_step) return space.wrap(obj) def descr_repr(self): @@ -356,8 +362,12 @@ space.wrap("xrange object index out of range")) def descr_iter(self): - return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + if self.promote_step and self.step == 1: + return self.space.wrap(W_XRangeStepOneIterator(self.space, self.start, + self.stop)) + else: + return self.space.wrap(W_XRangeIterator(self.space, self.start, + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step @@ -427,3 +437,24 @@ next = interp2app(W_XRangeIterator.descr_next), __reduce__ = interp2app(W_XRangeIterator.descr_reduce), ) + +class W_XRangeStepOneIterator(W_XRangeIterator): + def __init__(self, space, start, stop): + self.space = space + self.current = start + self.stop = stop + self.step = 1 + + def descr_next(self): + if self.current < self.stop: + item = self.current + self.current = item + 1 + return self.space.wrap(item) + raise OperationError(self.space.w_StopIteration, self.space.w_None) + + +W_XRangeStepOneIterator.typedef = TypeDef("xrangesteponeiterator", + __iter__ = interp2app(W_XRangeStepOneIterator.descr_iter), + next = interp2app(W_XRangeStepOneIterator.descr_next), + __reduce__ = interp2app(W_XRangeStepOneIterator.descr_reduce), +) From notifications-noreply at bitbucket.org Thu Nov 3 13:42:49 2011 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Thu, 03 Nov 2011 12:42:49 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20111103124249.21811.96591@bitbucket01.managed.contegix.com> You have received a notification from Van Lindberg. Hi, I forked pypy. My fork is at https://bitbucket.org/vanl/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Thu Nov 3 15:02:40 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 15:02:40 +0100 (CET) Subject: [pypy-commit] pypy rgc-mem-pressure: optimize it slightly. not look up the dictionary each time we see _digest_size Message-ID: <20111103140240.C18C3820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: rgc-mem-pressure Changeset: r48697:92885c7cf7b6 Date: 2011-11-03 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/92885c7cf7b6/ Log: optimize it slightly. not look up the dictionary each time we see _digest_size diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -21,9 +21,11 @@ class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, @@ -31,7 +33,7 @@ self.lock = Lock(space) ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - rgc.add_memory_pressure(HASH_MALLOC_SIZE + self._digest_size()) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) self.ctx = ctx def initdigest(self, space, name): @@ -75,29 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: with self.lock: ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) - digest_size = self._digest_size() + digest_size = self.digest_size with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: ropenssl.EVP_DigestFinal(ctx, digest, None) ropenssl.EVP_MD_CTX_cleanup(ctx) return rffi.charpsize2str(digest, digest_size) - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -111,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -124,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', From noreply at buildbot.pypy.org Thu Nov 3 15:12:50 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 15:12:50 +0100 (CET) Subject: [pypy-commit] pypy stm: hum. Message-ID: <20111103141250.6F481820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48698:7a7ae4b45135 Date: 2011-11-03 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/7a7ae4b45135/ Log: hum. diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -7,7 +7,7 @@ ALWAYS_ALLOW_OPERATIONS = set([ - 'direct_call', + 'direct_call', 'force_cast', 'debug_print', 'debug_assert', ]) ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_foldable_ops()) From noreply at buildbot.pypy.org Thu Nov 3 15:12:55 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 15:12:55 +0100 (CET) Subject: [pypy-commit] pypy stm: hg merge default Message-ID: <20111103141255.4165D820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48699:16ac40bcfc6e Date: 2011-11-03 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/16ac40bcfc6e/ Log: hg merge default diff too long, truncating to 10000 out of 12527 lines diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2925,14 +2925,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2967,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,8 +3013,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): @@ -3057,14 +3054,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3104,8 +3100,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3126,8 +3121,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3157,8 +3151,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3179,8 +3172,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3197,14 +3189,13 @@ def FunctionDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3215,14 +3206,13 @@ def FunctionDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3266,8 +3256,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3284,14 +3273,13 @@ def ClassDef_get_bases(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases @@ -3302,14 +3290,13 @@ def ClassDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3320,14 +3307,13 @@ def ClassDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3372,8 +3358,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): @@ -3414,14 +3399,13 @@ def Delete_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3457,14 +3441,13 @@ def Assign_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3479,8 +3462,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): @@ -3527,8 +3509,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): @@ -3549,8 +3530,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3573,8 +3553,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): @@ -3621,8 +3600,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): @@ -3639,14 +3617,13 @@ def Print_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -3661,8 +3638,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3710,8 +3686,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): @@ -3732,8 +3707,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): @@ -3750,14 +3724,13 @@ def For_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3768,14 +3741,13 @@ def For_get_orelse(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3819,8 +3791,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): @@ -3837,14 +3808,13 @@ def While_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3855,14 +3825,13 @@ def While_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3905,8 +3874,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): @@ -3923,14 +3891,13 @@ def If_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3941,14 +3908,13 @@ def If_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3991,8 +3957,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): @@ -4013,8 +3978,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): @@ -4031,14 +3995,13 @@ def With_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4080,8 +4043,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): @@ -4102,8 +4064,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): @@ -4124,8 +4085,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): @@ -4168,14 +4128,13 @@ def TryExcept_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4186,14 +4145,13 @@ def TryExcept_get_handlers(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers @@ -4204,14 +4162,13 @@ def TryExcept_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -4251,14 +4208,13 @@ def TryFinally_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4269,14 +4225,13 @@ def TryFinally_get_finalbody(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody @@ -4318,8 +4273,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def Assert_set_test(space, w_self, w_new_value): @@ -4340,8 +4294,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): @@ -4383,14 +4336,13 @@ def Import_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4430,8 +4382,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4451,14 +4402,13 @@ def ImportFrom_get_names(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4473,8 +4423,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4522,8 +4471,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): @@ -4544,8 +4492,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): @@ -4566,8 +4513,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): @@ -4610,14 +4556,13 @@ def Global_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4657,8 +4602,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): @@ -4754,8 +4698,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4776,8 +4719,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4807,8 +4749,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4827,14 +4768,13 @@ def BoolOp_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -4875,8 +4815,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): @@ -4897,8 +4836,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4921,8 +4859,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): @@ -4969,8 +4906,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4993,8 +4929,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): @@ -5040,8 +4975,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5062,8 +4996,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): @@ -5109,8 +5042,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): @@ -5131,8 +5063,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): @@ -5153,8 +5084,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): @@ -5197,14 +5127,13 @@ def Dict_get_keys(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys @@ -5215,14 +5144,13 @@ def Dict_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -5260,14 +5188,13 @@ def Set_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -5307,8 +5234,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): @@ -5325,14 +5251,13 @@ def ListComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5373,8 +5298,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): @@ -5391,14 +5315,13 @@ def SetComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5439,8 +5362,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): @@ -5461,8 +5383,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): @@ -5479,14 +5400,13 @@ def DictComp_get_generators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5528,8 +5448,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): @@ -5546,14 +5465,13 @@ def GeneratorExp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5594,8 +5512,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): @@ -5640,8 +5557,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): @@ -5658,14 +5574,13 @@ def Compare_get_ops(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops @@ -5676,14 +5591,13 @@ def Compare_get_comparators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators @@ -5726,8 +5640,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): @@ -5744,14 +5657,13 @@ def Call_get_args(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -5762,14 +5674,13 @@ def Call_get_keywords(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords @@ -5784,8 +5695,7 @@ return w_obj if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): @@ -5806,8 +5716,7 @@ return w_obj if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): @@ -5858,8 +5767,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): @@ -5904,8 +5812,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5950,8 +5857,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5996,8 +5902,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): @@ -6018,8 +5923,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6040,8 +5944,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6090,8 +5993,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): @@ -6112,8 +6014,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): @@ -6134,8 +6035,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6184,8 +6084,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6206,8 +6105,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6251,14 +6149,13 @@ def List_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6273,8 +6170,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6319,14 +6215,13 @@ def Tuple_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6341,8 +6236,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6391,8 +6285,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6510,8 +6403,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): @@ -6532,8 +6424,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): @@ -6554,8 +6445,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): @@ -6598,14 +6488,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,8 +6534,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): @@ -6915,8 +6803,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): @@ -6937,8 +6824,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): @@ -6955,14 +6841,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7004,8 +6889,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7026,8 +6910,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7057,8 +6940,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): @@ -7079,8 +6961,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): @@ -7097,14 +6978,13 @@ def ExceptHandler_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -7142,14 +7022,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -7164,8 +7043,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'vararg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'vararg') return space.wrap(w_self.vararg) def arguments_set_vararg(space, w_self, w_new_value): @@ -7189,8 +7067,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwarg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg') return space.wrap(w_self.kwarg) def arguments_set_kwarg(space, w_self, w_new_value): @@ -7210,14 +7087,13 @@ def arguments_get_defaults(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'defaults'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults') if w_self.w_defaults is None: if w_self.defaults is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.defaults] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_defaults = w_list return w_self.w_defaults @@ -7261,8 +7137,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'arg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'arg') return space.wrap(w_self.arg) def keyword_set_arg(space, w_self, w_new_value): @@ -7283,8 +7158,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def keyword_set_value(space, w_self, w_new_value): @@ -7330,8 +7204,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def alias_set_name(space, w_self, w_new_value): @@ -7352,8 +7225,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'asname'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'asname') return space.wrap(w_self.asname) def alias_set_asname(space, w_self, w_new_value): diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -414,13 +414,12 @@ self.emit(" return w_obj", 1) self.emit("if not w_self.initialization_state & %s:" % (flag,), 1) self.emit("typename = space.type(w_self).getname(space)", 2) - self.emit("w_err = space.wrap(\"'%%s' object has no attribute '%s'\" %% typename)" % + self.emit("raise operationerrfmt(space.w_AttributeError, \"'%%s' object has no attribute '%%s'\", typename, '%s')" % (field.name,), 2) - self.emit("raise OperationError(space.w_AttributeError, w_err)", 2) if field.seq: self.emit("if w_self.w_%s is None:" % (field.name,), 1) self.emit("if w_self.%s is None:" % (field.name,), 2) - self.emit("w_list = space.newlist([])", 3) + self.emit("list_w = []", 3) self.emit("else:", 2) if field.type.value in self.data.simple_types: wrapper = "%s_to_class[node - 1]()" % (field.type,) @@ -428,7 +427,7 @@ wrapper = "space.wrap(node)" self.emit("list_w = [%s for node in w_self.%s]" % (wrapper, field.name), 3) - self.emit("w_list = space.newlist(list_w)", 3) + self.emit("w_list = space.newlist(list_w)", 2) self.emit("w_self.w_%s = w_list" % (field.name,), 2) self.emit("return w_self.w_%s" % (field.name,), 1) elif field.type.value in self.data.simple_types: @@ -540,7 +539,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -639,9 +638,7 @@ missing = required[i] if missing is not None: err = "required field \\"%s\\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) + raise operationerrfmt(space.w_TypeError, err, missing, host) raise AssertionError("should not reach here") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() + +def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): + cache = gc_ll_descr._cache_interiorfield + try: + return cache[(ARRAY, FIELDTP, name)] + except KeyError: + arraydescr = get_array_descr(gc_ll_descr, ARRAY) + fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + descr = InteriorFieldDescr(arraydescr, fielddescr) + cache[(ARRAY, FIELDTP, name)] = descr + return descr # ____________________________________________________________ # CallDescrs @@ -525,7 +570,8 @@ # if TYPE is lltype.Float or is_longlong(TYPE): setattr(Descr, floatattrname, True) - elif TYPE is not lltype.Bool and rffi.cast(TYPE, -1) == -1: + elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): setattr(Descr, signedattrname, True) # _cache[nameprefix, TYPE] = Descr diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -45,6 +45,14 @@ def freeing_block(self, start, stop): pass + def get_funcptr_for_newarray(self): + return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array) + def get_funcptr_for_newstr(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str) + def get_funcptr_for_newunicode(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode) + + def record_constptrs(self, op, gcrefs_output_list): for i in range(op.numargs()): v = op.getarg(i) @@ -96,6 +104,39 @@ malloc_fn_ptr = self.configure_boehm_once() self.funcptr_for_new = malloc_fn_ptr + def malloc_array(basesize, itemsize, ofs_length, num_elem): + try: + size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + except OverflowError: + return lltype.nullptr(llmemory.GCREF.TO) + res = self.funcptr_for_new(size) + if not res: + return res + rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + return res + self.malloc_array = malloc_array + self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( + [lltype.Signed] * 4, llmemory.GCREF)) + + + (str_basesize, str_itemsize, str_ofs_length + ) = symbolic.get_array_token(rstr.STR, self.translate_support_code) + (unicode_basesize, unicode_itemsize, unicode_ofs_length + ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) + def malloc_str(length): + return self.malloc_array( + str_basesize, str_itemsize, str_ofs_length, length + ) + def malloc_unicode(length): + return self.malloc_array( + unicode_basesize, unicode_itemsize, unicode_ofs_length, length + ) + self.malloc_str = malloc_str + self.malloc_unicode = malloc_unicode + self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( + [lltype.Signed], llmemory.GCREF)) + + # on some platform GC_init is required before any other # GC_* functions, call it here for the benefit of tests # XXX move this to tests @@ -116,39 +157,27 @@ ofs_length = arraydescr.get_ofs_length(self.translate_support_code) basesize = arraydescr.get_base_size(self.translate_support_code) itemsize = arraydescr.get_item_size(self.translate_support_code) - size = basesize + itemsize * num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_array(basesize, itemsize, ofs_length, num_elem) def gc_malloc_str(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.translate_support_code) - assert itemsize == 1 - size = basesize + num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_str(num_elem) def gc_malloc_unicode(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - size = basesize + num_elem * itemsize - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_unicode(num_elem) def args_for_new(self, sizedescr): assert isinstance(sizedescr, BaseSizeDescr) return [sizedescr.size] + def args_for_new_array(self, arraydescr): + ofs_length = arraydescr.get_ofs_length(self.translate_support_code) + basesize = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + return [basesize, itemsize, ofs_length] + def get_funcptr_for_new(self): return self.funcptr_for_new - get_funcptr_for_newarray = None - get_funcptr_for_newstr = None - get_funcptr_for_newunicode = None - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): # record all GCREFs too, because Boehm cannot see them and keep them # alive if they end up as constants in the assembler @@ -620,10 +649,13 @@ def malloc_basic(size, tid): type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1<' # - cache = {} descr4 = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Ptr(S)) assert 'GcPtrCallDescr' in descr4.repr_of_descr() # @@ -412,10 +413,10 @@ ARGS = [lltype.Float, lltype.Ptr(ARRAY)] RES = lltype.Float - def f(a, b): + def f2(a, b): return float(b[0]) + a - fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f) + fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f2) descr2 = get_call_descr(c0, ARGS, RES) a = lltype.malloc(ARRAY, 3) opaquea = lltype.cast_opaque_ptr(llmemory.GCREF, a) diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -247,12 +247,14 @@ self.record = [] def do_malloc_fixedsize_clear(self, RESTYPE, type_id, size, - has_finalizer, contains_weakptr): + has_finalizer, has_light_finalizer, + contains_weakptr): assert not contains_weakptr + assert not has_finalizer # in these tests + assert not has_light_finalizer # in these tests p = llmemory.raw_malloc(size) p = llmemory.cast_adr_to_ptr(p, RESTYPE) - flags = int(has_finalizer) << 16 - tid = llop.combine_ushort(lltype.Signed, type_id, flags) + tid = llop.combine_ushort(lltype.Signed, type_id, 0) self.record.append(("fixedsize", repr(size), tid, p)) return p diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -1,5 +1,5 @@ from pypy.rlib.debug import debug_start, debug_print, debug_stop -from pypy.jit.metainterp import history, compile +from pypy.jit.metainterp import history class AbstractCPU(object): @@ -213,6 +213,10 @@ def typedescrof(TYPE): raise NotImplementedError + @staticmethod + def interiorfielddescrof(A, fieldname): + raise NotImplementedError + # ---------- the backend-dependent operations ---------- # lltype specific operations diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -5,7 +5,7 @@ BoxInt, Box, BoxPtr, LoopToken, ConstInt, ConstPtr, - BoxObj, Const, + BoxObj, ConstObj, BoxFloat, ConstFloat) from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.typesystem import deref @@ -111,7 +111,7 @@ self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) res = self.cpu.get_latest_value_int(0) - assert res == 3 + assert res == 3 assert fail.identifier == 1 def test_compile_loop(self): @@ -127,7 +127,7 @@ ] inputargs = [i0] operations[2].setfailargs([i1]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -148,7 +148,7 @@ ] inputargs = [i0] operations[2].setfailargs([None, None, i1, None]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -372,7 +372,7 @@ for opnum, boxargs, retvalue in get_int_tests(): res = self.execute_operation(opnum, boxargs, 'int') assert res.value == retvalue - + def test_float_operations(self): from pypy.jit.metainterp.test.test_executor import get_float_tests for opnum, boxargs, rettype, retvalue in get_float_tests(self.cpu): @@ -438,7 +438,7 @@ def test_ovf_operations_reversed(self): self.test_ovf_operations(reversed=True) - + def test_bh_call(self): cpu = self.cpu # @@ -503,7 +503,7 @@ [funcbox, BoxInt(num), BoxInt(num)], 'int', descr=dyn_calldescr) assert res.value == 2 * num - + if cpu.supports_floats: def func(f0, f1, f2, f3, f4, f5, f6, i0, i1, f7, f8, f9): @@ -543,7 +543,7 @@ funcbox = self.get_funcbox(self.cpu, func_ptr) res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) - + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. @@ -615,7 +615,7 @@ res = self.execute_operation(rop.GETFIELD_GC, [t_box], 'int', descr=shortdescr) assert res.value == 1331 - + # u_box, U_box = self.alloc_instance(self.U) fielddescr2 = self.cpu.fielddescrof(self.S, 'next') @@ -695,7 +695,7 @@ def test_failing_guard_class(self): t_box, T_box = self.alloc_instance(self.T) - u_box, U_box = self.alloc_instance(self.U) + u_box, U_box = self.alloc_instance(self.U) null_box = self.null_instance() for opname, args in [(rop.GUARD_CLASS, [t_box, U_box]), (rop.GUARD_CLASS, [u_box, T_box]), @@ -787,7 +787,7 @@ r = self.execute_operation(rop.GETARRAYITEM_GC, [a_box, BoxInt(3)], 'int', descr=arraydescr) assert r.value == 160 - + # if isinstance(A, lltype.GcArray): A = lltype.Ptr(A) @@ -880,6 +880,73 @@ 'int', descr=arraydescr) assert r.value == 7441 + def test_array_of_structs(self): + TP = lltype.GcStruct('x') + ITEM = lltype.Struct('x', + ('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT), + ('k', lltype.Float), + ('p', lltype.Ptr(TP))) + a_box, A = self.alloc_array_of(ITEM, 15) + s_box, S = self.alloc_instance(TP) + kdescr = self.cpu.interiorfielddescrof(A, 'k') + pdescr = self.cpu.interiorfielddescrof(A, 'p') + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + boxfloat(1.5)], + 'void', descr=kdescr) + f = self.cpu.bh_getinteriorfield_gc_f(a_box.getref_base(), 3, kdescr) + assert longlong.getrealfloat(f) == 1.5 + self.cpu.bh_setinteriorfield_gc_f(a_box.getref_base(), 3, kdescr, longlong.getfloatstorage(2.5)) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'float', descr=kdescr) + assert r.getfloat() == 2.5 + # + NUMBER_FIELDS = [('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT)] + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + BoxInt(-15)], + 'void', descr=vdescr) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + i = self.cpu.bh_getinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr) + assert i == rffi.cast(lltype.Signed, rffi.cast(TYPE, -15)) + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.cpu.bh_setinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr, -25) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, + [a_box, BoxInt(3)], + 'int', descr=vdescr) + assert r.getint() == rffi.cast(lltype.Signed, rffi.cast(TYPE, -25)) + # + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(4), + s_box], + 'void', descr=pdescr) + r = self.cpu.bh_getinteriorfield_gc_r(a_box.getref_base(), 4, pdescr) + assert r == s_box.getref_base() + self.cpu.bh_setinteriorfield_gc_r(a_box.getref_base(), 3, pdescr, + s_box.getref_base()) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'ref', descr=pdescr) + assert r.getref_base() == s_box.getref_base() + def test_string_basic(self): s_box = self.alloc_string("hello\xfe") r = self.execute_operation(rop.STRLEN, [s_box], 'int') @@ -1402,7 +1469,7 @@ addr = llmemory.cast_ptr_to_adr(func_ptr) return ConstInt(heaptracker.adr2int(addr)) - + MY_VTABLE = rclass.OBJECT_VTABLE # for tests only S = lltype.GcForwardReference() @@ -1439,7 +1506,6 @@ return BoxPtr(lltype.nullptr(llmemory.GCREF.TO)) def alloc_array_of(self, ITEM, length): - cpu = self.cpu A = lltype.GcArray(ITEM) a = lltype.malloc(A, length) a_box = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, a)) @@ -2318,7 +2384,7 @@ for opname, arg, res in ops: self.execute_operation(opname, [arg], 'void') assert self.guard_failed == res - + lltype.free(x, flavor='raw') def test_assembler_call(self): @@ -2398,7 +2464,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2489,7 +2555,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2940,4 +3006,4 @@ def alloc_unicode(self, unicode): py.test.skip("implement me") - + diff --git a/pypy/jit/backend/test/test_ll_random.py b/pypy/jit/backend/test/test_ll_random.py --- a/pypy/jit/backend/test/test_ll_random.py +++ b/pypy/jit/backend/test/test_ll_random.py @@ -28,16 +28,27 @@ fork.structure_types_and_vtables = self.structure_types_and_vtables return fork - def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct): + def _choose_ptr_vars(self, from_, type, array_of_structs): + ptrvars = [] + for i in range(len(from_)): + v, S = from_[i][:2] + if not isinstance(S, type): + continue + if ((isinstance(S, lltype.Array) and + isinstance(S.OF, lltype.Struct)) == array_of_structs): + ptrvars.append((v, S)) + return ptrvars + + def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct, + array_of_structs=False): while True: - ptrvars = [(v, S) for (v, S) in self.ptrvars - if isinstance(S, type)] + ptrvars = self._choose_ptr_vars(self.ptrvars, type, + array_of_structs) if ptrvars and r.random() < 0.8: v, S = r.choice(ptrvars) else: - prebuilt_ptr_consts = [(v, S) - for (v, S, _) in self.prebuilt_ptr_consts - if isinstance(S, type)] + prebuilt_ptr_consts = self._choose_ptr_vars( + self.prebuilt_ptr_consts, type, array_of_structs) if prebuilt_ptr_consts and r.random() < 0.7: v, S = r.choice(prebuilt_ptr_consts) else: @@ -48,7 +59,8 @@ has_vtable=must_have_vtable) else: # create a new constant array - p = self.get_random_array(r) + p = self.get_random_array(r, + must_be_array_of_structs=array_of_structs) S = lltype.typeOf(p).TO v = ConstPtr(lltype.cast_opaque_ptr(llmemory.GCREF, p)) self.prebuilt_ptr_consts.append((v, S, @@ -74,7 +86,8 @@ TYPE = lltype.Signed return TYPE - def get_random_structure_type(self, r, with_vtable=None, cache=True): + def get_random_structure_type(self, r, with_vtable=None, cache=True, + type=lltype.GcStruct): if cache and self.structure_types and r.random() < 0.5: return r.choice(self.structure_types) fields = [] @@ -85,7 +98,7 @@ for i in range(r.randrange(1, 5)): TYPE = self.get_random_primitive_type(r) fields.append(('f%d' % i, TYPE)) - S = lltype.GcStruct('S%d' % self.counter, *fields, **kwds) + S = type('S%d' % self.counter, *fields, **kwds) self.counter += 1 if cache: self.structure_types.append(S) @@ -125,17 +138,29 @@ setattr(p, fieldname, rffi.cast(TYPE, r.random_integer())) return p - def get_random_array_type(self, r): - TYPE = self.get_random_primitive_type(r) + def get_random_array_type(self, r, can_be_array_of_struct=False, + must_be_array_of_structs=False): + if ((can_be_array_of_struct and r.random() < 0.1) or + must_be_array_of_structs): + TYPE = self.get_random_structure_type(r, cache=False, + type=lltype.Struct) + else: + TYPE = self.get_random_primitive_type(r) return lltype.GcArray(TYPE) - def get_random_array(self, r): - A = self.get_random_array_type(r) + def get_random_array(self, r, must_be_array_of_structs=False): + A = self.get_random_array_type(r, + must_be_array_of_structs=must_be_array_of_structs) length = (r.random_integer() // 15) % 300 # length: between 0 and 299 # likely to be small p = lltype.malloc(A, length) - for i in range(length): - p[i] = rffi.cast(A.OF, r.random_integer()) + if isinstance(A.OF, lltype.Primitive): + for i in range(length): + p[i] = rffi.cast(A.OF, r.random_integer()) + else: + for i in range(length): + for fname, TP in A.OF._flds.iteritems(): + setattr(p[i], fname, rffi.cast(TP, r.random_integer())) return p def get_index(self, length, r): @@ -155,8 +180,16 @@ dic[fieldname] = getattr(p, fieldname) else: assert isinstance(S, lltype.Array) - for i in range(len(p)): - dic[i] = p[i] + if isinstance(S.OF, lltype.Struct): + for i in range(len(p)): + item = p[i] + s1 = {} + for fieldname in S.OF._names: + s1[fieldname] = getattr(item, fieldname) + dic[i] = s1 + else: + for i in range(len(p)): + dic[i] = p[i] return dic def print_loop_prebuilt(self, names, writevar, s): @@ -220,7 +253,7 @@ class GetFieldOperation(test_random.AbstractOperation): def field_descr(self, builder, r): - v, S = builder.get_structptr_var(r) + v, S = builder.get_structptr_var(r, ) names = S._names if names[0] == 'parent': names = names[1:] @@ -239,6 +272,28 @@ continue break +class GetInteriorFieldOperation(test_random.AbstractOperation): + def field_descr(self, builder, r): + v, A = builder.get_structptr_var(r, type=lltype.Array, + array_of_structs=True) + array = v.getref(lltype.Ptr(A)) + v_index = builder.get_index(len(array), r) + name = r.choice(A.OF._names) + descr = builder.cpu.interiorfielddescrof(A, name) + descr._random_info = 'cpu.interiorfielddescrof(%s, %r)' % (A.OF._name, + name) + TYPE = getattr(A.OF, name) + return v, v_index, descr, TYPE + + def produce_into(self, builder, r): + while True: + try: + v, v_index, descr, _ = self.field_descr(builder, r) + self.put(builder, [v, v_index], descr) + except lltype.UninitializedMemoryAccess: + continue + break + class SetFieldOperation(GetFieldOperation): def produce_into(self, builder, r): v, descr, TYPE = self.field_descr(builder, r) @@ -251,6 +306,18 @@ break builder.do(self.opnum, [v, w], descr) +class SetInteriorFieldOperation(GetInteriorFieldOperation): + def produce_into(self, builder, r): + v, v_index, descr, TYPE = self.field_descr(builder, r) + while True: + if r.random() < 0.3: + w = ConstInt(r.random_integer()) + else: + w = r.choice(builder.intvars) + if rffi.cast(lltype.Signed, rffi.cast(TYPE, w.value)) == w.value: + break + builder.do(self.opnum, [v, v_index, w], descr) + class NewOperation(test_random.AbstractOperation): def size_descr(self, builder, S): descr = builder.cpu.sizeof(S) @@ -306,7 +373,7 @@ class NewArrayOperation(ArrayOperation): def produce_into(self, builder, r): - A = builder.get_random_array_type(r) + A = builder.get_random_array_type(r, can_be_array_of_struct=True) v_size = builder.get_index(300, r) v_ptr = builder.do(self.opnum, [v_size], self.array_descr(builder, A)) builder.ptrvars.append((v_ptr, A)) @@ -586,7 +653,9 @@ for i in range(4): # make more common OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) + OPERATIONS.append(GetInteriorFieldOperation(rop.GETINTERIORFIELD_GC)) OPERATIONS.append(SetFieldOperation(rop.SETFIELD_GC)) + OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) OPERATIONS.append(NewOperation(rop.NEW)) OPERATIONS.append(NewOperation(rop.NEW_WITH_VTABLE)) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -595,6 +595,10 @@ for name, value in fields.items(): if isinstance(name, str): setattr(container, name, value) + elif isinstance(value, dict): + item = container.getitem(name) + for key1, value1 in value.items(): + setattr(item, key1, value1) else: container.setitem(name, value) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1,7 +1,7 @@ import sys, os from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper -from pypy.jit.metainterp.history import Const, Box, BoxInt, BoxPtr, BoxFloat +from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT, LoopToken) from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory @@ -36,7 +36,6 @@ from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout -from pypy.jit.metainterp.history import ConstInt, BoxInt from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.codewriter import longlong @@ -729,8 +728,8 @@ # Also, make sure this is consistent with FRAME_FIXED_SIZE. self.mc.PUSH_r(ebp.value) self.mc.MOV_rr(ebp.value, esp.value) - for regloc in self.cpu.CALLEE_SAVE_REGISTERS: - self.mc.PUSH_r(regloc.value) + for loc in self.cpu.CALLEE_SAVE_REGISTERS: + self.mc.PUSH_r(loc.value) gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: @@ -994,7 +993,7 @@ effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex genop_llong_list[oopspecindex](self, op, arglocs, resloc) - + def regalloc_perform_math(self, op, arglocs, resloc): effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex @@ -1277,8 +1276,8 @@ genop_int_ne = _cmpop("NE", "NE") genop_int_gt = _cmpop("G", "L") genop_int_ge = _cmpop("GE", "LE") - genop_ptr_eq = genop_int_eq - genop_ptr_ne = genop_int_ne + genop_ptr_eq = genop_instance_ptr_eq = genop_int_eq + genop_ptr_ne = genop_instance_ptr_ne = genop_int_ne genop_float_lt = _cmpop_float('B', 'A') genop_float_le = _cmpop_float('BE', 'AE') @@ -1298,8 +1297,8 @@ genop_guard_int_ne = _cmpop_guard("NE", "NE", "E", "E") genop_guard_int_gt = _cmpop_guard("G", "L", "LE", "GE") genop_guard_int_ge = _cmpop_guard("GE", "LE", "L", "G") - genop_guard_ptr_eq = genop_guard_int_eq - genop_guard_ptr_ne = genop_guard_int_ne + genop_guard_ptr_eq = genop_guard_instance_ptr_eq = genop_guard_int_eq + genop_guard_ptr_ne = genop_guard_instance_ptr_ne = genop_guard_int_ne genop_guard_uint_gt = _cmpop_guard("A", "B", "BE", "AE") genop_guard_uint_lt = _cmpop_guard("B", "A", "AE", "BE") @@ -1311,7 +1310,7 @@ genop_guard_float_eq = _cmpop_guard_float("E", "E", "NE","NE") genop_guard_float_gt = _cmpop_guard_float("A", "B", "BE","AE") genop_guard_float_ge = _cmpop_guard_float("AE","BE", "B", "A") - + def genop_math_sqrt(self, op, arglocs, resloc): self.mc.SQRTSD(arglocs[0], resloc) @@ -1597,12 +1596,44 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, + base_loc, ofs_loc): + assert isinstance(itemsize_loc, ImmedLoc) + if isinstance(index_loc, ImmedLoc): + temp_loc = imm(index_loc.value * itemsize_loc.value) + else: + # XXX should not use IMUL in most cases + assert isinstance(temp_loc, RegLoc) + assert isinstance(index_loc, RegLoc) + assert not temp_loc.is_xmm + self.mc.IMUL_rri(temp_loc.value, index_loc.value, + itemsize_loc.value) + assert isinstance(ofs_loc, ImmedLoc) + return AddressLoc(base_loc, temp_loc, 0, ofs_loc.value) + + def genop_getinteriorfield_gc(self, op, arglocs, resloc): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs assert isinstance(size_loc, ImmedLoc) dest_addr = AddressLoc(base_loc, ofs_loc) self.save_into_mem(dest_addr, value_loc, size_loc) + def genop_discard_setinteriorfield_gc(self, op, arglocs): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + dest_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -7,7 +7,7 @@ ResOperation, BoxPtr, ConstFloat, BoxFloat, LoopToken, INT, REF, FLOAT) from pypy.jit.backend.x86.regloc import * -from pypy.rpython.lltypesystem import lltype, ll2ctypes, rffi, rstr +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated from pypy.rlib import rgc from pypy.jit.backend.llsupport import symbolic @@ -17,11 +17,12 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr from pypy.jit.backend.llsupport.descr import BaseCallDescr, BaseSizeDescr +from pypy.jit.backend.llsupport.descr import InteriorFieldDescr from pypy.jit.backend.llsupport.regalloc import FrameManager, RegisterManager,\ TempBox from pypy.jit.backend.x86.arch import WORD, FRAME_FIXED_SIZE from pypy.jit.backend.x86.arch import IS_X86_32, IS_X86_64, MY_COPY_OF_REGS -from pypy.rlib.rarithmetic import r_longlong, r_uint +from pypy.rlib.rarithmetic import r_longlong class X86RegisterManager(RegisterManager): @@ -433,7 +434,7 @@ if self.can_merge_with_next_guard(op, i, operations): oplist_with_guard[op.getopnum()](self, op, operations[i + 1]) i += 1 - elif not we_are_translated() and op.getopnum() == -124: + elif not we_are_translated() and op.getopnum() == -124: self._consider_force_spill(op) else: oplist[op.getopnum()](self, op) @@ -650,8 +651,8 @@ consider_uint_lt = _consider_compop consider_uint_le = _consider_compop consider_uint_ge = _consider_compop - consider_ptr_eq = _consider_compop - consider_ptr_ne = _consider_compop + consider_ptr_eq = consider_instance_ptr_eq = _consider_compop + consider_ptr_ne = consider_instance_ptr_ne = _consider_compop def _consider_float_op(self, op): loc1 = self.xrm.loc(op.getarg(1)) @@ -815,7 +816,7 @@ save_all_regs = guard_not_forced_op is not None self.xrm.before_call(force_store, save_all_regs=save_all_regs) if not save_all_regs: - gcrootmap = gc_ll_descr = self.assembler.cpu.gc_ll_descr.gcrootmap + gcrootmap = self.assembler.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: save_all_regs = 2 self.rm.before_call(force_store, save_all_regs=save_all_regs) @@ -972,74 +973,27 @@ return self._call(op, arglocs) def consider_newstr(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newstr is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR, self.translate_support_code) - assert itemsize == 1 - return self._malloc_varsize(ofs_items, ofs, 0, op.getarg(0), - op.result) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_newunicode(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newunicode is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - scale = self._get_unicode_item_scale() - return self._malloc_varsize(ofs_items, ofs, scale, op.getarg(0), - op.result) - - def _malloc_varsize(self, ofs_items, ofs_length, scale, v, res_v): - # XXX kill this function at some point - if isinstance(v, Box): - loc = self.rm.make_sure_var_in_reg(v, [v]) - tempbox = TempBox() - other_loc = self.rm.force_allocate_reg(tempbox, [v]) - self.assembler.load_effective_addr(loc, ofs_items,scale, other_loc) - else: - tempbox = None - other_loc = imm(ofs_items + (v.getint() << scale)) - self._call(ResOperation(rop.NEW, [], res_v), - [other_loc], [v]) - loc = self.rm.make_sure_var_in_reg(v, [res_v]) - assert self.loc(res_v) == eax - # now we have to reload length to some reasonable place - self.rm.possibly_free_var(v) - if tempbox is not None: - self.rm.possibly_free_var(tempbox) - self.PerformDiscard(ResOperation(rop.SETFIELD_GC, [None, None], None), - [eax, imm(ofs_length), imm(WORD), loc]) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_new_array(self, op): gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newarray is not None: - # framework GC - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), - num_elem): - self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) - return - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - arglocs = [imm(x) for x in args] - arglocs.append(self.loc(box_num_elem)) - self._call(op, arglocs) - return - # boehm GC (XXX kill the following code at some point) - itemsize, basesize, ofs_length, _, _ = ( - self._unpack_arraydescr(op.getdescr())) - scale_of_field = _get_scale(itemsize) - self._malloc_varsize(basesize, ofs_length, scale_of_field, - op.getarg(0), op.result) + box_num_elem = op.getarg(0) + if isinstance(box_num_elem, ConstInt): + num_elem = box_num_elem.value + if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), + num_elem): + self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) + return + args = self.assembler.cpu.gc_ll_descr.args_for_new_array( + op.getdescr()) + arglocs = [imm(x) for x in args] + arglocs.append(self.loc(box_num_elem)) + self._call(op, arglocs) def _unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) @@ -1058,6 +1012,16 @@ sign = fielddescr.is_field_signed() return imm(ofs), imm(size), ptr, sign + def _unpack_interiorfielddescr(self, descr): + assert isinstance(descr, InteriorFieldDescr) + arraydescr = descr.arraydescr + ofs = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + fieldsize = descr.fielddescr.get_field_size(self.translate_support_code) + sign = descr.fielddescr.is_field_signed() + ofs += descr.fielddescr.offset + return imm(ofs), imm(itemsize), imm(fieldsize), sign + def consider_setfield_gc(self, op): ofs_loc, size_loc, _, _ = self._unpack_fielddescr(op.getdescr()) assert isinstance(size_loc, ImmedLoc) @@ -1074,6 +1038,35 @@ consider_setfield_raw = consider_setfield_gc + def consider_setinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, _ = t + args = op.getarglist() + if fieldsize.value == 1: + need_lower_byte = True + else: + need_lower_byte = False + box_base, box_index, box_value = args + base_loc = self.rm.make_sure_var_in_reg(box_base, args) + index_loc = self.rm.make_sure_var_in_reg(box_index, args) + value_loc = self.make_sure_var_in_reg(box_value, args, + need_lower_byte=need_lower_byte) + # If 'index_loc' is not an immediate, then we need a 'temp_loc' that + # is a register whose value will be destroyed. It's fine to destroy + # the same register as 'index_loc', but not the other ones. + self.rm.possibly_free_var(box_index) + if not isinstance(index_loc, ImmedLoc): + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [box_base, + box_value]) + self.rm.possibly_free_var(tempvar) + else: + temp_loc = None + self.rm.possibly_free_var(box_base) + self.possibly_free_var(box_value) + self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, value_loc]) + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1135,6 +1128,36 @@ consider_getarrayitem_raw = consider_getarrayitem_gc consider_getarrayitem_gc_pure = consider_getarrayitem_gc + def consider_getinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + if sign: + sign_loc = imm1 + else: + sign_loc = imm0 + args = op.getarglist() + base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) + index_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) + # 'base' and 'index' are put in two registers (or one if 'index' + # is an immediate). 'result' can be in the same register as + # 'index' but must be in a different register than 'base'. + self.rm.possibly_free_var(op.getarg(1)) + result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + assert isinstance(result_loc, RegLoc) + # two cases: 1) if result_loc is a normal register, use it as temp_loc + if not result_loc.is_xmm: + temp_loc = result_loc + else: + # 2) if result_loc is an xmm register, we (likely) need another + # temp_loc that is a normal register. It can be in the same + # register as 'index' but not 'base'. + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [op.getarg(0)]) + self.rm.possibly_free_var(tempvar) + self.rm.possibly_free_var(op.getarg(0)) + self.Perform(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, sign_loc], result_loc) + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1241,7 +1264,6 @@ self.rm.possibly_free_var(srcaddr_box) def _gen_address_inside_string(self, baseloc, ofsloc, resloc, is_unicode): - cpu = self.assembler.cpu if is_unicode: ofs_items, _, _ = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) @@ -1300,7 +1322,7 @@ tmpreg = X86RegisterManager.all_regs[0] tmploc = self.rm.force_allocate_reg(box, selected_reg=tmpreg) xmmtmp = X86XMMRegisterManager.all_regs[0] - xmmtmploc = self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) + self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) # Part about non-floats # XXX we don't need a copy, we only just the original list src_locations1 = [self.loc(op.getarg(i)) for i in range(op.numargs()) @@ -1380,7 +1402,7 @@ return lambda self, op: fn(self, op, None) def is_comparison_or_ovf_op(opnum): - from pypy.jit.metainterp.resoperation import opclasses, AbstractResOp + from pypy.jit.metainterp.resoperation import opclasses cls = opclasses[opnum] # hack hack: in theory they are instance method, but they don't use # any instance field, we can use a fake object diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_del.py b/pypy/jit/backend/x86/test/test_del.py --- a/pypy/jit/backend/x86/test/test_del.py +++ b/pypy/jit/backend/x86/test/test_del.py @@ -1,5 +1,4 @@ -import py from pypy.jit.backend.x86.test.test_basic import Jit386Mixin from pypy.jit.metainterp.test.test_del import DelTests diff --git a/pypy/jit/backend/x86/test/test_dict.py b/pypy/jit/backend/x86/test/test_dict.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_dict.py @@ -0,0 +1,9 @@ + +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin +from pypy.jit.metainterp.test.test_dict import DictTests + + +class TestDict(Jit386Mixin, DictTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_dict.py + pass diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -31,7 +31,7 @@ # for the individual tests see # ====> ../../test/runner_test.py - + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -69,22 +69,16 @@ def test_allocations(self): from pypy.rpython.lltypesystem import rstr - + allocs = [None] all = [] + orig_new = self.cpu.gc_ll_descr.funcptr_for_new def f(size): allocs.insert(0, size) - buf = ctypes.create_string_buffer(size) - all.append(buf) - return ctypes.cast(buf, ctypes.c_void_p).value - func = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(f) - addr = ctypes.cast(func, ctypes.c_void_p).value - # ctypes produces an unsigned value. We need it to be signed for, eg, - # relative addressing to work properly. - addr = rffi.cast(lltype.Signed, addr) - + return orig_new(size) + self.cpu.assembler.setup_once() - self.cpu.assembler.malloc_func_addr = addr + self.cpu.gc_ll_descr.funcptr_for_new = f ofs = symbolic.get_field_token(rstr.STR, 'chars', False)[0] res = self.execute_operation(rop.NEWSTR, [ConstInt(7)], 'ref') @@ -108,7 +102,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [ConstInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 # ------------------------------------------------------------ @@ -116,7 +110,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [BoxInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 def test_stringitems(self): @@ -146,7 +140,7 @@ ConstInt(2), BoxInt(38)], 'void', descr) assert resbuf[itemsofs/WORD + 2] == 38 - + self.execute_operation(rop.SETARRAYITEM_GC, [res, BoxInt(3), BoxInt(42)], 'void', descr) @@ -167,7 +161,7 @@ BoxInt(2)], 'int', descr) assert r.value == 38 - + r = self.execute_operation(rop.GETARRAYITEM_GC, [res, BoxInt(3)], 'int', descr) assert r.value == 42 @@ -226,7 +220,7 @@ self.execute_operation(rop.SETFIELD_GC, [res, BoxInt(1234)], 'void', ofs_i) i = self.execute_operation(rop.GETFIELD_GC, [res], 'int', ofs_i) assert i.value == 1234 - + #u = self.execute_operation(rop.GETFIELD_GC, [res, ofs_u], 'int') #assert u.value == 5 self.execute_operation(rop.SETFIELD_GC, [res, ConstInt(1)], 'void', @@ -299,7 +293,7 @@ else: assert result != execute(self.cpu, None, op, None, b).value - + def test_stuff_followed_by_guard(self): boxes = [(BoxInt(1), BoxInt(0)), @@ -523,7 +517,7 @@ def test_debugger_on(self): from pypy.tool.logparser import parse_log_file, extract_category from pypy.rlib import debug - + loop = """ [i0] debug_merge_point('xyz', 0) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -52,9 +52,11 @@ newoperations = [] # def do_rename(var, var_or_const): + if var.concretetype is lltype.Void: + renamings[var] = Constant(None, lltype.Void) + return renamings[var] = var_or_const - if (isinstance(var_or_const, Constant) - and var.concretetype != lltype.Void): + if isinstance(var_or_const, Constant): value = var_or_const.value value = lltype._cast_whatever(var.concretetype, value) renamings_constants[var] = Constant(value, var.concretetype) @@ -441,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -735,29 +739,54 @@ return SpaceOperation(opname, [op.args[0]], op.result) def rewrite_op_getinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 3 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strgetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodegetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodegetitem" - return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + v_inst, v_index, c_field = op.args + if op.result.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + args = [v_inst, v_index, descr] + kind = getkind(op.result.concretetype)[0] + return SpaceOperation('getinteriorfield_gc_%s' % kind, args, + op.result) def rewrite_op_setinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 4 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strsetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodesetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodesetitem" - return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], - op.result) + v_inst, v_index, c_field, v_value = op.args + if v_value.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + kind = getkind(v_value.concretetype)[0] + args = [v_inst, v_index, v_value, descr] + return SpaceOperation('setinteriorfield_gc_%s' % kind, args, + op.result) def _rewrite_equality(self, op, opname): arg0, arg1 = op.args @@ -771,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -788,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -802,6 +844,10 @@ if self._is_gc(op.args[0]): return op + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] + def rewrite_op_force_cast(self, op): v_arg = op.args[0] v_result = op.result @@ -821,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -853,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1070,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -13,7 +13,6 @@ from pypy.translator.simplify import get_funcobj from pypy.translator.unsimplify import split_block from pypy.objspace.flow.model import Constant -from pypy import conftest from pypy.translator.translator import TranslationContext from pypy.annotation.policy import AnnotatorPolicy from pypy.annotation import model as annmodel @@ -38,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -48,15 +49,13 @@ a.build_types(func, argtypes, main_entry_point=True) rtyper = t.buildrtyper(type_system = type_system) rtyper.specialize() - if inline: - auto_inlining(t, threshold=inline) + #if inline: + # auto_inlining(t, threshold=inline) if backendoptimize: from pypy.translator.backendopt.all import backend_optimizations backend_optimizations(t, inline_threshold=inline or 0, remove_asserts=True, really_remove_asserts=True) - #if conftest.option.view: - # t.view() return rtyper def getgraph(func, values): @@ -232,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ @@ -456,6 +466,8 @@ return LLtypeHelpers._dictnext_items(lltype.Ptr(RES), iter) _ll_1_dictiter_nextitems.need_result_type = True + _ll_1_dict_resize = ll_rdict.ll_dict_resize + # ---------- strings and unicode ---------- _ll_1_str_str2unicode = ll_rstr.LLHelpers.ll_str2unicode diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import lltype, rclass, rstr from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -900,6 +901,67 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): from pypy.rpython.lltypesystem import rffi diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -1,4 +1,3 @@ -import py import random try: from itertools import product @@ -16,13 +15,13 @@ from pypy.objspace.flow.model import FunctionGraph, Block, Link from pypy.objspace.flow.model import SpaceOperation, Variable, Constant -from pypy.jit.codewriter.jtransform import Transformer -from pypy.jit.metainterp.history import getkind -from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr, rlist +from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr from pypy.rpython.lltypesystem.module import ll_math from pypy.translator.unsimplify import varoftype from pypy.jit.codewriter import heaptracker, effectinfo from pypy.jit.codewriter.flatten import ListOfKind +from pypy.jit.codewriter.jtransform import Transformer +from pypy.jit.metainterp.history import getkind def const(x): return Constant(x, lltype.typeOf(x)) @@ -37,6 +36,8 @@ return ('calldescr', FUNC, ARGS, RESULT) def fielddescrof(self, STRUCT, name): return ('fielddescr', STRUCT, name) + def interiorfielddescrof(self, ARRAY, name): + return ('interiorfielddescr', ARRAY, name) def arraydescrof(self, ARRAY): return FakeDescr(('arraydescr', ARRAY)) def sizeof(self, STRUCT): @@ -539,7 +540,7 @@ def test_rename_on_links(): v1 = Variable() - v2 = Variable() + v2 = Variable(); v2.concretetype = llmemory.Address v3 = Variable() block = Block([v1]) block.operations = [SpaceOperation('cast_pointer', [v1], v2)] @@ -575,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -597,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -676,6 +702,22 @@ assert op1.args == [v, v_index] assert op1.result == v_result +def test_dict_getinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_result = varoftype(lltype.Signed) + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + v_result) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'getinteriorfield_gc_i' + assert op1.args == [v, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + Constant(None, lltype.Void)) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1 is None + def test_str_setinteriorfield(): v = varoftype(lltype.Ptr(rstr.STR)) v_index = varoftype(lltype.Signed) @@ -702,6 +744,23 @@ assert op1.args == [v, v_index, v_newchr] assert op1.result == v_void +def test_dict_setinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_void = varoftype(lltype.Void) + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + i], + v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'setinteriorfield_gc_i' + assert op1.args == [v, i, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + v_void], v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert not op1 + def test_promote_1(): v1 = varoftype(lltype.Signed) v2 = varoftype(lltype.Signed) @@ -1069,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -6,7 +6,6 @@ from pypy.rlib.debug import make_sure_not_resized from pypy.rpython.lltypesystem import lltype, llmemory, rclass from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rpython.llinterp import LLException from pypy.jit.codewriter.jitcode import JitCode, SwitchDictDescr from pypy.jit.codewriter import heaptracker, longlong from pypy.jit.metainterp.jitexc import JitException, get_llexception, reraise @@ -500,9 +499,12 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b @arguments("r", returns="i") def bhimpl_cast_ptr_to_int(a): i = lltype.cast_ptr_to_int(a) @@ -513,6 +515,10 @@ ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") return lltype.cast_int_to_ptr(llmemory.GCREF, i) + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass + @arguments("i", returns="i") def bhimpl_int_copy(a): return a @@ -631,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") @@ -1154,6 +1163,26 @@ array = cpu.bh_getfield_gc_r(vable, fdescr) return cpu.bh_arraylen_gc(adescr, array) + @arguments("cpu", "r", "i", "d", returns="i") + def bhimpl_getinteriorfield_gc_i(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_i(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="r") + def bhimpl_getinteriorfield_gc_r(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_r(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="f") + def bhimpl_getinteriorfield_gc_f(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_f(array, index, descr) + + @arguments("cpu", "r", "i", "d", "i") + def bhimpl_setinteriorfield_gc_i(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_i(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "r") + def bhimpl_setinteriorfield_gc_r(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_r(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "f") + def bhimpl_setinteriorfield_gc_f(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_f(array, index, descr, value) + @arguments("cpu", "r", "d", returns="i") def bhimpl_getfield_gc_i(cpu, struct, fielddescr): return cpu.bh_getfield_gc_i(struct, fielddescr) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -1,11 +1,8 @@ """This implements pyjitpl's execution of operations. """ -import py -from pypy.rpython.lltypesystem import lltype, llmemory, rstr -from pypy.rpython.ootypesystem import ootype -from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rlib.rarithmetic import ovfcheck, r_uint, intmask, r_longlong +from pypy.rpython.lltypesystem import lltype, rstr +from pypy.rlib.rarithmetic import ovfcheck, r_longlong from pypy.rlib.rtimer import read_timestamp from pypy.rlib.unroll import unrolling_iterable from pypy.jit.metainterp.history import BoxInt, BoxPtr, BoxFloat, check_descr @@ -123,6 +120,29 @@ else: cpu.bh_setarrayitem_raw_i(arraydescr, array, index, itembox.getint()) +def do_getinteriorfield_gc(cpu, _, arraybox, indexbox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + return BoxPtr(cpu.bh_getinteriorfield_gc_r(array, index, descr)) + elif descr.is_float_field(): + return BoxFloat(cpu.bh_getinteriorfield_gc_f(array, index, descr)) + else: + return BoxInt(cpu.bh_getinteriorfield_gc_i(array, index, descr)) + +def do_setinteriorfield_gc(cpu, _, arraybox, indexbox, valuebox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + cpu.bh_setinteriorfield_gc_r(array, index, descr, + valuebox.getref_base()) + elif descr.is_float_field(): + cpu.bh_setinteriorfield_gc_f(array, index, descr, + valuebox.getfloatstorage()) + else: + cpu.bh_setinteriorfield_gc_i(array, index, descr, + valuebox.getint()) + def do_getfield_gc(cpu, _, structbox, fielddescr): struct = structbox.getref_base() if fielddescr.is_pointer_field(): diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -16,6 +16,7 @@ INT = 'i' REF = 'r' FLOAT = 'f' +STRUCT = 's' HOLE = '_' VOID = 'v' @@ -172,6 +173,11 @@ """ raise NotImplementedError + def is_array_of_structs(self): + """ Implement for array descr + """ + raise NotImplementedError + def is_pointer_field(self): """ Implement for field descr """ @@ -923,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -937,6 +946,15 @@ self.aborted_keys = [] self.invalidated_token_numbers = set() + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): @@ -126,14 +128,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,9 +170,14 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,6 +1,6 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -209,13 +220,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -283,11 +300,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -320,6 +337,7 @@ self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} @@ -392,6 +410,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -524,7 +545,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -337,7 +337,7 @@ def optimize_INT_IS_ZERO(self, op): self._optimize_nullness(op, op.getarg(0), False) - def _optimize_oois_ooisnot(self, op, expect_isnot): + def _optimize_oois_ooisnot(self, op, expect_isnot, instance): value0 = self.getvalue(op.getarg(0)) value1 = self.getvalue(op.getarg(1)) if value0.is_virtual(): @@ -355,21 +355,28 @@ elif value0 is value1: self.make_constant_int(op.result, not expect_isnot) else: - cls0 = value0.get_constant_class(self.optimizer.cpu) - if cls0 is not None: - cls1 = value1.get_constant_class(self.optimizer.cpu) - if cls1 is not None and not cls0.same_constant(cls1): - # cannot be the same object, as we know that their - # class is different - self.make_constant_int(op.result, expect_isnot) - return + if instance: + cls0 = value0.get_constant_class(self.optimizer.cpu) + if cls0 is not None: + cls1 = value1.get_constant_class(self.optimizer.cpu) + if cls1 is not None and not cls0.same_constant(cls1): + # cannot be the same object, as we know that their + # class is different + self.make_constant_int(op.result, expect_isnot) + return self.emit_operation(op) + def optimize_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, False) + def optimize_PTR_NE(self, op): - self._optimize_oois_ooisnot(op, True) + self._optimize_oois_ooisnot(op, True, False) - def optimize_PTR_EQ(self, op): - self._optimize_oois_ooisnot(op, False) + def optimize_INSTANCE_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, True) + + def optimize_INSTANCE_PTR_NE(self, op): + self._optimize_oois_ooisnot(op, True, True) ## def optimize_INSTANCEOF(self, op): ## value = self.getvalue(op.args[0]) @@ -448,6 +455,9 @@ if v2.is_constant() and v2.box.getint() == 1: self.make_equal_to(op.result, v1) return + elif v1.is_constant() and v1.box.getint() == 0: + self.make_constant_int(op.result, 0) + return if v1.intbound.known_ge(IntBound(0, 0)) and v2.is_constant(): val = v2.box.getint() if val & (val - 1) == 0 and val > 0: # val == 2**shift @@ -455,10 +465,9 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -508,13 +509,13 @@ ops = """ [p0] guard_class(p0, ConstClass(node_vtable)) [] - i0 = ptr_ne(p0, NULL) + i0 = instance_ptr_ne(p0, NULL) guard_true(i0) [] - i1 = ptr_eq(p0, NULL) + i1 = instance_ptr_eq(p0, NULL) guard_false(i1) [] - i2 = ptr_ne(NULL, p0) + i2 = instance_ptr_ne(NULL, p0) guard_true(i0) [] - i3 = ptr_eq(NULL, p0) + i3 = instance_ptr_eq(NULL, p0) guard_false(i1) [] jump(p0) """ @@ -935,7 +936,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +951,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -2026,7 +2075,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -2181,6 +2230,17 @@ """ self.optimize_loop(ops, expected) + ops = """ + [i0] + i1 = int_floordiv(0, i0) + jump(i1) + """ + expected = """ + [i0] + jump(0) + """ + self.optimize_loop(ops, expected) + def test_fold_partially_constant_ops_ovf(self): ops = """ [i0] @@ -4165,15 +4225,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -4653,11 +4736,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4665,21 +4748,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4688,6 +4791,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4789,6 +4902,18 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_ptr_eq_str_constant(self): + ops = """ + [] + i0 = ptr_eq(s"abc", s"\x00") + finish(i0) + """ + expected = """ + [] + finish(0) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2683,7 +2683,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -3331,7 +3331,7 @@ jump(p1, i1, i2, i6) ''' self.optimize_loop(ops, expected, preamble) - + # ---------- @@ -4783,6 +4783,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -5800,10 +5846,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -7280,7 +7328,7 @@ ops = """ [p1, p2] setarrayitem_gc(p1, 2, 10, descr=arraydescr) - setarrayitem_gc(p2, 3, 13, descr=arraydescr) + setarrayitem_gc(p2, 3, 13, descr=arraydescr) call(0, p1, p2, 0, 0, 10, descr=arraycopydescr) jump(p1, p2) """ @@ -7307,6 +7355,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -185,6 +185,18 @@ EffectInfo([], [arraydescr], [], [arraydescr], oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -240,7 +252,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -59,7 +59,7 @@ def import_from(self, other, optimizer): raise NotImplementedError("should not be called at this level") - + def get_fielddescrlist_cache(cpu): if not hasattr(cpu, '_optimizeopt_fielddescrlist_cache'): result = descrlist_dict() @@ -113,7 +113,7 @@ # if not we_are_translated(): op.name = 'FORCE ' + self.source_op.name - + if self._is_immutable_and_filled_with_constants(optforce): box = optforce.optimizer.constant_fold(op) self.make_constant(box) @@ -239,12 +239,12 @@ for index in range(len(self._items)): self._items[index] = self._items[index].force_at_end_of_preamble(already_forced, optforce) return self - + def _really_force(self, optforce): assert self.source_op is not None if not we_are_translated(): self.source_op.name = 'FORCE ' + self.source_op.name - optforce.emit_operation(self.source_op) + optforce.emit_operation(self.source_op) self.box = box = self.source_op.result for index in range(len(self._items)): subvalue = self._items[index] @@ -271,20 +271,91 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." def new(self): return OptVirtualize() - + def make_virtual(self, known_class, box, source_op=None): vvalue = VirtualValue(self.optimizer.cpu, known_class, box, source_op) self.make_equal_to(box, vvalue) return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -431,6 +502,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -479,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -501,12 +574,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +601,16 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +647,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +665,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -163,17 +163,6 @@ for value in self._chars: value.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -226,18 +215,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +261,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) @@ -540,11 +505,17 @@ # if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + # slicing with constant bounds of a VStringPlainValue, if any of + # the characters is unitialized we don't do this special slice, we + # do the regular copy contents. + for i in range(vstart.box.getint(), vstop.box.getint()): + if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: + break + else: + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -36,6 +36,7 @@ class MIFrame(object): + debug = False def __init__(self, metainterp): self.metainterp = metainterp @@ -164,7 +165,7 @@ if not we_are_translated(): for b in registers[count:]: assert not oldbox.same_box(b) - + def make_result_of_lastop(self, resultbox): got_type = resultbox.type @@ -198,7 +199,7 @@ 'float_add', 'float_sub', 'float_mul', 'float_truediv', 'float_lt', 'float_le', 'float_eq', 'float_ne', 'float_gt', 'float_ge', - 'ptr_eq', 'ptr_ne', + 'ptr_eq', 'ptr_ne', 'instance_ptr_eq', 'instance_ptr_ne', ]: exec py.code.Source(''' @arguments("box", "box") @@ -239,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): @@ -548,6 +549,14 @@ opimpl_getfield_gc_r_pure = _opimpl_getfield_gc_pure_any opimpl_getfield_gc_f_pure = _opimpl_getfield_gc_pure_any + @arguments("box", "box", "descr") + def _opimpl_getinteriorfield_gc_any(self, array, index, descr): + return self.execute_with_descr(rop.GETINTERIORFIELD_GC, descr, + array, index) + opimpl_getinteriorfield_gc_i = _opimpl_getinteriorfield_gc_any + opimpl_getinteriorfield_gc_f = _opimpl_getinteriorfield_gc_any + opimpl_getinteriorfield_gc_r = _opimpl_getinteriorfield_gc_any + @specialize.arg(1) def _opimpl_getfield_gc_any_pureornot(self, opnum, box, fielddescr): tobox = self.metainterp.heapcache.getfield(box, fielddescr) @@ -588,6 +597,15 @@ opimpl_setfield_gc_r = _opimpl_setfield_gc_any opimpl_setfield_gc_f = _opimpl_setfield_gc_any + @arguments("box", "box", "box", "descr") + def _opimpl_setinteriorfield_gc_any(self, array, index, value, descr): + self.execute_with_descr(rop.SETINTERIORFIELD_GC, descr, + array, index, value) + opimpl_setinteriorfield_gc_i = _opimpl_setinteriorfield_gc_any + opimpl_setinteriorfield_gc_f = _opimpl_setinteriorfield_gc_any + opimpl_setinteriorfield_gc_r = _opimpl_setinteriorfield_gc_any + + @arguments("box", "descr") def _opimpl_getfield_raw_any(self, box, fielddescr): return self.execute_with_descr(rop.GETFIELD_RAW, fielddescr, box) @@ -2588,17 +2606,21 @@ self.pc = position # if not we_are_translated(): - print '\tpyjitpl: %s(%s)' % (name, ', '.join(map(repr, args))), + if self.debug: + print '\tpyjitpl: %s(%s)' % (name, ', '.join(map(repr, args))), try: resultbox = unboundmethod(self, *args) except Exception, e: - print '-> %s!' % e.__class__.__name__ + if self.debug: + print '-> %s!' % e.__class__.__name__ raise if num_return_args == 0: - print + if self.debug: + print assert resultbox is None else: - print '-> %r' % (resultbox,) + if self.debug: + print '-> %r' % (resultbox,) assert argcodes[next_argcode] == '>' result_argcode = argcodes[next_argcode + 1] assert resultbox.type == {'i': history.INT, diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -1,5 +1,4 @@ from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.debug import make_sure_not_resized def ResOperation(opnum, args, result, descr=None): cls = opclasses[opnum] @@ -405,8 +404,8 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', - 'CAST_FLOAT_TO_INT/1', - 'CAST_INT_TO_FLOAT/1', + 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would + 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', 'CAST_SINGLEFLOAT_TO_FLOAT/1', # @@ -438,7 +437,8 @@ # 'PTR_EQ/2b', 'PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', + 'INSTANCE_PTR_EQ/2b', + 'INSTANCE_PTR_NE/2b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -457,6 +457,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', + 'GETINTERIORFIELD_GC/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -469,10 +470,12 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', + 'SETINTERIORFIELD_GC/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -139,7 +139,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +273,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +405,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -455,7 +458,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +540,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -884,6 +910,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1201,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3435,7 +3436,159 @@ return sa res = self.meta_interp(f, [16]) assert res == f(16) - + + def test_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "x"]) + class A(object): + def __init__(self, v): + self.v = v + def f(n, x): + while n > 0: + myjitdriver.jit_merge_point(n=n, x=x) + z = 0 / x + a1 = A("key") + a2 = A("\x00") + n -= [a1, a2][z].v is not a2.v + return n + res = self.meta_interp(f, [10, 1]) + assert res == 0 + + def test_instance_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "i", "a1", "a2"]) + class A(object): + pass + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + i += a is a1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + if a is a2: + i += 1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + + def test_virtual_array_of_structs(self): + myjitdriver = JitDriver(greens = [], reds=["n", "d"]) + def f(n): + d = None + while n > 0: + myjitdriver.jit_merge_point(n=n, d=d) + d = {"q": 1} + if n % 2: + d["k"] = n + else: + d["z"] = n + n -= len(d) - d["q"] + return n + res = self.meta_interp(f, [10]) + assert res == 0 + + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = {"key": n} + n = g(x) + del x["key"] + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3490,11 +3643,12 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): - from pypy.rlib.rerased import erase_int, unerase_int, new_erasing_pair - eraseX, uneraseX = new_erasing_pair("X") + eraseX, uneraseX = rerased.new_erasing_pair("X") # class X: def __init__(self, a, b): @@ -3507,19 +3661,20 @@ e = eraseX(X(i, j)) else: try: - e = erase_int(i) + e = rerased.erase_int(i) except OverflowError: return -42 if j & 1: x = uneraseX(e) return x.a - x.b else: - return unerase_int(e) + return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 diff --git a/pypy/jit/metainterp/test/test_dict.py b/pypy/jit/metainterp/test/test_dict.py --- a/pypy/jit/metainterp/test/test_dict.py +++ b/pypy/jit/metainterp/test/test_dict.py @@ -91,7 +91,7 @@ res1 = f(100) res2 = self.meta_interp(f, [100], listops=True) assert res1 == res2 - self.check_loops(int_mod=1) # the hash was traced + self.check_loops(int_mod=1) # the hash was traced and eq, but cached def test_dict_setdefault(self): myjitdriver = JitDriver(greens = [], reds = ['total', 'dct']) @@ -128,7 +128,7 @@ assert f(100) == 50 res = self.meta_interp(f, [100], listops=True) assert res == 50 - self.check_loops(int_mod=1) + self.check_loops(int_mod=1) # key + eq, but cached def test_repeated_lookup(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'd']) @@ -153,10 +153,12 @@ res = self.meta_interp(f, [100], listops=True) assert res == f(50) - self.check_loops({"call": 7, "guard_false": 1, "guard_no_exception": 6, + self.check_loops({"call": 5, "getfield_gc": 1, "getinteriorfield_gc": 1, + "guard_false": 1, "guard_no_exception": 4, "guard_true": 1, "int_and": 1, "int_gt": 1, "int_is_true": 1, "int_sub": 1, "jump": 1, - "new_with_vtable": 1, "setfield_gc": 1}) + "new_with_vtable": 1, "new": 1, "new_array": 1, + "setfield_gc": 3, }) class TestOOtype(DictTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_float.py b/pypy/jit/metainterp/test/test_float.py --- a/pypy/jit/metainterp/test/test_float.py +++ b/pypy/jit/metainterp/test/test_float.py @@ -1,5 +1,6 @@ -import math +import math, sys from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.rlib.rarithmetic import intmask, r_uint class FloatTests: @@ -45,6 +46,34 @@ res = self.interp_operations(f, [-2.0]) assert res == -8.5 + def test_cast_float_to_int(self): + def g(f): + return int(f) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_float_to_uint(self): + def g(f): + return intmask(r_uint(f)) + res = self.interp_operations(g, [sys.maxint*2.0]) + assert res == intmask(long(sys.maxint*2.0)) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_int_to_float(self): + def g(i): + return float(i) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == -12345.0 + + def test_cast_uint_to_float(self): + def g(i): + return float(r_uint(i)) + res = self.interp_operations(g, [intmask(sys.maxint*2)]) + assert type(res) is float and res == float(sys.maxint*2) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == float(long(r_uint(-12345))) + class TestOOtype(FloatTests, OOJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -371,3 +371,17 @@ assert h.is_unescaped(box1) h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) assert not h.is_unescaped(box1) + + h = HeapCache() + h.new_array(box1, lengthbox1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, lengthbox2, box2]) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), [box1] + ) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck +from pypy.rlib.rstring import StringBuilder import py @@ -590,4 +591,14 @@ assert res == 4 self.check_operations_history(int_add_ovf=0) res = self.interp_operations(fn, [sys.maxint]) - assert res == 12 \ No newline at end of file + assert res == 12 + + def test_copy_str_content(self): + def fn(n): + a = StringBuilder() + x = [1] + a.append("hello world") + return x[0] + res = self.interp_operations(fn, [0]) + assert res == 1 + self.check_operations_history(getarrayitem_gc=0, getarrayitem_gc_pure=0 ) \ No newline at end of file diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, @@ -62,7 +62,7 @@ clear_tcache() return jittify_and_run(interp, graph, args, backendopt=backendopt, **kwds) -def jittify_and_run(interp, graph, args, repeat=1, +def jittify_and_run(interp, graph, args, repeat=1, graph_and_interp_only=False, backendopt=False, trace_limit=sys.maxint, inline=False, loop_longevity=0, retrace_limit=5, function_threshold=4, @@ -93,6 +93,8 @@ jd.warmstate.set_param_max_retrace_guards(max_retrace_guards) jd.warmstate.set_param_enable_opts(enable_opts) warmrunnerdesc.finish() + if graph_and_interp_only: + return interp, graph res = interp.eval_graph(graph, args) if not kwds.get('translate_support_code', False): warmrunnerdesc.metainterp_sd.profiler.finish() @@ -157,6 +159,9 @@ def get_stats(): return pyjitpl._warmrunnerdesc.stats +def reset_stats(): + pyjitpl._warmrunnerdesc.stats.clear() + def get_translator(): return pyjitpl._warmrunnerdesc.translator @@ -206,7 +211,7 @@ self.make_enter_functions() self.rewrite_jit_merge_points(policy) - verbose = not self.cpu.translate_support_code + verbose = False # not self.cpu.translate_support_code self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() diff --git a/pypy/module/__builtin__/app_io.py b/pypy/module/__builtin__/app_io.py --- a/pypy/module/__builtin__/app_io.py +++ b/pypy/module/__builtin__/app_io.py @@ -71,7 +71,7 @@ return line[:-1] return line -def input(prompt=None): +def input(prompt=''): """Equivalent to eval(raw_input(prompt)).""" return eval(raw_input(prompt)) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,10 +312,11 @@ class W_XRange(Wrappable): - def __init__(self, space, start, len, step): + def __init__(self, space, start, stop, step): self.space = space self.start = start - self.len = len + self.stop = stop + self.len = get_len_of_range(space, start, stop, step) self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -325,9 +326,8 @@ start, stop = 0, start else: stop = _toint(space, w_stop) - howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, howmany, step) + W_XRange.__init__(obj, space, start, stop, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.len, -self.step)) + self.start - 1, -self.step)) def descr_reduce(self): space = self.space @@ -389,25 +389,24 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, current, remaining, step): + def __init__(self, space, start, stop, step): self.space = space - self.current = current - self.remaining = remaining + self.current = start + self.stop = stop self.step = step def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.remaining > 0: + if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): item = self.current self.current = item + self.step - self.remaining -= 1 return self.space.wrap(item) raise OperationError(self.space.w_StopIteration, self.space.w_None) - def descr_len(self): - return self.space.wrap(self.remaining) + #def descr_len(self): + # return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -418,7 +417,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.remaining), w(self.step)] + tup = [w(self.current), w(self.stop), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_rawinput.py b/pypy/module/__builtin__/test/test_rawinput.py --- a/pypy/module/__builtin__/test/test_rawinput.py +++ b/pypy/module/__builtin__/test/test_rawinput.py @@ -3,29 +3,32 @@ class AppTestRawInput(): - def test_raw_input(self): + def test_input_and_raw_input(self): import sys, StringIO for prompt, expected in [("def:", "abc/ def:/ghi\n"), ("", "abc/ /ghi\n"), (42, "abc/ 42/ghi\n"), (None, "abc/ None/ghi\n"), (Ellipsis, "abc/ /ghi\n")]: - save = sys.stdin, sys.stdout - try: - sys.stdin = StringIO.StringIO("foo\nbar\n") - out = sys.stdout = StringIO.StringIO() - print "abc", # softspace = 1 - out.write('/') - if prompt is Ellipsis: - got = raw_input() - else: - got = raw_input(prompt) - out.write('/') - print "ghi" - finally: - sys.stdin, sys.stdout = save - assert out.getvalue() == expected - assert got == "foo" + for inputfn, inputtext, gottext in [ + (raw_input, "foo\nbar\n", "foo"), + (input, "40+2\n", 42)]: + save = sys.stdin, sys.stdout + try: + sys.stdin = StringIO.StringIO(inputtext) + out = sys.stdout = StringIO.StringIO() + print "abc", # softspace = 1 + out.write('/') + if prompt is Ellipsis: + got = inputfn() + else: + got = inputfn(prompt) + out.write('/') + print "ghi" + finally: + sys.stdin, sys.stdout = save + assert out.getvalue() == expected + assert got == gottext def test_softspace(self): import sys diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, remaining=int, step=int) -def xrangeiter_new(space, current, remaining, step): + at unwrap_spec(current=int, stop=int, step=int) +def xrangeiter_new(space, current, stop, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, remaining, step) + new_iter = W_XRangeIterator(space, current, stop, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -19,7 +19,7 @@ class W_RSocket(Wrappable, RSocket): def __del__(self): self.clear_all_weakrefs() - self.close() + RSocket.__del__(self) def accept_w(self, space): """accept() -> (socket object, address info) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -211,7 +211,9 @@ return result def __del__(self): - self.clear_all_weakrefs() + # note that we don't call clear_all_weakrefs here because + # an array with freed buffer is ok to see - it's just empty with 0 + # length self.setlen(0) def setlen(self, size): diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -824,6 +824,22 @@ r = weakref.ref(a) assert r() is a + def test_subclass_del(self): + import array, gc, weakref + l = [] + + class A(array.array): + pass + + a = A('d') + a.append(3.0) + r = weakref.ref(a, lambda a: l.append(a())) + del a + gc.collect(); gc.collect() # XXX needs two of them right now... + assert l + assert l[0] is None or len(l[0]) == 0 + + class TestCPythonsOwnArray(BaseArrayTests): def setup_class(cls): @@ -844,11 +860,7 @@ cls.w_tempfile = cls.space.wrap( str(py.test.ensuretemp('array').join('tmpfile'))) cls.w_maxint = cls.space.wrap(sys.maxint) - - - - - + def test_buffer_info(self): a = self.array('c', 'Hi!') bi = a.buffer_info() diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -13,6 +13,9 @@ 'empty': 'interp_numarray.zeros', 'ones': 'interp_numarray.ones', 'fromstring': 'interp_support.fromstring', + + 'True_': 'space.w_True', + 'False_': 'space.w_False', } # ufuncs diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -4,30 +4,52 @@ """ from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root -from pypy.module.micronumpy.interp_dtype import W_Float64Dtype -from pypy.module.micronumpy.interp_numarray import Scalar, SingleDimArray, BaseArray +from pypy.module.micronumpy.interp_dtype import W_Float64Dtype, W_BoolDtype +from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, + descr_new_array, scalar_w, SingleDimArray) +from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize class BogusBytecode(Exception): pass -def create_array(dtype, size): - a = SingleDimArray(size, dtype=dtype) - for i in range(size): - dtype.setitem(a.storage, i, dtype.box(float(i % 10))) - return a +class ArgumentMismatch(Exception): + pass + +class ArgumentNotAnArray(Exception): + pass + +class WrongFunctionName(Exception): + pass + +SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] class FakeSpace(object): w_ValueError = None w_TypeError = None + w_None = None + + w_bool = "bool" + w_int = "int" + w_float = "float" + w_list = "list" + w_long = "long" + w_tuple = 'tuple' def __init__(self): """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild + self.w_float64dtype = W_Float64Dtype(self) def issequence_w(self, w_obj): - return True + return isinstance(w_obj, ListObject) or isinstance(w_obj, SingleDimArray) + + def isinstance_w(self, w_obj, w_tp): + return False + + def decode_index4(self, w_idx, size): + return (self.int_w(w_idx), 0, 0, 1) @specialize.argtype(1) def wrap(self, obj): @@ -39,72 +61,382 @@ return IntObject(obj) raise Exception + def newlist(self, items): + return ListObject(items) + + def listview(self, obj): + assert isinstance(obj, ListObject) + return obj.items + def float(self, w_obj): assert isinstance(w_obj, FloatObject) return w_obj def float_w(self, w_obj): + assert isinstance(w_obj, FloatObject) return w_obj.floatval + def int_w(self, w_obj): + if isinstance(w_obj, IntObject): + return w_obj.intval + elif isinstance(w_obj, FloatObject): + return int(w_obj.floatval) + raise NotImplementedError + + def int(self, w_obj): + return w_obj + + def is_true(self, w_obj): + assert isinstance(w_obj, BoolObject) + return w_obj.boolval + + def is_w(self, w_obj, w_what): + return w_obj is w_what + + def type(self, w_obj): + return w_obj.tp + + def gettypefor(self, w_obj): + return None + + def call_function(self, tp, w_dtype): + return w_dtype + + @specialize.arg(1) + def interp_w(self, tp, what): + assert isinstance(what, tp) + return what class FloatObject(W_Root): + tp = FakeSpace.w_float def __init__(self, floatval): self.floatval = floatval class BoolObject(W_Root): + tp = FakeSpace.w_bool def __init__(self, boolval): self.boolval = boolval class IntObject(W_Root): + tp = FakeSpace.w_int def __init__(self, intval): self.intval = intval +class ListObject(W_Root): + tp = FakeSpace.w_list + def __init__(self, items): + self.items = items -space = FakeSpace() +class InterpreterState(object): + def __init__(self, code): + self.code = code + self.variables = {} + self.results = [] -def numpy_compile(bytecode, array_size): - stack = [] - i = 0 - dtype = space.fromcache(W_Float64Dtype) - for b in bytecode: - if b == 'a': - stack.append(create_array(dtype, array_size)) - i += 1 - elif b == 'f': - stack.append(Scalar(dtype, dtype.box(1.2))) - elif b == '+': - right = stack.pop() - res = stack.pop().descr_add(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '-': - right = stack.pop() - res = stack.pop().descr_sub(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '*': - right = stack.pop() - res = stack.pop().descr_mul(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '/': - right = stack.pop() - res = stack.pop().descr_div(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '%': - right = stack.pop() - res = stack.pop().descr_mod(space, right) - assert isinstance(res, BaseArray) - stack.append(res) - elif b == '|': - res = stack.pop().descr_abs(space) - assert isinstance(res, BaseArray) - stack.append(res) + def run(self, space): + self.space = space + for stmt in self.code.statements: + stmt.execute(self) + +class Node(object): + def __eq__(self, other): + return (self.__class__ == other.__class__ and + self.__dict__ == other.__dict__) + + def __ne__(self, other): + return not self == other + + def wrap(self, space): + raise NotImplementedError + + def execute(self, interp): + raise NotImplementedError + +class Assignment(Node): + def __init__(self, name, expr): + self.name = name + self.expr = expr + + def execute(self, interp): + interp.variables[self.name] = self.expr.execute(interp) + + def __repr__(self): + return "%% = %r" % (self.name, self.expr) + +class ArrayAssignment(Node): + def __init__(self, name, index, expr): + self.name = name + self.index = index + self.expr = expr + + def execute(self, interp): + arr = interp.variables[self.name] + w_index = self.index.execute(interp).eval(0).wrap(interp.space) + w_val = self.expr.execute(interp).eval(0).wrap(interp.space) + arr.descr_setitem(interp.space, w_index, w_val) + + def __repr__(self): + return "%s[%r] = %r" % (self.name, self.index, self.expr) + +class Variable(Node): + def __init__(self, name): + self.name = name + + def execute(self, interp): + return interp.variables[self.name] + + def __repr__(self): + return 'v(%s)' % self.name + +class Operator(Node): + def __init__(self, lhs, name, rhs): + self.name = name + self.lhs = lhs + self.rhs = rhs + + def execute(self, interp): + w_lhs = self.lhs.execute(interp) + assert isinstance(w_lhs, BaseArray) + if isinstance(self.rhs, SliceConstant): + # XXX interface has changed on multidim branch + raise NotImplementedError + w_rhs = self.rhs.execute(interp) + if self.name == '+': + w_res = w_lhs.descr_add(interp.space, w_rhs) + elif self.name == '*': + w_res = w_lhs.descr_mul(interp.space, w_rhs) + elif self.name == '-': + w_res = w_lhs.descr_sub(interp.space, w_rhs) + elif self.name == '->': + if isinstance(w_rhs, Scalar): + index = int(interp.space.float_w( + w_rhs.value.wrap(interp.space))) + dtype = interp.space.fromcache(W_Float64Dtype) + return Scalar(dtype, w_lhs.get_concrete().eval(index)) + else: + raise NotImplementedError else: - print "Unknown opcode: %s" % b - raise BogusBytecode() - if len(stack) != 1: - print "Bogus bytecode, uneven stack length" - raise BogusBytecode() - return stack[0] + raise NotImplementedError + if not isinstance(w_res, BaseArray): + dtype = interp.space.fromcache(W_Float64Dtype) + w_res = scalar_w(interp.space, dtype, w_res) + return w_res + + def __repr__(self): + return '(%r %s %r)' % (self.lhs, self.name, self.rhs) + +class FloatConstant(Node): + def __init__(self, v): + self.v = float(v) + + def __repr__(self): + return "Const(%s)" % self.v + + def wrap(self, space): + return space.wrap(self.v) + + def execute(self, interp): + dtype = interp.space.fromcache(W_Float64Dtype) + assert isinstance(dtype, W_Float64Dtype) + return Scalar(dtype, dtype.box(self.v)) + +class RangeConstant(Node): + def __init__(self, v): + self.v = int(v) + + def execute(self, interp): + w_list = interp.space.newlist( + [interp.space.wrap(float(i)) for i in range(self.v)]) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return 'Range(%s)' % self.v + +class Code(Node): + def __init__(self, statements): + self.statements = statements + + def __repr__(self): + return "\n".join([repr(i) for i in self.statements]) + +class ArrayConstant(Node): + def __init__(self, items): + self.items = items + + def wrap(self, space): + return space.newlist([item.wrap(space) for item in self.items]) + + def execute(self, interp): + w_list = self.wrap(interp.space) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return "[" + ", ".join([repr(item) for item in self.items]) + "]" + +class SliceConstant(Node): + def __init__(self): + pass + + def __repr__(self): + return 'slice()' + +class Execute(Node): + def __init__(self, expr): + self.expr = expr + + def __repr__(self): + return repr(self.expr) + + def execute(self, interp): + interp.results.append(self.expr.execute(interp)) + +class FunctionCall(Node): + def __init__(self, name, args): + self.name = name + self.args = args + + def __repr__(self): + return "%s(%s)" % (self.name, ", ".join([repr(arg) + for arg in self.args])) + + def execute(self, interp): + if self.name in SINGLE_ARG_FUNCTIONS: + if len(self.args) != 1: + raise ArgumentMismatch + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray + if self.name == "sum": + w_res = arr.descr_sum(interp.space) + elif self.name == "prod": + w_res = arr.descr_prod(interp.space) + elif self.name == "max": + w_res = arr.descr_max(interp.space) + elif self.name == "min": + w_res = arr.descr_min(interp.space) + elif self.name == "any": + w_res = arr.descr_any(interp.space) + elif self.name == "all": + w_res = arr.descr_all(interp.space) + elif self.name == "unegative": + neg = interp_ufuncs.get(interp.space).negative + w_res = neg.call(interp.space, [arr]) + else: + assert False # unreachable code + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = interp.space.fromcache(W_Float64Dtype) + elif isinstance(w_res, BoolObject): + dtype = interp.space.fromcache(W_BoolDtype) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) + else: + raise WrongFunctionName + +class Parser(object): + def parse_identifier(self, id): + id = id.strip(" ") + #assert id.isalpha() + return Variable(id) + + def parse_expression(self, expr): + tokens = [i for i in expr.split(" ") if i] + if len(tokens) == 1: + return self.parse_constant_or_identifier(tokens[0]) + stack = [] + tokens.reverse() + while tokens: + token = tokens.pop() + if token == ')': + raise NotImplementedError + elif self.is_identifier_or_const(token): + if stack: + name = stack.pop().name + lhs = stack.pop() + rhs = self.parse_constant_or_identifier(token) + stack.append(Operator(lhs, name, rhs)) + else: + stack.append(self.parse_constant_or_identifier(token)) + else: + stack.append(Variable(token)) + assert len(stack) == 1 + return stack[-1] + + def parse_constant(self, v): + lgt = len(v)-1 + assert lgt >= 0 + if ':' in v: + # a slice + assert v == ':' + return SliceConstant() + if v[0] == '[': + return ArrayConstant([self.parse_constant(elem) + for elem in v[1:lgt].split(",")]) + if v[0] == '|': + return RangeConstant(v[1:lgt]) + return FloatConstant(v) + + def is_identifier_or_const(self, v): + c = v[0] + if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or + (c >= '0' and c <= '9') or c in '-.[|:'): + if v == '-' or v == "->": + return False + return True + return False + + def parse_function_call(self, v): + l = v.split('(') + assert len(l) == 2 + name = l[0] + cut = len(l[1]) - 1 + assert cut >= 0 + args = [self.parse_constant_or_identifier(id) + for id in l[1][:cut].split(",")] + return FunctionCall(name, args) + + def parse_constant_or_identifier(self, v): + c = v[0] + if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): + if '(' in v: + return self.parse_function_call(v) + return self.parse_identifier(v) + return self.parse_constant(v) + + def parse_array_subscript(self, v): + v = v.strip(" ") + l = v.split("[") + lgt = len(l[1]) - 1 + assert lgt >= 0 + rhs = self.parse_constant_or_identifier(l[1][:lgt]) + return l[0], rhs + + def parse_statement(self, line): + if '=' in line: + lhs, rhs = line.split("=") + lhs = lhs.strip(" ") + if '[' in lhs: + name, index = self.parse_array_subscript(lhs) + return ArrayAssignment(name, index, self.parse_expression(rhs)) + else: + return Assignment(lhs, self.parse_expression(rhs)) + else: + return Execute(self.parse_expression(line)) + + def parse(self, code): + statements = [] + for line in code.split("\n"): + if '#' in line: + line = line.split('#', 1)[0] + line = line.strip(" ") + if line: + statements.append(self.parse_statement(line)) + return Code(statements) + +def numpy_compile(code): + parser = Parser() + return InterpreterState(parser.parse(code)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -108,6 +108,12 @@ def setitem_w(self, space, storage, i, w_item): self.setitem(storage, i, self.unwrap(space, w_item)) + def fill(self, storage, item, start, stop): + storage = self.unerase(storage) + item = self.unbox(item) + for i in xrange(start, stop): + storage[i] = item + @specialize.argtype(1) def adapt_val(self, val): return self.box(rffi.cast(TP.TO.OF, val)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,6 +14,27 @@ any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'j', 'step', 'stop', 'source', 'dest']) +def descr_new_array(space, w_subtype, w_size_or_iterable, w_dtype=None): + l = space.listview(w_size_or_iterable) + if space.is_w(w_dtype, space.w_None): + w_dtype = None + for w_item in l: + w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) + if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + break + if w_dtype is None: + w_dtype = space.w_None + + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) + ) + arr = SingleDimArray(len(l), dtype=dtype) + i = 0 + for w_elem in l: + dtype.setitem_w(space, arr.storage, i, w_elem) + i += 1 + return arr + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature"] @@ -32,27 +53,6 @@ def add_invalidates(self, other): self.invalidates.append(other) - def descr__new__(space, w_subtype, w_size_or_iterable, w_dtype=None): - l = space.listview(w_size_or_iterable) - if space.is_w(w_dtype, space.w_None): - w_dtype = None - for w_item in l: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): - break - if w_dtype is None: - w_dtype = space.w_None - - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = SingleDimArray(len(l), dtype=dtype) - i = 0 - for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) - i += 1 - return arr - def _unaryop_impl(ufunc_name): def impl(self, space): return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) @@ -565,13 +565,12 @@ arr = SingleDimArray(size, dtype=dtype) one = dtype.adapt_val(1) - for i in xrange(size): - arr.dtype.setitem(arr.storage, i, one) + arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) BaseArray.typedef = TypeDef( 'numarray', - __new__ = interp2app(BaseArray.descr__new__.im_func), + __new__ = interp2app(descr_new_array), __len__ = interp2app(BaseArray.descr_len), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -32,11 +32,17 @@ return self.identity.wrap(space) def descr_call(self, space, __args__): - try: - args_w = __args__.fixedunpack(self.argcount) - except ValueError, e: - raise OperationError(space.w_TypeError, space.wrap(str(e))) - return self.call(space, args_w) + if __args__.keywords or len(__args__.arguments_w) < self.argcount: + raise OperationError(space.w_ValueError, + space.wrap("invalid number of arguments") + ) + elif len(__args__.arguments_w) > self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar @@ -236,22 +242,20 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - w_type = space.type(w_obj) - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) long_dtype = space.fromcache(interp_dtype.W_LongDtype) int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - if space.is_w(w_type, space.w_bool): + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype return current_guess - elif space.is_w(w_type, space.w_int): + elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype): return long_dtype return current_guess - elif space.is_w(w_type, space.w_long): + elif space.isinstance_w(w_obj, space.w_long): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_compile.py @@ -0,0 +1,170 @@ + +import py +from pypy.module.micronumpy.compile import * + +class TestCompiler(object): + def compile(self, code): + return numpy_compile(code) + + def test_vars(self): + code = """ + a = 2 + b = 3 + """ + interp = self.compile(code) + assert isinstance(interp.code.statements[0], Assignment) + assert interp.code.statements[0].name == 'a' + assert interp.code.statements[0].expr.v == 2 + assert interp.code.statements[1].name == 'b' + assert interp.code.statements[1].expr.v == 3 + + def test_array_literal(self): + code = "a = [1,2,3]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [FloatConstant(1), FloatConstant(2), + FloatConstant(3)] + + def test_array_literal2(self): + code = "a = [[1],[2],[3]]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [ArrayConstant([FloatConstant(1)]), + ArrayConstant([FloatConstant(2)]), + ArrayConstant([FloatConstant(3)])] + + def test_expr_1(self): + code = "b = a + 1" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Variable("a"), "+", FloatConstant(1))) + + def test_expr_2(self): + code = "b = a + b - 3" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Operator(Variable("a"), "+", Variable("b")), "-", + FloatConstant(3))) + + def test_expr_3(self): + # an equivalent of range + code = "a = |20|" + interp = self.compile(code) + assert interp.code.statements[0].expr == RangeConstant(20) + + def test_expr_only(self): + code = "3 + a" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(FloatConstant(3), "+", Variable("a"))) + + def test_array_access(self): + code = "a -> 3" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(Variable("a"), "->", FloatConstant(3))) + + def test_function_call(self): + code = "sum(a)" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + FunctionCall("sum", [Variable("a")])) + + def test_comment(self): + code = """ + # some comment + a = b + 3 # another comment + """ + interp = self.compile(code) + assert interp.code.statements[0] == Assignment( + 'a', Operator(Variable('b'), "+", FloatConstant(3))) + +class TestRunner(object): + def run(self, code): + interp = numpy_compile(code) + space = FakeSpace() + interp.run(space) + return interp + + def test_one(self): + code = """ + a = 3 + b = 4 + a + b + """ + interp = self.run(code) + assert sorted(interp.variables.keys()) == ['a', 'b'] + assert interp.results[0] + + def test_array_add(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b + """ + interp = self.run(code) + assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + + def test_array_getitem(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 3 + 6 + + def test_range_getitem(self): + code = """ + r = |20| + 3 + r -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 6 + + def test_sum(self): + code = """ + a = [1,2,3,4,5] + r = sum(a) + r + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_array_write(self): + code = """ + a = [1,2,3,4,5] + a[3] = 15 + a -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_min(self): + interp = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert interp.results[0].value.val == -24 + + def test_max(self): + interp = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert interp.results[0].value.val == 256 + + def test_slice(self): + py.test.skip("in progress") + interp = self.run(""" + a = [1,2,3,4] + b = a -> : + b -> 3 + """) + assert interp.results[0].value.val == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -36,37 +36,40 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + import numpy - a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + a = numpy.array([0, 1, 2, 2.5], dtype='?') + assert a[0] is numpy.False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is numpy.True_ def test_copy_array_with_dtype(self): - from numpy import array - a = array([0, 1, 2, 3], dtype=long) + import numpy + + a = numpy.array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + a = numpy.array([0, 1, 2, 3], dtype=bool) + assert a[0] is numpy.False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is numpy.False_ def test_zeros_bool(self): - from numpy import zeros - a = zeros(10, dtype=bool) + import numpy + + a = numpy.zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is numpy.False_ def test_ones_bool(self): - from numpy import ones - a = ones(10, dtype=bool) + import numpy + + a = numpy.ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is numpy.True_ def test_zeros_long(self): from numpy import zeros @@ -77,7 +80,7 @@ def test_ones_long(self): from numpy import ones - a = ones(10, dtype=bool) + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 @@ -96,8 +99,9 @@ def test_bool_binop_types(self): from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -214,7 +214,7 @@ def test_add_other(self): from numpy import array a = array(range(5)) - b = array(reversed(range(5))) + b = array(range(4, -1, -1)) c = a + b for i in range(5): assert c[i] == 4 @@ -264,18 +264,19 @@ assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpy + + a = numpy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpy.dtype(bool) + assert b[0] is numpy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpy.True_ def test_mul_constant(self): from numpy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -24,10 +24,10 @@ def test_wrong_arguments(self): from numpy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): from numpy import negative, sign, minimum @@ -82,6 +82,8 @@ b = negative(a) a[0] = 5.0 assert b[0] == 5.0 + a = array(range(30)) + assert negative(a + a)[3] == -6 def test_abs(self): from numpy import array, absolute @@ -355,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,253 +1,195 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature -from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject) -from pypy.module.micronumpy.interp_dtype import W_Int32Dtype, W_Float64Dtype, W_Int64Dtype, W_UInt64Dtype -from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, - SingleDimSlice, scalar_w) +from pypy.module.micronumpy.compile import (FakeSpace, + FloatObject, IntObject, numpy_compile, BoolObject) +from pypy.module.micronumpy.interp_numarray import (SingleDimArray, + SingleDimSlice) from pypy.rlib.nonconst import NonConstant -from pypy.rpython.annlowlevel import llstr -from pypy.rpython.test.test_llinterp import interpret +from pypy.rpython.annlowlevel import llstr, hlstr +from pypy.jit.metainterp.warmspot import reset_stats +from pypy.jit.metainterp import pyjitpl import py class TestNumpyJIt(LLJitMixin): - def setup_class(cls): - cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - cls.int64_dtype = cls.space.fromcache(W_Int64Dtype) - cls.uint64_dtype = cls.space.fromcache(W_UInt64Dtype) - cls.int32_dtype = cls.space.fromcache(W_Int32Dtype) + graph = None + interp = None + + def run(self, code): + space = FakeSpace() + + def f(code): + interp = numpy_compile(hlstr(code)) + interp.run(space) + res = interp.results[-1] + w_res = res.eval(0).wrap(interp.space) + if isinstance(w_res, BoolObject): + return float(w_res.boolval) + elif isinstance(w_res, FloatObject): + return w_res.floatval + elif isinstance(w_res, IntObject): + return w_res.intval + else: + return -42. + + if self.graph is None: + interp, graph = self.meta_interp(f, [llstr(code)], + listops=True, + backendopt=True, + graph_and_interp_only=True) + self.__class__.interp = interp + self.__class__.graph = graph + + reset_stats() + pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() + return self.interp.eval_graph(self.graph, [llstr(code)]) def test_add(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ar, ar]) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + b -> 3 + """) self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) - assert result == f(5) + assert result == 3 + 3 def test_floatadd(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ - ar, - scalar_w(self.space, self.float64_dtype, self.space.wrap(4.5)) - ], - ) - assert isinstance(v, BaseArray) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + 3 + a -> 3 + """) + assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_sum(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + sum(b) + """) + assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_prod(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_prod(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + prod(b) + """) + expected = 1 + for i in range(30): + expected *= i * 2 + assert result == expected self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_max(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_max(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert result == 256 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_gt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, - "guard_false": 1, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_min(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_min(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert result == -24 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_argmin(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - return ar.descr_add(space, ar).descr_argmin(space).intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_all(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(1.0)) - j += 1 - return ar.descr_add(space, ar).descr_all(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, - "int_lt": 1, "guard_true": 2, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_any(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - return ar.descr_add(space, ar).descr_any(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = [0,0,0,0,0,0,0,0,0,0,0] + a[8] = -12 + b = a + a + any(b) + """) + assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, "guard_false": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) + "float_ne": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1, + "guard_false": 1}) def test_already_forced(self): - space = self.space - - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - assert isinstance(v1, BaseArray) - v2 = interp_ufuncs.get(self.space).multiply.call(space, [v1, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - v1.force_if_needed() - assert isinstance(v2, BaseArray) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + 4.5 + b -> 5 # forces + c = b * 8 + c -> 5 + """) + assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - assert result == f(5) def test_ufunc(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + """) + assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - assert result == f(5) - def test_appropriate_specialization(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - for i in xrange(5): - v1 = interp_ufuncs.get(self.space).multiply.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - self.meta_interp(f, [5], listops=True, backendopt=True) + def test_specialization(self): + self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 From noreply at buildbot.pypy.org Thu Nov 3 15:12:56 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 15:12:56 +0100 (CET) Subject: [pypy-commit] pypy stm: A few extra operations that are always allowed. Message-ID: <20111103141256.87943820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48700:e6d9748a9589 Date: 2011-11-03 15:11 +0100 http://bitbucket.org/pypy/pypy/changeset/e6d9748a9589/ Log: A few extra operations that are always allowed. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -147,6 +147,12 @@ assert not opdesc.canraise yield opname +def enum_tryfold_ops(): + """Enumerate operations that can be constant-folded.""" + for opname, opdesc in LL_OPERATIONS.iteritems(): + if opdesc.tryfold: + yield opname + class Entry(ExtRegistryEntry): "Annotation and rtyping of LLOp instances, which are callable." diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -7,10 +7,10 @@ ALWAYS_ALLOW_OPERATIONS = set([ - 'direct_call', 'force_cast', + 'direct_call', 'force_cast', 'keepalive', 'cast_ptr_to_adr', 'debug_print', 'debug_assert', ]) -ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_foldable_ops()) +ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_tryfold_ops()) def op_in_set(opname, set): return opname in set From noreply at buildbot.pypy.org Thu Nov 3 15:21:42 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 15:21:42 +0100 (CET) Subject: [pypy-commit] pypy step-one-xrange: test ensuring xrange iterator only produces a single setitem Message-ID: <20111103142142.0BCA4820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: step-one-xrange Changeset: r48701:3aaee477e4be Date: 2011-11-03 15:21 +0100 http://bitbucket.org/pypy/pypy/changeset/3aaee477e4be/ Log: test ensuring xrange iterator only produces a single setitem diff --git a/pypy/module/pypyjit/test_pypy_c/test_misc.py b/pypy/module/pypyjit/test_pypy_c/test_misc.py --- a/pypy/module/pypyjit/test_pypy_c/test_misc.py +++ b/pypy/module/pypyjit/test_pypy_c/test_misc.py @@ -128,6 +128,36 @@ jump(..., descr=...) """) + def test_xrange_iter(self): + def main(n): + def g(n): + return xrange(n) + s = 0 + for i in xrange(n): # ID: for + tmp = g(n) + s += tmp[i] # ID: getitem + a = 0 + return s + # + log = self.run(main, [1000]) + assert log.result == 1000 * 999 / 2 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i15 = int_lt(i10, i11) + guard_true(i15, descr=...) + i17 = int_add(i10, 1) + i18 = force_token() + setfield_gc(p9, i17, descr=<.* .*W_XRangeIterator.inst_current .*>) + guard_not_invalidated(descr=...) + i21 = int_lt(i10, 0) + guard_false(i21, descr=...) + i22 = int_lt(i10, i14) + guard_true(i22, descr=...) + i23 = int_add_ovf(i6, i10) + guard_no_overflow(descr=...) + --TICK-- + jump(..., descr=) + """) def test_range_iter(self): def main(n): From noreply at buildbot.pypy.org Thu Nov 3 16:11:28 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 3 Nov 2011 16:11:28 +0100 (CET) Subject: [pypy-commit] pypy win64 test: merge default Message-ID: <20111103151128.1F8A6820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64 test Changeset: r48702:19ea93d6b3ae Date: 2011-11-03 16:02 +0100 http://bitbucket.org/pypy/pypy/changeset/19ea93d6b3ae/ Log: merge default diff too long, truncating to 10000 out of 87222 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,1 +1,3 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 +b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,22 +37,22 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Antonio Cuni Amaury Forgeot d'Arc - Antonio Cuni Samuele Pedroni Michael Hudson Holger Krekel + Benjamin Peterson Christian Tismer - Benjamin Peterson + Hakan Ardo + Alex Gaynor Eric van Riet Paap - Anders Chrigström - Håkan Ardö + Anders Chrigstrom + David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer - Alex Gaynor - David Schneider - Aurelién Campeas + Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann @@ -63,16 +63,17 @@ Bartosz Skowron Jakub Gustak Guido Wesdorp + Daniel Roberts Adrien Di Mascio Laura Creighton Ludovic Aubry Niko Matsakis - Daniel Roberts Jason Creighton - Jacob Hallén + Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij + Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -83,9 +84,13 @@ Alexandre Fayolle Marius Gedminas Simon Burton + Justin Peel Jean-Paul Calderone John Witulski + Lukas Diekmann + holger krekel Wim Lavrijsen + Dario Bertini Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum @@ -97,15 +102,16 @@ Georg Brandl Gerald Klix Wanja Saatkamp + Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz - Dario Bertini David Malcolm Eugene Oden Henry Mason + Sven Hager Lukas Renggli + Ilya Osadchiy Guenter Jantzen - Ronny Pfannschmidt Bert Freudenberg Amit Regmi Ben Young @@ -122,8 +128,8 @@ Jared Grubb Karl Bartel Gabriel Lavoie + Victor Stinner Brian Dorsey - Victor Stinner Stuart Williams Toby Watson Antoine Pitrou @@ -134,19 +140,23 @@ Jonathan David Riehl Elmo Mäntynen Anders Qvist - Beatrice Düring + Beatrice During Alexander Sedov + Timo Paulssen + Corbin Simpson Vincent Legoll + Romain Guillebert Alan McIntyre - Romain Guillebert Alex Perry Jens-Uwe Mager + Simon Cross Dan Stromberg - Lukas Diekmann + Guillebert Romain Carl Meyer Pieter Zieschang Alejandro J. Cura Sylvain Thenault + Christoph Gerum Travis Francis Athougies Henrik Vendelbo Lutz Paelike @@ -157,6 +167,7 @@ Miguel de Val Borro Ignas Mikalajunas Artur Lisiecki + Philip Jenvey Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -165,26 +176,31 @@ Gustavo Niemeyer William Leslie Akira Li - Kristján Valur Jónsson + Kristjan Valur Jonsson Bobby Impollonia + Michael Hudson-Doyle Andrew Thompson Anders Sigfridsson + Floris Bruynooghe Jacek Generowicz Dan Colish - Sven Hager Zooko Wilcox-O Hearn + Dan Villiom Podlaski Christiansen Anders Hammarquist + Chris Lambacher Dinu Gherman Dan Colish + Brett Cannon Daniel Neuhäuser Michael Chermside Konrad Delong Anna Ravencroft Greg Price Armin Ronacher + Christian Muirhead Jim Baker - Philip Jenvey Rodrigo Araújo + Romain Guillebert Heinrich-Heine University, Germany Open End AB (formerly AB Strakt), Sweden diff --git a/ctypes_configure/configure.py b/ctypes_configure/configure.py --- a/ctypes_configure/configure.py +++ b/ctypes_configure/configure.py @@ -559,7 +559,9 @@ C_HEADER = """ #include #include /* for offsetof() */ -#include /* FreeBSD: for uint64_t */ +#ifndef _WIN32 +# include /* FreeBSD: for uint64_t */ +#endif void dump(char* key, int value) { printf("%s: %d\\n", key, value); diff --git a/ctypes_configure/stdoutcapture.py b/ctypes_configure/stdoutcapture.py --- a/ctypes_configure/stdoutcapture.py +++ b/ctypes_configure/stdoutcapture.py @@ -15,6 +15,15 @@ not hasattr(os, 'fdopen')): self.dummy = 1 else: + try: + self.tmpout = os.tmpfile() + if mixed_out_err: + self.tmperr = self.tmpout + else: + self.tmperr = os.tmpfile() + except OSError: # bah? on at least one Windows box + self.dummy = 1 + return self.dummy = 0 # make new stdout/stderr files if needed self.localoutfd = os.dup(1) @@ -29,11 +38,6 @@ sys.stderr = os.fdopen(self.localerrfd, 'w', 0) else: self.saved_stderr = None - self.tmpout = os.tmpfile() - if mixed_out_err: - self.tmperr = self.tmpout - else: - self.tmperr = os.tmpfile() os.dup2(self.tmpout.fileno(), 1) os.dup2(self.tmperr.fileno(), 2) diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -154,18 +154,18 @@ RegrTest('test_cmd.py'), RegrTest('test_cmd_line_script.py'), RegrTest('test_codeccallbacks.py', core=True), - RegrTest('test_codecencodings_cn.py'), - RegrTest('test_codecencodings_hk.py'), - RegrTest('test_codecencodings_jp.py'), - RegrTest('test_codecencodings_kr.py'), - RegrTest('test_codecencodings_tw.py'), + RegrTest('test_codecencodings_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_tw.py', usemodules='_multibytecodec'), - RegrTest('test_codecmaps_cn.py'), - RegrTest('test_codecmaps_hk.py'), - RegrTest('test_codecmaps_jp.py'), - RegrTest('test_codecmaps_kr.py'), - RegrTest('test_codecmaps_tw.py'), - RegrTest('test_codecs.py', core=True), + RegrTest('test_codecmaps_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_tw.py', usemodules='_multibytecodec'), + RegrTest('test_codecs.py', core=True, usemodules='_multibytecodec'), RegrTest('test_codeop.py', core=True), RegrTest('test_coercion.py', core=True), RegrTest('test_collections.py'), @@ -314,10 +314,10 @@ RegrTest('test_mmap.py'), RegrTest('test_module.py', core=True), RegrTest('test_modulefinder.py'), - RegrTest('test_multibytecodec.py'), + RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), @@ -359,7 +359,7 @@ RegrTest('test_property.py', core=True), RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), - RegrTest('test_pwd.py', skip=skip_win32), + RegrTest('test_pwd.py', usemodules="pwd", skip=skip_win32), RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -489,9 +489,12 @@ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI return CFunctionType -_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr) def cast(obj, typ): - return _cast(obj, obj, typ) + try: + c_void_p.from_param(obj) + except TypeError, e: + raise ArgumentError(str(e)) + return _cast_addr(obj, obj, typ) _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr) def string_at(ptr, size=-1): diff --git a/lib-python/modified-2.7/ctypes/util.py b/lib-python/modified-2.7/ctypes/util.py --- a/lib-python/modified-2.7/ctypes/util.py +++ b/lib-python/modified-2.7/ctypes/util.py @@ -72,8 +72,8 @@ return name if os.name == "posix" and sys.platform == "darwin": - from ctypes.macholib.dyld import dyld_find as _dyld_find def find_library(name): + from ctypes.macholib.dyld import dyld_find as _dyld_find possible = ['lib%s.dylib' % name, '%s.dylib' % name, '%s.framework/%s' % (name, name)] diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/modified-2.7/distutils/sysconfig_pypy.py --- a/lib-python/modified-2.7/distutils/sysconfig_pypy.py +++ b/lib-python/modified-2.7/distutils/sysconfig_pypy.py @@ -116,6 +116,12 @@ if compiler.compiler_type == "unix": compiler.compiler_so.extend(['-fPIC', '-Wimplicit']) compiler.shared_lib_extension = get_config_var('SO') + if "CFLAGS" in os.environ: + cflags = os.environ["CFLAGS"] + compiler.compiler.append(cflags) + compiler.compiler_so.append(cflags) + compiler.linker_so.append(cflags) + from sysconfig_cpython import ( parse_makefile, _variable_rx, expand_makefile_vars) diff --git a/lib-python/modified-2.7/distutils/unixccompiler.py b/lib-python/modified-2.7/distutils/unixccompiler.py --- a/lib-python/modified-2.7/distutils/unixccompiler.py +++ b/lib-python/modified-2.7/distutils/unixccompiler.py @@ -324,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. + + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. + version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. + pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. + """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/sqlite3/test/regression.py b/lib-python/modified-2.7/sqlite3/test/regression.py --- a/lib-python/modified-2.7/sqlite3/test/regression.py +++ b/lib-python/modified-2.7/sqlite3/test/regression.py @@ -274,6 +274,18 @@ cur.execute("UPDATE foo SET id = 3 WHERE id = 1") self.assertEqual(cur.description, None) + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/test/regrtest.py b/lib-python/modified-2.7/test/regrtest.py --- a/lib-python/modified-2.7/test/regrtest.py +++ b/lib-python/modified-2.7/test/regrtest.py @@ -1403,7 +1403,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_bz2.py b/lib-python/modified-2.7/test/test_bz2.py --- a/lib-python/modified-2.7/test/test_bz2.py +++ b/lib-python/modified-2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) diff --git a/lib-python/modified-2.7/test/test_fcntl.py b/lib-python/modified-2.7/test/test_fcntl.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/test/test_fcntl.py @@ -0,0 +1,108 @@ +"""Test program for the fcntl C module. + +OS/2+EMX doesn't support the file locking operations. + +""" +import os +import struct +import sys +import unittest +from test.test_support import (verbose, TESTFN, unlink, run_unittest, + import_module) + +# Skip test if no fnctl module. +fcntl = import_module('fcntl') + + +# TODO - Write tests for flock() and lockf(). + +def get_lockdata(): + if sys.platform.startswith('atheos'): + start_len = "qq" + else: + try: + os.O_LARGEFILE + except AttributeError: + start_len = "ll" + else: + start_len = "qq" + + if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', + 'Darwin1.2', 'darwin', + 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', + 'freebsd6', 'freebsd7', 'freebsd8', + 'bsdos2', 'bsdos3', 'bsdos4', + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): + if struct.calcsize('l') == 8: + off_t = 'l' + pid_t = 'i' + else: + off_t = 'lxxxx' + pid_t = 'l' + lockdata = struct.pack(off_t + off_t + pid_t + 'hh', 0, 0, 0, + fcntl.F_WRLCK, 0) + elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']: + lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0) + elif sys.platform in ['os2emx']: + lockdata = None + else: + lockdata = struct.pack('hh'+start_len+'hh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) + if lockdata: + if verbose: + print 'struct.pack: ', repr(lockdata) + return lockdata + +lockdata = get_lockdata() + + +class TestFcntl(unittest.TestCase): + + def setUp(self): + self.f = None + + def tearDown(self): + if self.f and not self.f.closed: + self.f.close() + unlink(TESTFN) + + def test_fcntl_fileno(self): + # the example from the library docs + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK) + if verbose: + print 'Status from fcntl with O_NONBLOCK: ', rv + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETLKW, lockdata) + if verbose: + print 'String from fcntl with F_SETLKW: ', repr(rv) + self.f.close() + + def test_fcntl_file_descriptor(self): + # again, but pass the file rather than numeric descriptor + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f, fcntl.F_SETFL, os.O_NONBLOCK) + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f, fcntl.F_SETLKW, lockdata) + self.f.close() + + def test_fcntl_64_bit(self): + # Issue #1309352: fcntl shouldn't fail when the third arg fits in a + # C 'long' but not in a C 'int'. + try: + cmd = fcntl.F_NOTIFY + # This flag is larger than 2**31 in 64-bit builds + flags = fcntl.DN_MULTISHOT + except AttributeError: + self.skipTest("F_NOTIFY or DN_MULTISHOT unavailable") + fd = os.open(os.path.dirname(os.path.abspath(TESTFN)), os.O_RDONLY) + try: + fcntl.fcntl(fd, cmd, flags) + finally: + os.close(fd) + + +def test_main(): + run_unittest(TestFcntl) + +if __name__ == '__main__': + test_main() diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_multibytecodec.py b/lib-python/modified-2.7/test/test_multibytecodec.py --- a/lib-python/modified-2.7/test/test_multibytecodec.py +++ b/lib-python/modified-2.7/test/test_multibytecodec.py @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -966,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -978,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/modified-2.7/test/test_tarfile.py copy from lib-python/2.7/test/test_tarfile.py copy to lib-python/modified-2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/modified-2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. @@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -859,7 +871,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, "rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -938,6 +952,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1030,6 +1045,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1058,6 +1074,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1077,6 +1094,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1120,6 +1138,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1174,6 +1193,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1186,6 +1206,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1213,6 +1234,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1231,7 +1253,9 @@ def test_fileobj(self): self._create_testtar() - data = open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1257,7 +1281,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/modified-2.7/test/test_tempfile.py b/lib-python/modified-2.7/test/test_tempfile.py --- a/lib-python/modified-2.7/test/test_tempfile.py +++ b/lib-python/modified-2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! + meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. + + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. + raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? + + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -54,7 +54,8 @@ def get_ffi_argtype(self): if self._ffiargtype: return self._ffiargtype - return _shape_to_ffi_type(self._ffiargshape) + self._ffiargtype = _shape_to_ffi_type(self._ffiargshape) + return self._ffiargtype def _CData_output(self, resbuffer, base=None, index=-1): #assert isinstance(resbuffer, _rawffi.ArrayInstance) @@ -166,7 +167,8 @@ return tp._alignmentofinstances() def byref(cdata): - from ctypes import pointer + # "pointer" is imported at the end of this module to avoid circular + # imports return pointer(cdata) def cdata_from_address(self, address): @@ -224,5 +226,9 @@ 'Z' : _ffi.types.void_p, 'X' : _ffi.types.void_p, 'v' : _ffi.types.sshort, + '?' : _ffi.types.ubyte, } + +# used by "byref" +from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/function.py b/lib_pypy/_ctypes/function.py --- a/lib_pypy/_ctypes/function.py +++ b/lib_pypy/_ctypes/function.py @@ -78,8 +78,6 @@ _com_iid = None _is_fastpath = False - __restype_set = False - def _getargtypes(self): return self._argtypes_ @@ -93,13 +91,15 @@ raise TypeError( "item %d in _argtypes_ has no from_param method" % ( i + 1,)) - # - if all([hasattr(argtype, '_ffiargshape') for argtype in argtypes]): - fastpath_cls = make_fastpath_subclass(self.__class__) - fastpath_cls.enable_fastpath_maybe(self) self._argtypes_ = list(argtypes) + self._check_argtypes_for_fastpath() argtypes = property(_getargtypes, _setargtypes) + def _check_argtypes_for_fastpath(self): + if all([hasattr(argtype, '_ffiargshape') for argtype in self._argtypes_]): + fastpath_cls = make_fastpath_subclass(self.__class__) + fastpath_cls.enable_fastpath_maybe(self) + def _getparamflags(self): return self._paramflags @@ -149,7 +149,6 @@ return self._restype_ def _setrestype(self, restype): - self.__restype_set = True self._ptr = None if restype is int: from ctypes import c_int @@ -219,6 +218,7 @@ import ctypes restype = ctypes.c_int self._ptr = self._getfuncptr_fromaddress(self._argtypes_, restype) + self._check_argtypes_for_fastpath() return @@ -296,13 +296,12 @@ "This function takes %d argument%s (%s given)" % (len(self._argtypes_), plural, len(args))) - # check that arguments are convertible - ## XXX Not as long as ctypes.cast is a callback function with - ## py_object arguments... - ## self._convert_args(self._argtypes_, args, {}) - try: - res = self.callable(*args) + newargs = self._convert_args_for_callback(argtypes, args) + except (UnicodeError, TypeError, ValueError), e: + raise ArgumentError(str(e)) + try: + res = self.callable(*newargs) except: exc_info = sys.exc_info() traceback.print_tb(exc_info[2], file=sys.stderr) @@ -316,10 +315,6 @@ warnings.warn('C function without declared arguments called', RuntimeWarning, stacklevel=2) argtypes = [] - - if not self.__restype_set: - warnings.warn('C function without declared return type called', - RuntimeWarning, stacklevel=2) if self._com_index: from ctypes import cast, c_void_p, POINTER @@ -366,7 +361,10 @@ if self._flags_ & _rawffi.FUNCFLAG_USE_LASTERROR: set_last_error(_rawffi.get_last_error()) # - return self._build_result(self._restype_, result, newargs) + try: + return self._build_result(self._restype_, result, newargs) + finally: + funcptr.free_temp_buffers() def _do_errcheck(self, result, args): # The 'errcheck' protocol @@ -466,6 +464,19 @@ return cobj, cobj._to_ffi_param(), type(cobj) + def _convert_args_for_callback(self, argtypes, args): + assert len(argtypes) == len(args) + newargs = [] + for argtype, arg in zip(argtypes, args): + param = argtype.from_param(arg) + _type_ = getattr(argtype, '_type_', None) + if _type_ == 'P': # special-case for c_void_p + param = param._get_buffer_value() + elif self._is_primitive(argtype): + param = param.value + newargs.append(param) + return newargs + def _convert_args(self, argtypes, args, kwargs, marker=object()): newargs = [] outargs = [] @@ -556,6 +567,9 @@ newargtypes.append(newargtype) return keepalives, newargs, newargtypes, outargs + @staticmethod + def _is_primitive(argtype): + return argtype.__bases__[0] is _SimpleCData def _wrap_result(self, restype, result): """ @@ -564,7 +578,7 @@ """ # hack for performance: if restype is a "simple" primitive type, don't # allocate the buffer because it's going to be thrown away immediately - if restype.__bases__[0] is _SimpleCData and not restype._is_pointer_like(): + if self._is_primitive(restype) and not restype._is_pointer_like(): return result # shape = restype._ffishape @@ -680,7 +694,7 @@ try: result = self._call_funcptr(funcptr, *args) result = self._do_errcheck(result, args) - except (TypeError, ArgumentError): # XXX, should be FFITypeError + except (TypeError, ArgumentError, UnicodeDecodeError): assert self._slowpath_allowed return CFuncPtr.__call__(self, *args) return result diff --git a/lib_pypy/_ctypes/primitive.py b/lib_pypy/_ctypes/primitive.py --- a/lib_pypy/_ctypes/primitive.py +++ b/lib_pypy/_ctypes/primitive.py @@ -10,6 +10,8 @@ from _ctypes.builtin import ConvMode from _ctypes.array import Array from _ctypes.pointer import _Pointer, as_ffi_pointer +#from _ctypes.function import CFuncPtr # this import is moved at the bottom + # because else it's circular class NULL(object): pass @@ -86,7 +88,7 @@ return res if isinstance(value, Array): return value - if isinstance(value, _Pointer): + if isinstance(value, (_Pointer, CFuncPtr)): return cls.from_address(value._buffer.buffer) if isinstance(value, (int, long)): return cls(value) @@ -338,3 +340,5 @@ def __nonzero__(self): return self._buffer[0] not in (0, '\x00') + +from _ctypes.function import CFuncPtr diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -14,6 +14,15 @@ raise TypeError("Expected CData subclass, got %s" % (tp,)) if isinstance(tp, StructOrUnionMeta): tp._make_final() + if len(f) == 3: + if (not hasattr(tp, '_type_') + or not isinstance(tp._type_, str) + or tp._type_ not in "iIhHbBlL"): + #XXX: are those all types? + # we just dont get the type name + # in the interp levle thrown TypeError + # from rawffi if there are more + raise TypeError('bit fields not allowed for type ' + tp.__name__) all_fields = [] for cls in reversed(inspect.getmro(superclass)): @@ -34,34 +43,37 @@ for i, field in enumerate(all_fields): name = field[0] value = field[1] + is_bitfield = (len(field) == 3) fields[name] = Field(name, self._ffistruct.fieldoffset(name), self._ffistruct.fieldsize(name), - value, i) + value, i, is_bitfield) if anonymous_fields: resnames = [] for i, field in enumerate(all_fields): name = field[0] value = field[1] + is_bitfield = (len(field) == 3) startpos = self._ffistruct.fieldoffset(name) if name in anonymous_fields: for subname in value._names: resnames.append(subname) - relpos = startpos + value._fieldtypes[subname].offset - subvalue = value._fieldtypes[subname].ctype + subfield = getattr(value, subname) + relpos = startpos + subfield.offset + subvalue = subfield.ctype fields[subname] = Field(subname, relpos, subvalue._sizeofinstances(), - subvalue, i) + subvalue, i, is_bitfield) else: resnames.append(name) names = resnames self._names = names - self._fieldtypes = fields + self.__dict__.update(fields) class Field(object): - def __init__(self, name, offset, size, ctype, num): - for k in ('name', 'offset', 'size', 'ctype', 'num'): + def __init__(self, name, offset, size, ctype, num, is_bitfield): + for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): self.__dict__[k] = locals()[k] def __setattr__(self, name, value): @@ -71,6 +83,35 @@ return "" % (self.name, self.offset, self.size) + def __get__(self, obj, cls=None): + if obj is None: + return self + if self.is_bitfield: + # bitfield member, use direct access + return obj._buffer.__getattr__(self.name) + else: + fieldtype = self.ctype + offset = self.num + suba = obj._subarray(fieldtype, self.name) + return fieldtype._CData_output(suba, obj, offset) + + + def __set__(self, obj, value): + fieldtype = self.ctype + cobj = fieldtype.from_param(value) + if ensure_objects(cobj) is not None: + key = keepalive_key(self.num) + store_reference(obj, key, cobj._objects) + arg = cobj._get_buffer_value() + if fieldtype._fficompositesize is not None: + from ctypes import memmove + dest = obj._buffer.fieldaddress(self.name) + memmove(dest, arg, fieldtype._fficompositesize) + else: + obj._buffer.__setattr__(self.name, arg) + + + # ________________________________________________________________ def _set_shape(tp, rawfields, is_union=False): @@ -79,17 +120,12 @@ tp._ffiargshape = tp._ffishape = (tp._ffistruct, 1) tp._fficompositesize = tp._ffistruct.size -def struct_getattr(self, name): - if name not in ('_fields_', '_fieldtypes'): - if hasattr(self, '_fieldtypes') and name in self._fieldtypes: - return self._fieldtypes[name] - return _CDataMeta.__getattribute__(self, name) def struct_setattr(self, name, value): if name == '_fields_': if self.__dict__.get('_fields_', None) is not None: raise AttributeError("_fields_ is final") - if self in [v for k, v in value]: + if self in [f[1] for f in value]: raise AttributeError("Structure or union cannot contain itself") names_and_fields( self, @@ -127,14 +163,14 @@ if '_fields_' not in self.__dict__: self._fields_ = [] self._names = [] - self._fieldtypes = {} _set_shape(self, [], self._is_union) - __getattr__ = struct_getattr __setattr__ = struct_setattr def from_address(self, address): instance = StructOrUnion.__new__(self) + if isinstance(address, _rawffi.StructureInstance): + address = address.buffer instance.__dict__['_buffer'] = self._ffistruct.fromaddress(address) return instance @@ -200,40 +236,6 @@ A = _rawffi.Array(fieldtype._ffishape) return A.fromaddress(address, 1) - def __setattr__(self, name, value): - try: - field = self._fieldtypes[name] - except KeyError: - return _CData.__setattr__(self, name, value) - fieldtype = field.ctype - cobj = fieldtype.from_param(value) - if ensure_objects(cobj) is not None: - key = keepalive_key(field.num) - store_reference(self, key, cobj._objects) - arg = cobj._get_buffer_value() - if fieldtype._fficompositesize is not None: - from ctypes import memmove - dest = self._buffer.fieldaddress(name) - memmove(dest, arg, fieldtype._fficompositesize) - else: - self._buffer.__setattr__(name, arg) - - def __getattribute__(self, name): - if name == '_fieldtypes': - return _CData.__getattribute__(self, '_fieldtypes') - try: - field = self._fieldtypes[name] - except KeyError: - return _CData.__getattribute__(self, name) - if field.size >> 16: - # bitfield member, use direct access - return self._buffer.__getattr__(name) - else: - fieldtype = field.ctype - offset = field.num - suba = self._subarray(fieldtype, name) - return fieldtype._CData_output(suba, self, offset) - def _get_buffer_for_param(self): return self diff --git a/lib_pypy/_elementtree.py b/lib_pypy/_elementtree.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_elementtree.py @@ -0,0 +1,6 @@ +# Just use ElementTree. + +from xml.etree import ElementTree + +globals().update(ElementTree.__dict__) +del __all__ diff --git a/lib_pypy/_functools.py b/lib_pypy/_functools.py --- a/lib_pypy/_functools.py +++ b/lib_pypy/_functools.py @@ -14,10 +14,9 @@ raise TypeError("the first argument must be callable") self.func = func self.args = args - self.keywords = keywords + self.keywords = keywords or None def __call__(self, *fargs, **fkeywords): - newkeywords = self.keywords.copy() - newkeywords.update(fkeywords) - return self.func(*(self.args + fargs), **newkeywords) - + if self.keywords is not None: + fkeywords = dict(self.keywords, **fkeywords) + return self.func(*(self.args + fargs), **fkeywords) diff --git a/lib_pypy/_pypy_interact.py b/lib_pypy/_pypy_interact.py --- a/lib_pypy/_pypy_interact.py +++ b/lib_pypy/_pypy_interact.py @@ -56,6 +56,10 @@ prompt = getattr(sys, 'ps1', '>>> ') try: line = raw_input(prompt) + # Can be None if sys.stdin was redefined + encoding = getattr(sys.stdin, 'encoding', None) + if encoding and not isinstance(line, unicode): + line = line.decode(encoding) except EOFError: console.write("\n") break diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -24,6 +24,7 @@ from ctypes import c_void_p, c_int, c_double, c_int64, c_char_p, cdll from ctypes import POINTER, byref, string_at, CFUNCTYPE, cast from ctypes import sizeof, c_ssize_t +from collections import OrderedDict import datetime import sys import time @@ -274,6 +275,28 @@ def unicode_text_factory(x): return unicode(x, 'utf-8') + +class StatementCache(object): + def __init__(self, connection, maxcount): + self.connection = connection + self.maxcount = maxcount + self.cache = OrderedDict() + + def get(self, sql, cursor, row_factory): + try: + stat = self.cache[sql] + except KeyError: + stat = Statement(self.connection, sql) + self.cache[sql] = stat + if len(self.cache) > self.maxcount: + self.cache.popitem(0) + # + if stat.in_use: + stat = Statement(self.connection, sql) + stat.set_row_factory(row_factory) + return stat + + class Connection(object): def __init__(self, database, timeout=5.0, detect_types=0, isolation_level="", check_same_thread=True, factory=None, cached_statements=100): @@ -291,6 +314,7 @@ self.row_factory = None self._isolation_level = isolation_level self.detect_types = detect_types + self.statement_cache = StatementCache(self, cached_statements) self.cursors = [] @@ -399,7 +423,7 @@ cur = Cursor(self) if not isinstance(sql, (str, unicode)): raise Warning("SQL is of wrong type. Must be string or unicode.") - statement = Statement(cur, sql, self.row_factory) + statement = self.statement_cache.get(sql, cur, self.row_factory) return statement def _get_isolation_level(self): @@ -681,6 +705,8 @@ from sqlite3.dump import _iterdump return _iterdump(self) +DML, DQL, DDL = range(3) + class Cursor(object): def __init__(self, con): if not isinstance(con, Connection): @@ -708,12 +734,12 @@ if type(sql) is unicode: sql = sql.encode("utf-8") self._check_closed() - self.statement = Statement(self, sql, self.row_factory) + self.statement = self.connection.statement_cache.get(sql, self, self.row_factory) if self.connection._isolation_level is not None: - if self.statement.kind == "DDL": + if self.statement.kind == DDL: self.connection.commit() - elif self.statement.kind == "DML": + elif self.statement.kind == DML: self.connection._begin() self.statement.set_params(params) @@ -724,19 +750,18 @@ self.statement.reset() raise self.connection._get_exception(ret) - if self.statement.kind == "DQL": - if ret == SQLITE_ROW: - self.statement._build_row_cast_map() - self.statement._readahead() - else: - self.statement.item = None - self.statement.exhausted = True + if self.statement.kind == DQL and ret == SQLITE_ROW: + self.statement._build_row_cast_map() + self.statement._readahead(self) + else: + self.statement.item = None + self.statement.exhausted = True - if self.statement.kind in ("DML", "DDL"): + if self.statement.kind == DML or self.statement.kind == DDL: self.statement.reset() self.rowcount = -1 - if self.statement.kind == "DML": + if self.statement.kind == DML: self.rowcount = sqlite.sqlite3_changes(self.connection.db) return self @@ -747,8 +772,9 @@ if type(sql) is unicode: sql = sql.encode("utf-8") self._check_closed() - self.statement = Statement(self, sql, self.row_factory) - if self.statement.kind == "DML": + self.statement = self.connection.statement_cache.get(sql, self, self.row_factory) + + if self.statement.kind == DML: self.connection._begin() else: raise ProgrammingError, "executemany is only for DML statements" @@ -800,7 +826,7 @@ return self def __iter__(self): - return self.statement + return iter(self.fetchone, None) def _check_reset(self): if self.reset: @@ -817,7 +843,7 @@ return None try: - return self.statement.next() + return self.statement.next(self) except StopIteration: return None @@ -831,7 +857,7 @@ if size is None: size = self.arraysize lst = [] - for row in self.statement: + for row in self: lst.append(row) if len(lst) == size: break @@ -842,7 +868,7 @@ self._check_reset() if self.statement is None: return [] - return list(self.statement) + return list(self) def _getdescription(self): if self._description is None: @@ -872,39 +898,47 @@ lastrowid = property(_getlastrowid) class Statement(object): - def __init__(self, cur, sql, row_factory): + def __init__(self, connection, sql): self.statement = None if not isinstance(sql, str): raise ValueError, "sql must be a string" - self.con = cur.connection - self.cur = weakref.ref(cur) + self.con = connection self.sql = sql # DEBUG ONLY - self.row_factory = row_factory first_word = self._statement_kind = sql.lstrip().split(" ")[0].upper() if first_word in ("INSERT", "UPDATE", "DELETE", "REPLACE"): - self.kind = "DML" + self.kind = DML elif first_word in ("SELECT", "PRAGMA"): - self.kind = "DQL" + self.kind = DQL else: - self.kind = "DDL" + self.kind = DDL self.exhausted = False + self.in_use = False + # + # set by set_row_factory + self.row_factory = None self.statement = c_void_p() next_char = c_char_p() - ret = sqlite.sqlite3_prepare_v2(self.con.db, sql, -1, byref(self.statement), byref(next_char)) + sql_char = c_char_p(sql) + ret = sqlite.sqlite3_prepare_v2(self.con.db, sql_char, -1, byref(self.statement), byref(next_char)) if ret == SQLITE_OK and self.statement.value is None: # an empty statement, we work around that, as it's the least trouble ret = sqlite.sqlite3_prepare_v2(self.con.db, "select 42", -1, byref(self.statement), byref(next_char)) - self.kind = "DQL" + self.kind = DQL if ret != SQLITE_OK: raise self.con._get_exception(ret) self.con._remember_statement(self) if _check_remaining_sql(next_char.value): - raise Warning, "One and only one statement required" + raise Warning, "One and only one statement required: %r" % ( + next_char.value,) + # sql_char should remain alive until here self._build_row_cast_map() + def set_row_factory(self, row_factory): + self.row_factory = row_factory + def _build_row_cast_map(self): self.row_cast_map = [] for i in xrange(sqlite.sqlite3_column_count(self.statement)): @@ -974,6 +1008,7 @@ ret = sqlite.sqlite3_reset(self.statement) if ret != SQLITE_OK: raise self.con._get_exception(ret) + self.mark_dirty() if params is None: if sqlite.sqlite3_bind_parameter_count(self.statement) != 0: @@ -1004,10 +1039,7 @@ raise ProgrammingError("missing parameter '%s'" %param) self.set_param(idx, param) - def __iter__(self): - return self - - def next(self): + def next(self, cursor): self.con._check_closed() self.con._check_thread() if self.exhausted: @@ -1023,10 +1055,10 @@ sqlite.sqlite3_reset(self.statement) raise exc - self._readahead() + self._readahead(cursor) return item - def _readahead(self): + def _readahead(self, cursor): self.column_count = sqlite.sqlite3_column_count(self.statement) row = [] for i in xrange(self.column_count): @@ -1061,23 +1093,30 @@ row = tuple(row) if self.row_factory is not None: - row = self.row_factory(self.cur(), row) + row = self.row_factory(cursor, row) self.item = row def reset(self): self.row_cast_map = None - return sqlite.sqlite3_reset(self.statement) + ret = sqlite.sqlite3_reset(self.statement) + self.in_use = False + self.exhausted = False + return ret def finalize(self): sqlite.sqlite3_finalize(self.statement) self.statement = None + self.in_use = False + + def mark_dirty(self): + self.in_use = True def __del__(self): sqlite.sqlite3_finalize(self.statement) self.statement = None def _get_description(self): - if self.kind == "DML": + if self.kind == DML: return None desc = [] for i in xrange(sqlite.sqlite3_column_count(self.statement)): diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -35,7 +35,7 @@ _DuplicateHandle.restype = ctypes.c_int _WaitForSingleObject = _kernel32.WaitForSingleObject -_WaitForSingleObject.argtypes = [ctypes.c_int, ctypes.c_int] +_WaitForSingleObject.argtypes = [ctypes.c_int, ctypes.c_uint] _WaitForSingleObject.restype = ctypes.c_int _GetExitCodeProcess = _kernel32.GetExitCodeProcess diff --git a/lib_pypy/distributed/test/test_distributed.py b/lib_pypy/distributed/test/test_distributed.py --- a/lib_pypy/distributed/test/test_distributed.py +++ b/lib_pypy/distributed/test/test_distributed.py @@ -9,7 +9,7 @@ class AppTestDistributed(object): def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless",)}) + "usemodules":("_continuation",)}) def test_init(self): import distributed @@ -91,10 +91,8 @@ class AppTestDistributedTasklets(object): spaceconfig = {"objspace.std.withtproxy": True, - "objspace.usemodules._stackless": True} + "objspace.usemodules._continuation": True} def setup_class(cls): - #cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - # "usemodules":("_stackless",)}) cls.w_test_env = cls.space.appexec([], """(): from distributed import test_env return test_env diff --git a/lib_pypy/distributed/test/test_greensock.py b/lib_pypy/distributed/test/test_greensock.py --- a/lib_pypy/distributed/test/test_greensock.py +++ b/lib_pypy/distributed/test/test_greensock.py @@ -10,7 +10,7 @@ if not option.runappdirect: py.test.skip("Cannot run this on top of py.py because of PopenGateway") cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless",)}) + "usemodules":("_continuation",)}) cls.w_remote_side_code = cls.space.appexec([], """(): import sys sys.path.insert(0, '%s') diff --git a/lib_pypy/distributed/test/test_socklayer.py b/lib_pypy/distributed/test/test_socklayer.py --- a/lib_pypy/distributed/test/test_socklayer.py +++ b/lib_pypy/distributed/test/test_socklayer.py @@ -9,7 +9,8 @@ class AppTestSocklayer: def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless","_socket", "select")}) + "usemodules":("_continuation", + "_socket", "select")}) def test_socklayer(self): class X(object): diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -1,1 +1,144 @@ -from _stackless import greenlet +import _continuation, sys + + +# ____________________________________________________________ +# Exceptions + +class GreenletExit(Exception): + """This special exception does not propagate to the parent greenlet; it +can be used to kill a single greenlet.""" + +error = _continuation.error + +# ____________________________________________________________ +# Helper function + +def getcurrent(): + "Returns the current greenlet (i.e. the one which called this function)." + try: + return _tls.current + except AttributeError: + # first call in this thread: current == main + _green_create_main() + return _tls.current + +# ____________________________________________________________ +# The 'greenlet' class + +_continulet = _continuation.continulet + +class greenlet(_continulet): + getcurrent = staticmethod(getcurrent) + error = error + GreenletExit = GreenletExit + __main = False + __started = False + + def __new__(cls, *args, **kwds): + self = _continulet.__new__(cls) + self.parent = getcurrent() + return self + + def __init__(self, run=None, parent=None): + if run is not None: + self.run = run + if parent is not None: + self.parent = parent + + def switch(self, *args): + "Switch execution to this greenlet, optionally passing the values " + "given as argument(s). Returns the value passed when switching back." + return self.__switch('switch', args) + + def throw(self, typ=GreenletExit, val=None, tb=None): + "raise exception in greenlet, return value passed when switching back" + return self.__switch('throw', typ, val, tb) + + def __switch(target, methodname, *args): + current = getcurrent() + # + while not target: + if not target.__started: + if methodname == 'switch': + greenlet_func = _greenlet_start + else: + greenlet_func = _greenlet_throw + _continulet.__init__(target, greenlet_func, *args) + methodname = 'switch' + args = () + target.__started = True + break + # already done, go to the parent instead + # (NB. infinite loop possible, but unlikely, unless you mess + # up the 'parent' explicitly. Good enough, because a Ctrl-C + # will show that the program is caught in this loop here.) + target = target.parent + # + try: + unbound_method = getattr(_continulet, methodname) + args = unbound_method(current, *args, to=target) + except GreenletExit, e: + args = (e,) + finally: + _tls.current = current + # + if len(args) == 1: + return args[0] + else: + return args + + def __nonzero__(self): + return self.__main or _continulet.is_pending(self) + + @property + def dead(self): + return self.__started and not self + + @property + def gr_frame(self): + # xxx this doesn't work when called on either the current or + # the main greenlet of another thread + if self is getcurrent(): + return None + if self.__main: + self = getcurrent() + f = _continulet.__reduce__(self)[2][0] + if not f: + return None + return f.f_back.f_back.f_back # go past start(), __switch(), switch() + +# ____________________________________________________________ +# Internal stuff + +try: + from thread import _local +except ImportError: + class _local(object): # assume no threads + pass + +_tls = _local() + +def _green_create_main(): + # create the main greenlet for this thread + _tls.current = None + gmain = greenlet.__new__(greenlet) + gmain._greenlet__main = True + gmain._greenlet__started = True + assert gmain.parent is None + _tls.main = gmain + _tls.current = gmain + +def _greenlet_start(greenlet, args): + _tls.current = greenlet + try: + res = greenlet.run(*args) + finally: + _continuation.permute(greenlet, greenlet.parent) + return (res,) + +def _greenlet_throw(greenlet, exc, value, tb): + _tls.current = greenlet + try: + raise exc, value, tb + finally: + _continuation.permute(greenlet, greenlet.parent) diff --git a/lib_pypy/pypy_test/test_coroutine.py b/lib_pypy/pypy_test/test_coroutine.py --- a/lib_pypy/pypy_test/test_coroutine.py +++ b/lib_pypy/pypy_test/test_coroutine.py @@ -2,7 +2,7 @@ from py.test import skip, raises try: - from lib_pypy.stackless import coroutine, CoroutineExit + from stackless import coroutine, CoroutineExit except ImportError, e: skip('cannot import stackless: %s' % (e,)) @@ -20,10 +20,6 @@ assert not co.is_zombie def test_is_zombie_del_without_frame(self): - try: - import _stackless # are we on pypy with a stackless build? - except ImportError: - skip("only works on pypy-c-stackless") import gc res = [] class MyCoroutine(coroutine): @@ -45,10 +41,6 @@ assert res[0], "is_zombie was False in __del__" def test_is_zombie_del_with_frame(self): - try: - import _stackless # are we on pypy with a stackless build? - except ImportError: - skip("only works on pypy-c-stackless") import gc res = [] class MyCoroutine(coroutine): diff --git a/lib_pypy/pypy_test/test_stackless_pickling.py b/lib_pypy/pypy_test/test_stackless_pickling.py --- a/lib_pypy/pypy_test/test_stackless_pickling.py +++ b/lib_pypy/pypy_test/test_stackless_pickling.py @@ -1,7 +1,3 @@ -""" -this test should probably not run from CPython or py.py. -I'm not entirely sure, how to do that. -""" from __future__ import absolute_import from py.test import skip try: @@ -16,11 +12,15 @@ class Test_StacklessPickling: + def test_pickle_main_coroutine(self): + import stackless, pickle + s = pickle.dumps(stackless.coroutine.getcurrent()) + print s + c = pickle.loads(s) + assert c is stackless.coroutine.getcurrent() + def test_basic_tasklet_pickling(self): - try: - import stackless - except ImportError: - skip("can't load stackless and don't know why!!!") + import stackless from stackless import run, schedule, tasklet import pickle diff --git a/lib_pypy/pyrepl/completing_reader.py b/lib_pypy/pyrepl/completing_reader.py --- a/lib_pypy/pyrepl/completing_reader.py +++ b/lib_pypy/pyrepl/completing_reader.py @@ -229,7 +229,8 @@ def after_command(self, cmd): super(CompletingReader, self).after_command(cmd) - if not isinstance(cmd, complete) and not isinstance(cmd, self_insert): + if not isinstance(cmd, self.commands['complete']) \ + and not isinstance(cmd, self.commands['self_insert']): self.cmpltn_reset() def calc_screen(self): diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -401,13 +401,19 @@ return "(arg: %s) "%self.arg if "\n" in self.buffer: if lineno == 0: - return self._ps2 + res = self.ps2 elif lineno == self.buffer.count("\n"): - return self._ps4 + res = self.ps4 else: - return self._ps3 + res = self.ps3 else: - return self._ps1 + res = self.ps1 + # Lazily call str() on self.psN, and cache the results using as key + # the object on which str() was called. This ensures that even if the + # same object is used e.g. for ps1 and ps2, str() is called only once. + if res not in self._pscache: + self._pscache[res] = str(res) + return self._pscache[res] def push_input_trans(self, itrans): self.input_trans_stack.append(self.input_trans) @@ -473,8 +479,7 @@ self.pos = 0 self.dirty = 1 self.last_command = None - self._ps1, self._ps2, self._ps3, self._ps4 = \ - map(str, [self.ps1, self.ps2, self.ps3, self.ps4]) + self._pscache = {} except: self.restore() raise @@ -571,7 +576,7 @@ self.console.push_char(char) self.handle1(0) - def readline(self): + def readline(self, returns_unicode=False): """Read a line. The implementation of this method also shows how to drive Reader if you want more control over the event loop.""" @@ -580,6 +585,8 @@ self.refresh() while not self.finished: self.handle1() + if returns_unicode: + return self.get_unicode() return self.get_buffer() finally: self.restore() diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -33,7 +33,7 @@ from pyrepl.unix_console import UnixConsole, _error -ENCODING = 'latin1' # XXX hard-coded +ENCODING = sys.getfilesystemencoding() or 'latin1' # XXX review __all__ = ['add_history', 'clear_history', @@ -198,7 +198,7 @@ reader.ps1 = prompt return reader.readline() - def multiline_input(self, more_lines, ps1, ps2): + def multiline_input(self, more_lines, ps1, ps2, returns_unicode=False): """Read an input on possibly multiple lines, asking for more lines as long as 'more_lines(unicodetext)' returns an object whose boolean value is true. @@ -209,7 +209,7 @@ reader.more_lines = more_lines reader.ps1 = reader.ps2 = ps1 reader.ps3 = reader.ps4 = ps2 - return reader.readline() + return reader.readline(returns_unicode=returns_unicode) finally: reader.more_lines = saved @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/pyrepl/simple_interact.py b/lib_pypy/pyrepl/simple_interact.py --- a/lib_pypy/pyrepl/simple_interact.py +++ b/lib_pypy/pyrepl/simple_interact.py @@ -54,7 +54,8 @@ ps1 = getattr(sys, 'ps1', '>>> ') ps2 = getattr(sys, 'ps2', '... ') try: - statement = multiline_input(more_lines, ps1, ps2) + statement = multiline_input(more_lines, ps1, ps2, + returns_unicode=True) except EOFError: break more = console.push(statement) diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -384,15 +384,19 @@ self.__maybe_write_code(self._smkx) - self.old_sigwinch = signal.signal( - signal.SIGWINCH, self.__sigwinch) + try: + self.old_sigwinch = signal.signal( + signal.SIGWINCH, self.__sigwinch) + except ValueError: + pass def restore(self): self.__maybe_write_code(self._rmkx) self.flushoutput() tcsetattr(self.input_fd, termios.TCSADRAIN, self.__svtermstate) - signal.signal(signal.SIGWINCH, self.old_sigwinch) + if hasattr(self, 'old_sigwinch'): + signal.signal(signal.SIGWINCH, self.old_sigwinch) def __sigwinch(self, signum, frame): self.height, self.width = self.getheightwidth() diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/lib_pypy/stackless.py b/lib_pypy/stackless.py --- a/lib_pypy/stackless.py +++ b/lib_pypy/stackless.py @@ -4,121 +4,110 @@ Please refer to their documentation. """ -DEBUG = True -def dprint(*args): - for arg in args: - print arg, - print +import _continuation -import traceback -import sys +class TaskletExit(Exception): + pass + +CoroutineExit = TaskletExit + + +def _coroutine_getcurrent(): + "Returns the current coroutine (i.e. the one which called this function)." + try: + return _tls.current_coroutine + except AttributeError: + # first call in this thread: current == main + return _coroutine_getmain() + +def _coroutine_getmain(): + try: + return _tls.main_coroutine + except AttributeError: + # create the main coroutine for this thread + continulet = _continuation.continulet + main = coroutine() + main._frame = continulet.__new__(continulet) + main._is_started = -1 + _tls.current_coroutine = _tls.main_coroutine = main + return _tls.main_coroutine + + +class coroutine(object): + _is_started = 0 # 0=no, 1=yes, -1=main + + def __init__(self): + self._frame = None + + def bind(self, func, *argl, **argd): + """coro.bind(f, *argl, **argd) -> None. + binds function f to coro. f will be called with + arguments *argl, **argd + """ + if self.is_alive: + raise ValueError("cannot bind a bound coroutine") + def run(c): + _tls.current_coroutine = self + self._is_started = 1 + return func(*argl, **argd) + self._is_started = 0 + self._frame = _continuation.continulet(run) + + def switch(self): + """coro.switch() -> returnvalue + switches to coroutine coro. If the bound function + f finishes, the returnvalue is that of f, otherwise + None is returned + """ + current = _coroutine_getcurrent() + try: + current._frame.switch(to=self._frame) + finally: + _tls.current_coroutine = current + + def kill(self): + """coro.kill() : kill coroutine coro""" + current = _coroutine_getcurrent() + try: + current._frame.throw(CoroutineExit, to=self._frame) + finally: + _tls.current_coroutine = current + + @property + def is_alive(self): + return self._is_started < 0 or ( + self._frame is not None and self._frame.is_pending()) + + @property + def is_zombie(self): + return self._is_started > 0 and not self._frame.is_pending() + + getcurrent = staticmethod(_coroutine_getcurrent) + + def __reduce__(self): + if self._is_started < 0: + return _coroutine_getmain, () + else: + return type(self), (), self.__dict__ + + try: - # If _stackless can be imported then TaskletExit and CoroutineExit are - # automatically added to the builtins. - from _stackless import coroutine, greenlet -except ImportError: # we are running from CPython - from greenlet import greenlet, GreenletExit - TaskletExit = CoroutineExit = GreenletExit - del GreenletExit - try: - from functools import partial - except ImportError: # we are not running python 2.5 - class partial(object): - # just enough of 'partial' to be usefull - def __init__(self, func, *argl, **argd): - self.func = func - self.argl = argl - self.argd = argd + from thread import _local +except ImportError: + class _local(object): # assume no threads + pass - def __call__(self): - return self.func(*self.argl, **self.argd) +_tls = _local() - class GWrap(greenlet): - """This is just a wrapper around greenlets to allow - to stick additional attributes to a greenlet. - To be more concrete, we need a backreference to - the coroutine object""" - class MWrap(object): - def __init__(self,something): - self.something = something +# ____________________________________________________________ - def __getattr__(self, attr): - return getattr(self.something, attr) - - class coroutine(object): - "we can't have greenlet as a base, because greenlets can't be rebound" - - def __init__(self): - self._frame = None - self.is_zombie = False - - def __getattr__(self, attr): - return getattr(self._frame, attr) - - def __del__(self): - self.is_zombie = True - del self._frame - self._frame = None - - def bind(self, func, *argl, **argd): - """coro.bind(f, *argl, **argd) -> None. - binds function f to coro. f will be called with - arguments *argl, **argd - """ - if self._frame is None or self._frame.dead: - self._frame = frame = GWrap() - frame.coro = self - if hasattr(self._frame, 'run') and self._frame.run: - raise ValueError("cannot bind a bound coroutine") - self._frame.run = partial(func, *argl, **argd) - - def switch(self): - """coro.switch() -> returnvalue - switches to coroutine coro. If the bound function - f finishes, the returnvalue is that of f, otherwise - None is returned - """ - try: - return greenlet.switch(self._frame) - except TypeError, exp: # self._frame is the main coroutine - return greenlet.switch(self._frame.something) - - def kill(self): - """coro.kill() : kill coroutine coro""" - self._frame.throw() - - def _is_alive(self): - if self._frame is None: - return False - return not self._frame.dead - is_alive = property(_is_alive) - del _is_alive - - def getcurrent(): - """coroutine.getcurrent() -> the currently running coroutine""" - try: - return greenlet.getcurrent().coro - except AttributeError: - return _maincoro - getcurrent = staticmethod(getcurrent) - - def __reduce__(self): - raise TypeError, 'pickling is not possible based upon greenlets' - - _maincoro = coroutine() - maingreenlet = greenlet.getcurrent() - _maincoro._frame = frame = MWrap(maingreenlet) - frame.coro = _maincoro - del frame - del maingreenlet from collections import deque import operator -__all__ = 'run getcurrent getmain schedule tasklet channel coroutine \ - greenlet'.split() +__all__ = 'run getcurrent getmain schedule tasklet channel coroutine'.split() _global_task_id = 0 _squeue = None @@ -131,7 +120,8 @@ def _scheduler_remove(value): try: del _squeue[operator.indexOf(_squeue, value)] - except ValueError:pass + except ValueError: + pass def _scheduler_append(value, normal=True): if normal: @@ -157,10 +147,7 @@ _last_task = next assert not next.blocked if next is not current: - try: - next.switch() - except CoroutineExit: - raise TaskletExit + next.switch() return current def set_schedule_callback(callback): @@ -184,34 +171,6 @@ raise self.type, self.value, self.traceback # -# helpers for pickling -# - -_stackless_primitive_registry = {} - -def register_stackless_primitive(thang, retval_expr='None'): - import types - func = thang - if isinstance(thang, types.MethodType): - func = thang.im_func - code = func.func_code - _stackless_primitive_registry[code] = retval_expr - # It is not too nice to attach info via the code object, but - # I can't think of a better solution without a real transform. - -def rewrite_stackless_primitive(coro_state, alive, tempval): - flags, frame, thunk, parent = coro_state - while frame is not None: - retval_expr = _stackless_primitive_registry.get(frame.f_code) - if retval_expr: - # this tasklet needs to stop pickling here and return its value. - tempval = eval(retval_expr, globals(), frame.f_locals) - coro_state = flags, frame, thunk, parent - break - frame = frame.f_back - return coro_state, alive, tempval - -# # class channel(object): @@ -363,8 +322,6 @@ """ return self._channel_action(None, -1) - register_stackless_primitive(receive, retval_expr='receiver.tempval') - def send_exception(self, exp_type, msg): self.send(bomb(exp_type, exp_type(msg))) @@ -381,9 +338,8 @@ the runnables list. """ return self._channel_action(msg, 1) - - register_stackless_primitive(send) - + + class tasklet(coroutine): """ A tasklet object represents a tiny task in a Python thread. @@ -455,6 +411,7 @@ def _func(): try: try: + coroutine.switch(back) func(*argl, **argd) except TaskletExit: pass @@ -464,6 +421,8 @@ self.func = None coroutine.bind(self, _func) + back = _coroutine_getcurrent() + coroutine.switch(self) self.alive = True _scheduler_append(self) return self @@ -486,39 +445,6 @@ raise RuntimeError, "The current tasklet cannot be removed." # not sure if I will revive this " Use t=tasklet().capture()" _scheduler_remove(self) - - def __reduce__(self): - one, two, coro_state = coroutine.__reduce__(self) - assert one is coroutine - assert two == () - # we want to get rid of the parent thing. - # for now, we just drop it - a, frame, c, d = coro_state - - # Removing all frames related to stackless.py. - # They point to stuff we don't want to be pickled. - - pickleframe = frame - while frame is not None: - if frame.f_code == schedule.func_code: - # Removing everything including and after the - # call to stackless.schedule() - pickleframe = frame.f_back - break - frame = frame.f_back - if d: - assert isinstance(d, coroutine) - coro_state = a, pickleframe, c, None - coro_state, alive, tempval = rewrite_stackless_primitive(coro_state, self.alive, self.tempval) - inst_dict = self.__dict__.copy() - inst_dict.pop('tempval', None) - return self.__class__, (), (coro_state, alive, tempval, inst_dict) - - def __setstate__(self, (coro_state, alive, tempval, inst_dict)): - coroutine.__setstate__(self, coro_state) - self.__dict__.update(inst_dict) - self.alive = alive - self.tempval = tempval def getmain(): """ @@ -607,30 +533,7 @@ global _last_task _global_task_id = 0 _main_tasklet = coroutine.getcurrent() - try: - _main_tasklet.__class__ = tasklet - except TypeError: # we are running pypy-c - class TaskletProxy(object): - """TaskletProxy is needed to give the _main_coroutine tasklet behaviour""" - def __init__(self, coro): - self._coro = coro - - def __getattr__(self,attr): - return getattr(self._coro,attr) - - def __str__(self): - return '' % (self._task_id, self.is_alive) - - def __reduce__(self): - return getmain, () - - __repr__ = __str__ - - - global _main_coroutine - _main_coroutine = _main_tasklet - _main_tasklet = TaskletProxy(_main_tasklet) - assert _main_tasklet.is_alive and not _main_tasklet.is_zombie + _main_tasklet.__class__ = tasklet # XXX HAAAAAAAAAAAAAAAAAAAAACK _last_task = _main_tasklet tasklet._init.im_func(_main_tasklet, label='main') _squeue = deque() diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -139,7 +139,7 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, end + return start, len(self) def getblockend(self, lineno): # XXX diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -149,7 +149,7 @@ desc = olddesc.bind_self(classdef) args = self.bookkeeper.build_args("simple_call", args_s[:]) desc.consider_call_site(self.bookkeeper, desc.getcallfamily(), [desc], - args, annmodel.s_ImpossibleValue) + args, annmodel.s_ImpossibleValue, None) result = [] def schedule(graph, inputcells): result.append((graph, inputcells)) diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -209,8 +209,8 @@ self.consider_call_site(call_op) for pbc, args_s in self.emulated_pbc_calls.itervalues(): - self.consider_call_site_for_pbc(pbc, 'simple_call', - args_s, s_ImpossibleValue) + self.consider_call_site_for_pbc(pbc, 'simple_call', + args_s, s_ImpossibleValue, None) self.emulated_pbc_calls = {} finally: self.leave() @@ -257,18 +257,18 @@ args_s = [lltype_to_annotation(adtmeth.ll_ptrtype)] + args_s if isinstance(s_callable, SomePBC): s_result = binding(call_op.result, s_ImpossibleValue) - self.consider_call_site_for_pbc(s_callable, - call_op.opname, - args_s, s_result) + self.consider_call_site_for_pbc(s_callable, call_op.opname, args_s, + s_result, call_op) - def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result): + def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result, + call_op): descs = list(s_callable.descriptions) if not descs: return family = descs[0].getcallfamily() args = self.build_args(opname, args_s) s_callable.getKind().consider_call_site(self, family, descs, args, - s_result) + s_result, call_op) def getuniqueclassdef(self, cls): """Get the ClassDef associated with the given user cls. @@ -656,6 +656,7 @@ whence = None else: whence = emulated # callback case + op = None s_previous_result = s_ImpossibleValue def schedule(graph, inputcells): @@ -663,7 +664,7 @@ results = [] for desc in descs: - results.append(desc.pycall(schedule, args, s_previous_result)) + results.append(desc.pycall(schedule, args, s_previous_result, op)) s_result = unionof(*results) return s_result diff --git a/pypy/annotation/builtin.py b/pypy/annotation/builtin.py --- a/pypy/annotation/builtin.py +++ b/pypy/annotation/builtin.py @@ -308,9 +308,6 @@ clsdef = clsdef.commonbase(cdef) return SomeInstance(clsdef) -def robjmodel_we_are_translated(): - return immutablevalue(True) - def robjmodel_r_dict(s_eqfn, s_hashfn, s_force_non_null=None): if s_force_non_null is None: force_non_null = False @@ -376,8 +373,6 @@ BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.intmask] = rarith_intmask BUILTIN_ANALYZERS[pypy.rlib.objectmodel.instantiate] = robjmodel_instantiate -BUILTIN_ANALYZERS[pypy.rlib.objectmodel.we_are_translated] = ( - robjmodel_we_are_translated) BUILTIN_ANALYZERS[pypy.rlib.objectmodel.r_dict] = robjmodel_r_dict BUILTIN_ANALYZERS[pypy.rlib.objectmodel.hlinvoke] = robjmodel_hlinvoke BUILTIN_ANALYZERS[pypy.rlib.objectmodel.keepalive_until_here] = robjmodel_keepalive_until_here @@ -416,7 +411,8 @@ from pypy.annotation.model import SomePtr from pypy.rpython.lltypesystem import lltype -def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None): +def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None, + s_add_memory_pressure=None): assert (s_n is None or s_n.knowntype == int or issubclass(s_n.knowntype, pypy.rlib.rarithmetic.base_int)) assert s_T.is_constant() @@ -432,6 +428,8 @@ else: assert s_flavor.is_constant() assert s_track_allocation is None or s_track_allocation.is_constant() + assert (s_add_memory_pressure is None or + s_add_memory_pressure.is_constant()) # not sure how to call malloc() for the example 'p' in the # presence of s_extraargs r = SomePtr(lltype.Ptr(s_T.const)) diff --git a/pypy/annotation/classdef.py b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -255,7 +255,11 @@ raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) return inputcells - def specialize(self, inputcells): + def specialize(self, inputcells, op=None): + if (op is None and + getattr(self.bookkeeper, "position_key", None) is not None): + _, block, i = self.bookkeeper.position_key + op = block.operations[i] if self.specializer is None: # get the specializer based on the tag of the 'pyobj' # (if any), according to the current policy @@ -269,11 +273,14 @@ enforceargs = Sig(*enforceargs) self.pyobj._annenforceargs_ = enforceargs enforceargs(self, inputcells) # can modify inputcells in-place - return self.specializer(self, inputcells) + if getattr(self.pyobj, '_annspecialcase_', '').endswith("call_location"): + return self.specializer(self, inputcells, op) + else: + return self.specializer(self, inputcells) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): inputcells = self.parse_arguments(args) - result = self.specialize(inputcells) + result = self.specialize(inputcells, op) if isinstance(result, FunctionGraph): graph = result # common case # if that graph has a different signature, we need to re-parse @@ -296,17 +303,17 @@ None, # selfclassdef name) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args) - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) - def variant_for_call_site(bookkeeper, family, descs, args): + def variant_for_call_site(bookkeeper, family, descs, args, op): shape = rawshape(args) bookkeeper.enter(None) try: - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) finally: bookkeeper.leave() index = family.calltable_lookup_row(shape, row) @@ -316,7 +323,7 @@ def rowkey(self): return self - def row_to_consider(descs, args): + def row_to_consider(descs, args, op): # see comments in CallFamily from pypy.annotation.model import s_ImpossibleValue row = {} @@ -324,7 +331,7 @@ def enlist(graph, ignore): row[desc.rowkey()] = graph return s_ImpossibleValue # meaningless - desc.pycall(enlist, args, s_ImpossibleValue) + desc.pycall(enlist, args, s_ImpossibleValue, op) return row row_to_consider = staticmethod(row_to_consider) @@ -399,9 +406,7 @@ if b1 is object: continue if b1.__dict__.get('_mixin_', False): - assert b1.__bases__ == () or b1.__bases__ == (object,), ( - "mixin class %r should have no base" % (b1,)) - self.add_sources_for_class(b1, mixin=True) + self.add_mixin(b1) else: assert base is object, ("multiple inheritance only supported " "with _mixin_: %r" % (cls,)) @@ -469,6 +474,15 @@ return self.classdict[name] = Constant(value) + def add_mixin(self, base): + for subbase in base.__bases__: + if subbase is object: + continue + assert subbase.__dict__.get("_mixin_", False), ("Mixin class %r has non" + "mixin base class %r" % (base, subbase)) + self.add_mixin(subbase) + self.add_sources_for_class(base, mixin=True) + def add_sources_for_class(self, cls, mixin=False): for name, value in cls.__dict__.items(): self.add_source_attribute(name, value, mixin) @@ -514,7 +528,7 @@ "specialization" % (self.name,)) return self.getclassdef(None) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance, SomeImpossibleValue if self.specialize: if self.specialize == 'specialize:ctr_location': @@ -657,7 +671,7 @@ cdesc = cdesc.basedesc return s_result # common case - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): from pypy.annotation.model import SomeInstance, SomePBC, s_None if len(descs) == 1: # call to a single class, look at the result annotation @@ -702,7 +716,7 @@ initdescs[0].mergecallfamilies(*initdescs[1:]) initfamily = initdescs[0].getcallfamily() MethodDesc.consider_call_site(bookkeeper, initfamily, initdescs, - args, s_None) + args, s_None, op) consider_call_site = staticmethod(consider_call_site) def getallbases(self): @@ -775,13 +789,13 @@ def getuniquegraph(self): return self.funcdesc.getuniquegraph() - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance if self.selfclassdef is None: raise Exception("calling %r" % (self,)) s_instance = SomeInstance(self.selfclassdef, flags = self.flags) args = args.prepend(s_instance) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) def bind_under(self, classdef, name): self.bookkeeper.warning("rebinding an already bound %r" % (self,)) @@ -794,10 +808,10 @@ self.name, flags) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [methoddesc.funcdesc for methoddesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) @@ -949,16 +963,16 @@ return '' % (self.funcdesc, self.frozendesc) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomePBC s_self = SomePBC([self.frozendesc]) args = args.prepend(s_self) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [mofdesc.funcdesc for mofdesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,7 +1,7 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype -from pypy.annotation.specialize import memo +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var +from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. from pypy.annotation import model as annmodel @@ -73,8 +73,10 @@ default_specialize = staticmethod(default) specialize__memo = staticmethod(memo) specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) + specialize__arg_or_var = staticmethod(specialize_arg_or_var) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) + specialize__call_location = staticmethod(specialize_call_location) def specialize__ll(pol, *args): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -353,6 +353,16 @@ key = tuple(key) return maybe_star_args(funcdesc, key, args_s) +def specialize_arg_or_var(funcdesc, args_s, *argindices): + for argno in argindices: + if not args_s[argno].is_constant(): + break + else: + # all constant + return specialize_argvalue(funcdesc, args_s, *argindices) + # some not constant + return maybe_star_args(funcdesc, None, args_s) + def specialize_argtype(funcdesc, args_s, *argindices): key = tuple([args_s[i].knowntype for i in argindices]) for cls in key: @@ -370,3 +380,7 @@ else: key = s.listdef.listitem.s_value.knowntype return maybe_star_args(funcdesc, key, args_s) + +def specialize_call_location(funcdesc, args_s, op): + assert op is not None + return maybe_star_args(funcdesc, op, args_s) diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -1099,8 +1099,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1]) - graph2 = allocdesc.specialize([s_C2]) + graph1 = allocdesc.specialize([s_C1], None) + graph2 = allocdesc.specialize([s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1135,8 +1135,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1, s_C2]) - graph2 = allocdesc.specialize([s_C2, s_C2]) + graph1 = allocdesc.specialize([s_C1, s_C2], None) + graph2 = allocdesc.specialize([s_C2, s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1194,6 +1194,33 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_arg_or_var(self): + def f(a): + return 1 + f._annspecialcase_ = 'specialize:arg_or_var(0)' + + def fn(a): + return f(3) + f(a) + + a = self.RPythonAnnotator() + a.build_types(fn, [int]) + executedesc = a.bookkeeper.getdesc(f) + assert sorted(executedesc._cache.keys()) == [None, (3,)] + # we got two different special + + def test_specialize_call_location(self): + def g(a): + return a + g._annspecialcase_ = "specialize:call_location" + def f(x): + return g(x) + f._annspecialcase_ = "specialize:argtype(0)" + def h(y): + w = f(y) + return int(f(str(y))) + w + a = self.RPythonAnnotator() + assert a.build_types(h, [int]) == annmodel.SomeInteger() + def test_assert_list_doesnt_lose_info(self): class T(object): pass @@ -3177,6 +3204,8 @@ s = a.build_types(f, []) assert isinstance(s, annmodel.SomeList) assert not s.listdef.listitem.resized + assert not s.listdef.listitem.immutable + assert s.listdef.listitem.mutated def test_delslice(self): def f(): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -352,6 +352,7 @@ check_negative_slice(s_start, s_stop) if not isinstance(s_iterable, SomeList): raise Exception("list[start:stop] = x: x must be a list") + lst.listdef.mutate() lst.listdef.agree(s_iterable.listdef) # note that setslice is not allowed to resize a list in RPython diff --git a/pypy/config/config.py b/pypy/config/config.py --- a/pypy/config/config.py +++ b/pypy/config/config.py @@ -81,6 +81,12 @@ (self.__class__, name)) return self._cfgimpl_values[name] + def __dir__(self): + from_type = dir(type(self)) + from_dict = list(self.__dict__) + extras = list(self._cfgimpl_values) + return sorted(set(extras + from_type + from_dict)) + def __delattr__(self, name): # XXX if you use delattr you are responsible for all bad things # happening diff --git a/pypy/config/makerestdoc.py b/pypy/config/makerestdoc.py --- a/pypy/config/makerestdoc.py +++ b/pypy/config/makerestdoc.py @@ -134,7 +134,7 @@ for child in self._children: subpath = fullpath + "." + child._name toctree.append(subpath) - content.add(Directive("toctree", *toctree, maxdepth=4)) + content.add(Directive("toctree", *toctree, **{'maxdepth': 4})) content.join( ListItem(Strong("name:"), self._name), ListItem(Strong("description:"), self.doc)) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -27,13 +27,14 @@ # --allworkingmodules working_modules = default_modules.copy() working_modules.update(dict.fromkeys( - ["_socket", "unicodedata", "mmap", "fcntl", "_locale", + ["_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "rctime" , "select", "zipimport", "_lsprof", "crypt", "signal", "_rawffi", "termios", "zlib", "bz2", "struct", "_hashlib", "_md5", "_sha", "_minimal_curses", "cStringIO", "thread", "itertools", "pyexpat", "_ssl", "cpyext", "array", "_bisect", "binascii", "_multiprocessing", '_warnings', - "_collections", "_multibytecodec", "micronumpy", "_ffi"] + "_collections", "_multibytecodec", "micronumpy", "_ffi", + "_continuation"] )) translation_modules = default_modules.copy() @@ -57,6 +58,7 @@ # unix only modules del working_modules["crypt"] del working_modules["fcntl"] + del working_modules["pwd"] del working_modules["termios"] del working_modules["_minimal_curses"] @@ -70,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -89,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -99,6 +102,7 @@ "_ssl" : ["pypy.module._ssl.interp_ssl"], "_hashlib" : ["pypy.module._ssl.interp_ssl"], "_minimal_curses": ["pypy.module._minimal_curses.fficurses"], + "_continuation": ["pypy.rlib.rstacklet"], } def get_module_validator(modname): @@ -109,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + @@ -124,7 +128,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), @@ -327,6 +331,9 @@ BoolOption("mutable_builtintypes", "Allow the changing of builtin types", default=False, requires=[("objspace.std.builtinshortcut", True)]), + BoolOption("withidentitydict", + "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", + default=True), ]), ]) diff --git a/pypy/config/support.py b/pypy/config/support.py --- a/pypy/config/support.py +++ b/pypy/config/support.py @@ -9,7 +9,7 @@ return 1 # don't override MAKEFLAGS. This will call 'make' without any '-j' option if sys.platform == 'darwin': return darwin_get_cpu_count() - elif sys.platform != 'linux2': + elif not sys.platform.startswith('linux'): return 1 # implement me try: if isinstance(filename_or_file, str): diff --git a/pypy/config/test/test_config.py b/pypy/config/test/test_config.py --- a/pypy/config/test/test_config.py +++ b/pypy/config/test/test_config.py @@ -1,5 +1,5 @@ from pypy.config.config import * -import py +import py, sys def make_description(): gcoption = ChoiceOption('name', 'GC name', ['ref', 'framework'], 'ref') @@ -63,6 +63,22 @@ py.test.raises(ConfigError, 'config.gc.name = "ref"') config.gc.name = "framework" +def test___dir__(): + descr = make_description() + config = Config(descr, bool=False) + attrs = dir(config) + assert '__repr__' in attrs # from the type + assert '_cfgimpl_values' in attrs # from self + if sys.version_info >= (2, 6): + assert 'gc' in attrs # custom attribute + assert 'objspace' in attrs # custom attribute + # + attrs = dir(config.gc) + if sys.version_info >= (2, 6): + assert 'name' in attrs + assert 'dummy' in attrs + assert 'float' in attrs + def test_arbitrary_option(): descr = OptionDescription("top", "", [ ArbitraryOption("a", "no help", default=None) @@ -265,11 +281,11 @@ def test_underscore_in_option_name(): descr = OptionDescription("opt", "", [ - BoolOption("_stackless", "", default=False), + BoolOption("_foobar", "", default=False), ]) config = Config(descr) parser = to_optparse(config) - assert parser.has_option("--_stackless") + assert parser.has_option("--_foobar") def test_none(): dummy1 = BoolOption('dummy1', 'doc dummy', default=False, cmdline=None) diff --git a/pypy/config/test/test_support.py b/pypy/config/test/test_support.py --- a/pypy/config/test/test_support.py +++ b/pypy/config/test/test_support.py @@ -40,7 +40,7 @@ return self._value def test_cpuinfo_linux(): - if sys.platform != 'linux2': + if not sys.platform.startswith('linux'): py.test.skip("linux only") saved = os.environ try: diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -13,6 +13,10 @@ DEFL_LOW_INLINE_THRESHOLD = DEFL_INLINE_THRESHOLD / 2.0 DEFL_GC = "minimark" +if sys.platform.startswith("linux"): + DEFL_ROOTFINDER_WITHJIT = "asmgcc" +else: + DEFL_ROOTFINDER_WITHJIT = "shadowstack" IS_64_BITS = sys.maxint > 2147483647 @@ -24,10 +28,9 @@ translation_optiondescription = OptionDescription( "translation", "Translation Options", [ - BoolOption("stackless", "enable stackless features during compilation", - default=False, cmdline="--stackless", - requires=[("translation.type_system", "lltype"), - ("translation.gcremovetypeptr", False)]), # XXX? + BoolOption("continuation", "enable single-shot continuations", + default=False, cmdline="--continuation", + requires=[("translation.type_system", "lltype")]), ChoiceOption("type_system", "Type system to use when RTyping", ["lltype", "ootype"], cmdline=None, default="lltype", requires={ @@ -66,7 +69,8 @@ "statistics": [("translation.gctransformer", "framework")], "generation": [("translation.gctransformer", "framework")], "hybrid": [("translation.gctransformer", "framework")], - "boehm": [("translation.gctransformer", "boehm")], + "boehm": [("translation.gctransformer", "boehm"), + ("translation.continuation", False)], # breaks "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], }, @@ -109,7 +113,7 @@ BoolOption("jit", "generate a JIT", default=False, suggests=[("translation.gc", DEFL_GC), - ("translation.gcrootfinder", "asmgcc"), + ("translation.gcrootfinder", DEFL_ROOTFINDER_WITHJIT), ("translation.list_comprehension_operations", True)]), ChoiceOption("jit_backend", "choose the backend for the JIT", ["auto", "x86", "x86-without-sse2", "llvm"], @@ -140,7 +144,10 @@ ["annotate", "rtype", "backendopt", "database", "source", "pyjitpl"], default=None, cmdline="--fork-before"), - + BoolOption("dont_write_c_files", + "Make the C backend write everyting to /dev/null. " + + "Useful for benchmarking, so you don't actually involve the disk", + default=False, cmdline="--dont-write-c-files"), ArbitraryOption("instrumentctl", "internal", default=None), StrOption("output", "Output file name", cmdline="--output"), @@ -382,8 +389,6 @@ config.translation.suggest(withsmallfuncsets=5) elif word == 'jit': config.translation.suggest(jit=True) - if config.translation.stackless: - raise NotImplementedError("JIT conflicts with stackless for now") elif word == 'removetypeptr': config.translation.suggest(gcremovetypeptr=True) else: diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/_ref.txt b/pypy/doc/_ref.txt --- a/pypy/doc/_ref.txt +++ b/pypy/doc/_ref.txt @@ -1,11 +1,10 @@ .. _`ctypes_configure/doc/sample.py`: https://bitbucket.org/pypy/pypy/src/default/ctypes_configure/doc/sample.py .. _`demo/`: https://bitbucket.org/pypy/pypy/src/default/demo/ -.. _`demo/pickle_coroutine.py`: https://bitbucket.org/pypy/pypy/src/default/demo/pickle_coroutine.py .. _`lib-python/`: https://bitbucket.org/pypy/pypy/src/default/lib-python/ .. _`lib-python/2.7/dis.py`: https://bitbucket.org/pypy/pypy/src/default/lib-python/2.7/dis.py .. _`lib_pypy/`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/ +.. _`lib_pypy/greenlet.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/greenlet.py .. _`lib_pypy/pypy_test/`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/pypy_test/ -.. _`lib_pypy/stackless.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/stackless.py .. _`lib_pypy/tputil.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/tputil.py .. _`pypy/annotation`: .. _`pypy/annotation/`: https://bitbucket.org/pypy/pypy/src/default/pypy/annotation/ @@ -55,7 +54,6 @@ .. _`pypy/module`: .. _`pypy/module/`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/ .. _`pypy/module/__builtin__/__init__.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/__builtin__/__init__.py -.. _`pypy/module/_stackless/test/test_composable_coroutine.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/_stackless/test/test_composable_coroutine.py .. _`pypy/objspace`: .. _`pypy/objspace/`: https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/ .. _`pypy/objspace/dump.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/dump.py @@ -117,6 +115,7 @@ .. _`pypy/translator/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/ .. _`pypy/translator/backendopt/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/backendopt/ .. _`pypy/translator/c/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/ +.. _`pypy/translator/c/src/stacklet/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/src/stacklet/ .. _`pypy/translator/cli/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/cli/ .. _`pypy/translator/goal/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/goal/ .. _`pypy/translator/jvm/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/jvm/ diff --git a/pypy/doc/architecture.rst b/pypy/doc/architecture.rst --- a/pypy/doc/architecture.rst +++ b/pypy/doc/architecture.rst @@ -153,7 +153,7 @@ * Optionally, `various transformations`_ can then be applied which, for example, perform optimizations such as inlining, add capabilities - such as stackless_-style concurrency, or insert code for the + such as stackless-style concurrency (deprecated), or insert code for the `garbage collector`_. * Then, the graphs are converted to source code for the target platform @@ -255,7 +255,6 @@ .. _Python: http://docs.python.org/reference/ .. _Psyco: http://psyco.sourceforge.net -.. _stackless: stackless.html .. _`generate Just-In-Time Compilers`: jit/index.html .. _`JIT Generation in PyPy`: jit/index.html .. _`implement your own interpreter`: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -929,6 +929,19 @@ located in the ``py/bin/`` directory. For switches to modify test execution pass the ``-h`` option. +Coverage reports +---------------- + +In order to get coverage reports the `pytest-cov`_ plugin is included. +it adds some extra requirements ( coverage_ and `cov-core`_ ) +and can once they are installed coverage testing can be invoked via:: + + python test_all.py --cov file_or_direcory_to_cover file_or_directory + +.. _`pytest-cov`: http://pypi.python.org/pypi/pytest-cov +.. _`coverage`: http://pypi.python.org/pypi/coverage +.. _`cov-core`: http://pypi.python.org/pypi/cov-core + Test conventions ---------------- diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.5' +version = '1.6' # The full version, including alpha/beta/rc tags. -release = '1.5' +release = '1.6' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. _taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/config/objspace.std.withidentitydict.txt b/pypy/doc/config/objspace.std.withidentitydict.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withidentitydict.txt @@ -0,0 +1,21 @@ +============================= +objspace.std.withidentitydict +============================= + +* **name:** withidentitydict + +* **description:** enable a dictionary strategy for "by identity" comparisons + +* **command-line:** --objspace-std-withidentitydict + +* **command-line for negation:** --no-objspace-std-withidentitydict + +* **option type:** boolean option + +* **default:** True + + +Enable a dictionary strategy specialized for instances of classes which +compares "by identity", which is the default unless you override ``__hash__``, +``__eq__`` or ``__cmp__``. This strategy will be used only with new-style +classes. diff --git a/pypy/doc/config/objspace.usemodules._stackless.txt b/pypy/doc/config/objspace.usemodules._continuation.txt rename from pypy/doc/config/objspace.usemodules._stackless.txt rename to pypy/doc/config/objspace.usemodules._continuation.txt --- a/pypy/doc/config/objspace.usemodules._stackless.txt +++ b/pypy/doc/config/objspace.usemodules._continuation.txt @@ -1,6 +1,4 @@ -Use the '_stackless' module. +Use the '_continuation' module. -Exposes the `stackless` primitives, and also implies a stackless build. -See also :config:`translation.stackless`. - -.. _`stackless`: ../stackless.html +Exposes the `continulet` app-level primitives. +See also :config:`translation.continuation`. diff --git a/pypy/doc/config/objspace.usemodules.pwd.txt b/pypy/doc/config/objspace.usemodules.pwd.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.usemodules.pwd.txt @@ -0,0 +1,2 @@ +Use the 'pwd' module. +This module is expected to be fully working. diff --git a/pypy/doc/config/translation.stackless.txt b/pypy/doc/config/translation.continuation.txt rename from pypy/doc/config/translation.stackless.txt rename to pypy/doc/config/translation.continuation.txt --- a/pypy/doc/config/translation.stackless.txt +++ b/pypy/doc/config/translation.continuation.txt @@ -1,5 +1,2 @@ -Run the `stackless transform`_ on each generated graph, which enables the use -of coroutines at RPython level and the "stackless" module when translating -PyPy. - -.. _`stackless transform`: ../stackless.html +Enable the use of a stackless-like primitive called "stacklet". +In PyPy, this is exposed at app-level by the "_continuation" module. diff --git a/pypy/doc/config/translation.dont_write_c_files.txt b/pypy/doc/config/translation.dont_write_c_files.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.dont_write_c_files.txt @@ -0,0 +1,4 @@ +write the generated C files to ``/dev/null`` instead of to the disk. Useful if +you want to use translate.py as a benchmark and don't want to access the disk. + +.. _`translation documentation`: ../translation.html diff --git a/pypy/doc/config/translation.gc.txt b/pypy/doc/config/translation.gc.txt --- a/pypy/doc/config/translation.gc.txt +++ b/pypy/doc/config/translation.gc.txt @@ -1,4 +1,6 @@ -Choose the Garbage Collector used by the translated program: +Choose the Garbage Collector used by the translated program. +The good performing collectors are "hybrid" and "minimark". +The default is "minimark". - "ref": reference counting. Takes very long to translate and the result is slow. @@ -11,3 +13,12 @@ older generation. - "boehm": use the Boehm conservative GC. + + - "hybrid": a hybrid collector of "generation" together with a + mark-n-sweep old space + + - "markcompact": a slow, but memory-efficient collector, + influenced e.g. by Smalltalk systems. + + - "minimark": a generational mark-n-sweep collector with good + performance. Includes page marking for large arrays. diff --git a/pypy/doc/contributor.rst b/pypy/doc/contributor.rst --- a/pypy/doc/contributor.rst +++ b/pypy/doc/contributor.rst @@ -9,22 +9,22 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Antonio Cuni Amaury Forgeot d'Arc - Antonio Cuni Samuele Pedroni Michael Hudson Holger Krekel + Benjamin Peterson Christian Tismer - Benjamin Peterson + Hakan Ardo + Alex Gaynor Eric van Riet Paap - Anders Chrigström - Håkan Ardö + Anders Chrigstrom + David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer - Alex Gaynor - David Schneider - Aurelién Campeas + Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann @@ -35,16 +35,17 @@ Bartosz Skowron Jakub Gustak Guido Wesdorp + Daniel Roberts Adrien Di Mascio Laura Creighton Ludovic Aubry Niko Matsakis - Daniel Roberts Jason Creighton - Jacob Hallén + Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij + Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -55,9 +56,13 @@ Alexandre Fayolle Marius Gedminas Simon Burton + Justin Peel Jean-Paul Calderone John Witulski + Lukas Diekmann + holger krekel Wim Lavrijsen + Dario Bertini Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum @@ -69,15 +74,16 @@ Georg Brandl Gerald Klix Wanja Saatkamp + Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz - Dario Bertini David Malcolm Eugene Oden Henry Mason + Sven Hager Lukas Renggli + Ilya Osadchiy Guenter Jantzen - Ronny Pfannschmidt Bert Freudenberg Amit Regmi Ben Young @@ -94,8 +100,8 @@ Jared Grubb Karl Bartel Gabriel Lavoie + Victor Stinner Brian Dorsey - Victor Stinner Stuart Williams Toby Watson Antoine Pitrou @@ -106,19 +112,23 @@ Jonathan David Riehl Elmo Mäntynen Anders Qvist - Beatrice Düring + Beatrice During Alexander Sedov + Timo Paulssen + Corbin Simpson Vincent Legoll + Romain Guillebert Alan McIntyre - Romain Guillebert Alex Perry Jens-Uwe Mager + Simon Cross Dan Stromberg - Lukas Diekmann + Guillebert Romain Carl Meyer Pieter Zieschang Alejandro J. Cura Sylvain Thenault + Christoph Gerum Travis Francis Athougies Henrik Vendelbo Lutz Paelike @@ -129,6 +139,7 @@ Miguel de Val Borro Ignas Mikalajunas Artur Lisiecki + Philip Jenvey Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -137,24 +148,29 @@ Gustavo Niemeyer William Leslie Akira Li - Kristján Valur Jónsson + Kristjan Valur Jonsson Bobby Impollonia + Michael Hudson-Doyle Andrew Thompson Anders Sigfridsson + Floris Bruynooghe Jacek Generowicz Dan Colish - Sven Hager Zooko Wilcox-O Hearn + Dan Villiom Podlaski Christiansen Anders Hammarquist + Chris Lambacher Dinu Gherman Dan Colish + Brett Cannon Daniel Neuhäuser Michael Chermside Konrad Delong Anna Ravencroft Greg Price Armin Ronacher + Christian Muirhead Jim Baker - Philip Jenvey Rodrigo Araújo + Romain Guillebert diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -24,6 +24,7 @@ _bisect _codecs _collections + `_continuation`_ `_ffi`_ _hashlib _io @@ -84,9 +85,12 @@ _winreg - Extra module with Stackless_ only: - - _stackless + Note that only some of these modules are built-in in a typical + CPython installation, and the rest is from non built-in extension + modules. This means that e.g. ``import parser`` will, on CPython, + find a local file ``parser.py``, while ``import sys`` will not find a + local file ``sys.py``. In PyPy the difference does not exist: all + these modules are built-in. * Supported by being rewritten in pure Python (possibly using ``ctypes``): see the `lib_pypy/`_ directory. Examples of modules that we @@ -101,11 +105,11 @@ .. the nonstandard modules are listed below... .. _`__pypy__`: __pypy__-module.html +.. _`_continuation`: stackless.html .. _`_ffi`: ctypes-implementation.html .. _`_rawffi`: ctypes-implementation.html .. _`_minimal_curses`: config/objspace.usemodules._minimal_curses.html .. _`cpyext`: http://morepypy.blogspot.com/2010/04/using-cpython-extension-modules-with.html -.. _Stackless: stackless.html Differences related to garbage collection strategies @@ -211,6 +215,38 @@ >>>> print d1['a'] 42 +Mutating classes of objects which are already used as dictionary keys +--------------------------------------------------------------------- + +Consider the following snippet of code:: + + class X(object): + pass + + def __evil_eq__(self, other): + print 'hello world' + return False + + def evil(y): + d = {x(): 1} + X.__eq__ = __evil_eq__ + d[y] # might trigger a call to __eq__? + +In CPython, __evil_eq__ **might** be called, although there is no way to write +a test which reliably calls it. It happens if ``y is not x`` and ``hash(y) == +hash(x)``, where ``hash(x)`` is computed when ``x`` is inserted into the +dictionary. If **by chance** the condition is satisfied, then ``__evil_eq__`` +is called. + +PyPy uses a special strategy to optimize dictionaries whose keys are instances +of user-defined classes which do not override the default ``__hash__``, +``__eq__`` and ``__cmp__``: when using this strategy, ``__eq__`` and +``__cmp__`` are never called, but instead the lookup is done by identity, so +in the case above it is guaranteed that ``__eq__`` won't be called. + +Note that in all other cases (e.g., if you have a custom ``__hash__`` and +``__eq__`` in ``y``) the behavior is exactly the same as CPython. + Ignored exceptions ----------------------- @@ -248,7 +284,14 @@ never a dictionary as it sometimes is in CPython. Assigning to ``__builtins__`` has no effect. -* object identity of immutable keys in dictionaries is not necessarily preserved. - Never compare immutable objects with ``is``. +* Do not compare immutable objects with ``is``. For example on CPython + it is true that ``x is 0`` works, i.e. does the same as ``type(x) is + int and x == 0``, but it is so by accident. If you do instead + ``x is 1000``, then it stops working, because 1000 is too large and + doesn't come from the internal cache. In PyPy it fails to work in + both cases, because we have no need for a cache at all. + +* Also, object identity of immutable keys in dictionaries is not necessarily + preserved. .. include:: _ref.txt diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -19,12 +19,12 @@ section * Write them in pure python and use direct libffi low-level bindings, See - \_rawffi_ module description. + \_ffi_ module description. * Write them in RPython as mixedmodule_, using *rffi* as bindings. .. _ctypes: #CTypes -.. _\_rawffi: #LibFFI +.. _\_ffi: #LibFFI .. _mixedmodule: #Mixed Modules CTypes @@ -42,41 +42,50 @@ platform-dependent details (compiling small snippets of C code and running them), so it'll benefit not pypy-related ctypes-based modules as well. +ctypes call are optimized by the JIT and the resulting machine code contains a +direct call to the target C function. However, due to the very dynamic nature +of ctypes, some overhead over a bare C call is still present, in particular to +check/convert the types of the parameters. Moreover, even if most calls are +optimized, some cannot and thus need to follow the slow path, not optimized by +the JIT. + .. _`ctypes-configure`: ctypes-implementation.html#ctypes-configure +.. _`CPython ctypes`: http://docs.python.org/library/ctypes.html Pros ---- -Stable, CPython-compatible API +Stable, CPython-compatible API. Most calls are fast, optimized by JIT. Cons ---- -Only pure-python code (slow), problems with platform-dependency (although -we partially solve those). PyPy implementation is now very slow. +Problems with platform-dependency (although we partially solve +those). Although the JIT optimizes ctypes calls, some overhead is still +present. The slow-path is very slow. -_`CPython ctypes`: http://python.net/crew/theller/ctypes/ LibFFI ====== Mostly in order to be able to write a ctypes module, we developed a very -low-level libffi bindings. (libffi is a C-level library for dynamic calling, +low-level libffi bindings called ``_ffi``. (libffi is a C-level library for dynamic calling, which is used by CPython ctypes). This library provides stable and usable API, although it's API is a very low-level one. It does not contain any -magic. +magic. It is also optimized by the JIT, but has much less overhead than ctypes. Pros ---- -Works. Combines disadvantages of using ctypes with disadvantages of -using mixed modules. Probably more suitable for a delicate code -where ctypes magic goes in a way. +It Works. Probably more suitable for a delicate code where ctypes magic goes +in a way. All calls are optimized by the JIT, there is no slow path as in +ctypes. Cons ---- -Slow. CPython-incompatible API, very rough and low-level +It combines disadvantages of using ctypes with disadvantages of using mixed +modules. CPython-incompatible API, very rough and low-level. Mixed Modules ============= @@ -87,15 +96,15 @@ * a mixed module needs to be written in RPython, which is far more complicated than Python (XXX link) -* due to lack of separate compilation (as of April 2008), each +* due to lack of separate compilation (as of July 2011), each compilation-check requires to recompile whole PyPy python interpreter, which takes 0.5-1h. We plan to solve this at some point in near future. * although rpython is a garbage-collected language, the border between C and RPython needs to be managed by hand (each object that goes into the - C level must be explicitly freed) XXX we try to solve this + C level must be explicitly freed). -Some document is available `here`_ +Some documentation is available `here`_ .. _`here`: rffi.html diff --git a/pypy/doc/faq.rst b/pypy/doc/faq.rst --- a/pypy/doc/faq.rst +++ b/pypy/doc/faq.rst @@ -315,6 +315,28 @@ .. _`Andrew Brown's tutorial`: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html +--------------------------------------------------------- +Can RPython modules for PyPy be translated independently? +--------------------------------------------------------- + +No, you have to rebuild the entire interpreter. This means two things: + +* It is imperative to use test-driven development. You have to test + exhaustively your module in pure Python, before even attempting to + translate it. Once you translate it, you should have only a few typing + issues left to fix, but otherwise the result should work out of the box. + +* Second, and perhaps most important: do you have a really good reason + for writing the module in RPython in the first place? Nowadays you + should really look at alternatives, like writing it in pure Python, + using ctypes if it needs to call C code. Other alternatives are being + developed too (as of summer 2011), like a Cython binding. + +In this context it is not that important to be able to translate +RPython modules independently of translating the complete interpreter. +(It could be done given enough efforts, but it's a really serious +undertaking. Consider it as quite unlikely for now.) + ---------------------------------------------------------- Why does PyPy draw a Mandelbrot fractal while translating? ---------------------------------------------------------- diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -147,7 +147,7 @@ You can read more about them at the start of `pypy/rpython/memory/gc/minimark.py`_. -In more details: +In more detail: - The small newly malloced objects are allocated in the nursery (case 1). All objects living in the nursery are "young". diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -32,7 +32,10 @@ .. _`windows document`: windows.html You can translate the whole of PyPy's Python interpreter to low level C code, -or `CLI code`_. +or `CLI code`_. If you intend to build using gcc, check to make sure that +the version you have is not 4.2 or you will run into `this bug`_. + +.. _`this bug`: https://bugs.launchpad.net/ubuntu/+source/gcc-4.2/+bug/187391 1. First `download a pre-built PyPy`_ for your architecture which you will use to translate your Python interpreter. It is, of course, possible to @@ -64,7 +67,6 @@ * ``libssl-dev`` (for the optional ``_ssl`` module) * ``libgc-dev`` (for the Boehm garbage collector: only needed when translating with `--opt=0, 1` or `size`) * ``python-sphinx`` (for the optional documentation build. You need version 1.0.7 or later) - * ``python-greenlet`` (for the optional stackless support in interpreted mode/testing) 3. Translation is time-consuming -- 45 minutes on a very fast machine -- @@ -102,7 +104,7 @@ $ ./pypy-c Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2 + [PyPy 1.6.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 @@ -117,19 +119,8 @@ Installation_ below. The ``translate.py`` script takes a very large number of options controlling -what to translate and how. See ``translate.py -h``. Some of the more -interesting options (but for now incompatible with the JIT) are: - - * ``--stackless``: this produces a pypy-c that includes features - inspired by `Stackless Python `__. - - * ``--gc=boehm|ref|marknsweep|semispace|generation|hybrid|minimark``: - choose between using - the `Boehm-Demers-Weiser garbage collector`_, our reference - counting implementation or one of own collector implementations - (the default depends on the optimization level but is usually - ``minimark``). - +what to translate and how. See ``translate.py -h``. The default options +should be suitable for mostly everybody by now. Find a more detailed description of the various options in our `configuration sections`_. @@ -162,7 +153,7 @@ $ ./pypy-cli Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0] on linux2 + [PyPy 1.6.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``distopian and utopian chairs'' >>>> @@ -199,7 +190,7 @@ $ ./pypy-jvm Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0] on linux2 + [PyPy 1.6.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``# assert did not crash'' >>>> @@ -238,7 +229,7 @@ the ``bin/pypy`` executable. To install PyPy system wide on unix-like systems, it is recommended to put the -whole hierarchy alone (e.g. in ``/opt/pypy1.5``) and put a symlink to the +whole hierarchy alone (e.g. in ``/opt/pypy1.6``) and put a symlink to the ``pypy`` executable into ``/usr/bin`` or ``/usr/local/bin`` If the executable fails to find suitable libraries, it will report diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,11 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.5-linux.tar.bz2 + $ tar xf pypy-1.6-linux.tar.bz2 - $ ./pypy-1.5-linux/bin/pypy + $ ./pypy-1.6/bin/pypy Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2 + [PyPy 1.6.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -73,16 +73,16 @@ $ curl -O http://python-distribute.org/distribute_setup.py - $ curl -O https://github.com/pypa/pip/raw/master/contrib/get-pip.py + $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.5-linux/bin/pypy distribute_setup.py + $ ./pypy-1.6/bin/pypy distribute_setup.py - $ ./pypy-1.5-linux/bin/pypy get-pip.py + $ ./pypy-1.6/bin/pypy get-pip.py - $ ./pypy-1.5-linux/bin/pip install pygments # for example + $ ./pypy-1.6/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.5-linux/site-packages``, and -the scripts in ``pypy-1.5-linux/bin``. +3rd party libraries will be installed in ``pypy-1.6/site-packages``, and +the scripts in ``pypy-1.6/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -21,8 +21,8 @@ Release Steps ---------------- -* at code freeze make a release branch under - http://codepeak.net/svn/pypy/release/x.y(.z). IMPORTANT: bump the +* at code freeze make a release branch using release-x.x.x in mercurial + IMPORTANT: bump the pypy version number in module/sys/version.py and in module/cpyext/include/patchlevel.h, notice that the branch will capture the revision number of this change for the release; @@ -42,18 +42,11 @@ JIT: windows, linux, os/x no JIT: windows, linux, os/x sandbox: linux, os/x - stackless: windows, linux, os/x * write release announcement pypy/doc/release-x.y(.z).txt the release announcement should contain a direct link to the download page * update pypy.org (under extradoc/pypy.org), rebuild and commit -* update http://codespeak.net/pypy/trunk: - code0> + chmod -R yourname:users /www/codespeak.net/htdocs/pypy/trunk - local> cd ..../pypy/doc && py.test - local> cd ..../pypy - local> rsync -az doc codespeak.net:/www/codespeak.net/htdocs/pypy/trunk/pypy/ - * post announcement on morepypy.blogspot.com * send announcements to pypy-dev, python-list, python-announce, python-dev ... diff --git a/pypy/doc/index-of-release-notes.rst b/pypy/doc/index-of-release-notes.rst --- a/pypy/doc/index-of-release-notes.rst +++ b/pypy/doc/index-of-release-notes.rst @@ -16,3 +16,4 @@ release-1.4.0beta.rst release-1.4.1.rst release-1.5.0.rst + release-1.6.0.rst diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,14 +15,12 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.5`_: the latest official release +* `Release 1.6`_: the latest official release * `PyPy Blog`_: news and status info about PyPy * `Papers`_: Academic papers, talks, and related projects -* `Videos`_: Videos of PyPy talks and presentations - * `speed.pypy.org`_: Daily benchmarks of how fast PyPy is * `potential project ideas`_: In case you want to get your feet wet... @@ -35,7 +33,7 @@ * `Differences between PyPy and CPython`_ * `What PyPy can do for your objects`_ - * `Stackless and coroutines`_ + * `Continulets and greenlets`_ * `JIT Generation in PyPy`_ * `Sandboxing Python code`_ @@ -77,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.5`: http://pypy.org/download.html +.. _`Release 1.6`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -122,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.5`__. +instead of the latest release, which is `1.6`__. -.. __: release-1.5.0.html +.. __: release-1.6.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -292,8 +290,6 @@ `pypy/translator/jvm/`_ the Java backend -`pypy/translator/stackless/`_ the `Stackless Transform`_ - `pypy/translator/tool/`_ helper tools for translation, including the Pygame `graph viewer`_ @@ -313,12 +309,11 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. _`Differences between PyPy and CPython`: cpython_differences.html .. _`What PyPy can do for your objects`: objspace-proxies.html -.. _`Stackless and coroutines`: stackless.html +.. _`Continulets and greenlets`: stackless.html .. _StdObjSpace: objspace.html#the-standard-object-space .. _`abstract interpretation`: http://en.wikipedia.org/wiki/Abstract_interpretation .. _`rpython`: coding-guide.html#rpython @@ -337,7 +332,6 @@ .. _`low-level type system`: rtyper.html#low-level-type .. _`object-oriented type system`: rtyper.html#oo-type .. _`garbage collector`: garbage_collection.html -.. _`Stackless Transform`: translation.html#the-stackless-transform .. _`main PyPy-translation scripts`: getting-started-python.html#translating-the-pypy-python-interpreter .. _`.NET`: http://www.microsoft.com/net/ .. _Mono: http://www.mono-project.com/ diff --git a/pypy/doc/jit/pyjitpl5.rst b/pypy/doc/jit/pyjitpl5.rst --- a/pypy/doc/jit/pyjitpl5.rst +++ b/pypy/doc/jit/pyjitpl5.rst @@ -103,7 +103,7 @@ The meta-interpreter starts interpreting the JIT bytecode. Each operation is executed and then recorded in a list of operations, called the trace. -Operations can have a list of boxes that operate on, arguments. Some operations +Operations can have a list of boxes they operate on, arguments. Some operations (like GETFIELD and GETARRAYITEM) also have special objects that describe how their arguments are laid out in memory. All possible operations generated by tracing are listed in metainterp/resoperation.py. When a (interpreter-level) diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ @@ -48,17 +54,23 @@ .. image:: image/jitviewer.png -We would like to add one level to this hierarchy, by showing the generated -machine code for each jit operation. The necessary information is already in -the log file produced by the JIT, so it is "only" a matter of teaching the -jitviewer to display it. Ideally, the machine code should be hidden by -default and viewable on request. - The jitviewer is a web application based on flask and jinja2 (and jQuery on the client): if you have great web developing skills and want to help PyPy, this is an ideal task to get started, because it does not require any deep knowledge of the internals. +Optimized Unicode Representation +-------------------------------- + +CPython 3.3 will use an `optimized unicode representation`_ which switches between +different ways to represent a unicode string, depending on whether the string +fits into ASCII, has only two-byte characters or needs four-byte characters. + +The actual details would be rather differen in PyPy, but we would like to have +the same optimization implemented. + +.. _`optimized unicode representation`: http://www.python.org/dev/peps/pep-0393/ + Translation Toolchain --------------------- diff --git a/pypy/doc/release-1.6.0.rst b/pypy/doc/release-1.6.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.6.0.rst @@ -0,0 +1,95 @@ +======================== +PyPy 1.6 - kickass panda +======================== + +We're pleased to announce the 1.6 release of PyPy. This release brings a lot +of bugfixes and performance improvements over 1.5, and improves support for +Windows 32bit and OS X 64bit. This version fully implements Python 2.7.1 and +has beta level support for loading CPython C extensions. You can download it +here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7.1. It's fast (`pypy 1.5 and cpython 2.6.2`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64 or Mac OS X. Windows 32 +is beta (it roughly works but a lot of small issues have not been fixed so +far). Windows 64 is not yet supported. + +The main topics of this release are speed and stability: on average on +our benchmark suite, PyPy 1.6 is between **20% and 30%** faster than PyPy 1.5, +which was already much faster than CPython on our set of benchmarks. + +The speed improvements have been made possible by optimizing many of the +layers which compose PyPy. In particular, we improved: the Garbage Collector, +the JIT warmup time, the optimizations performed by the JIT, the quality of +the generated machine code and the implementation of our Python interpreter. + +.. _`pypy 1.5 and cpython 2.6.2`: http://speed.pypy.org + + +Highlights +========== + +* Numerous performance improvements, overall giving considerable speedups: + + - better GC behavior when dealing with very large objects and arrays + + - **fast ctypes:** now calls to ctypes functions are seen and optimized + by the JIT, and they are up to 60 times faster than PyPy 1.5 and 10 times + faster than CPython + + - improved generators(1): simple generators now are inlined into the caller + loop, making performance up to 3.5 times faster than PyPy 1.5. + + - improved generators(2): thanks to other optimizations, even generators + that are not inlined are between 10% and 20% faster than PyPy 1.5. + + - faster warmup time for the JIT + + - JIT support for single floats (e.g., for ``array('f')``) + + - optimized dictionaries: the internal representation of dictionaries is now + dynamically selected depending on the type of stored objects, resulting in + faster code and smaller memory footprint. For example, dictionaries whose + keys are all strings, or all integers. Other dictionaries are also smaller + due to bugfixes. + +* JitViewer: this is the first official release which includes the JitViewer, + a web-based tool which helps you to see which parts of your Python code have + been compiled by the JIT, down until the assembler. The `jitviewer`_ 0.1 has + already been release and works well with PyPy 1.6. + +* The CPython extension module API has been improved and now supports many + more extensions. For information on which one are supported, please refer to + our `compatibility wiki`_. + +* Multibyte encoding support: this was of of the last areas in which we were + still behind CPython, but now we fully support them. + +* Preliminary support for NumPy: this release includes a preview of a very + fast NumPy module integrated with the PyPy JIT. Unfortunately, this does + not mean that you can expect to take an existing NumPy program and run it on + PyPy, because the module is still unfinished and supports only some of the + numpy API. However, barring some details, what works should be + blazingly fast :-) + +* Bugfixes: since the 1.5 release we fixed 53 bugs in our `bug tracker`_, not + counting the numerous bugs that were found and reported through other + channels than the bug tracker. + +Cheers, + +Hakan Ardo, Carl Friedrich Bolz, Laura Creighton, Antonio Cuni, +Maciej Fijalkowski, Amaury Forgeot d'Arc, Alex Gaynor, +Armin Rigo and the PyPy team + +.. _`jitviewer`: http://morepypy.blogspot.com/2011/08/visualization-of-jitted-code.html +.. _`bug tracker`: https://bugs.pypy.org +.. _`compatibility wiki`: https://bitbucket.org/pypy/compatibility/wiki/Home + diff --git a/pypy/doc/rlib.rst b/pypy/doc/rlib.rst --- a/pypy/doc/rlib.rst +++ b/pypy/doc/rlib.rst @@ -134,69 +134,6 @@ a hierarchy of Address classes, in a typical static-OO-programming style. -``rstack`` -========== - -The `pypy/rlib/rstack.py`_ module allows an RPython program to control its own execution stack. -This is only useful if the program is translated using stackless. An old -description of the exposed functions is below. - -We introduce an RPython type ``frame_stack_top`` and a built-in function -``yield_current_frame_to_caller()`` that work as follows (see example below): - -* The built-in function ``yield_current_frame_to_caller()`` causes the current - function's state to be captured in a new ``frame_stack_top`` object that is - returned to the parent. Only one frame, the current one, is captured this - way. The current frame is suspended and the caller continues to run. Note - that the caller is only resumed once: when - ``yield_current_frame_to_caller()`` is called. See below. - -* A ``frame_stack_top`` object can be jumped to by calling its ``switch()`` - method with no argument. - -* ``yield_current_frame_to_caller()`` and ``switch()`` themselves return a new - ``frame_stack_top`` object: the freshly captured state of the caller of the - source ``switch()`` that was just executed, or None in the case described - below. - -* the function that called ``yield_current_frame_to_caller()`` also has a - normal return statement, like all functions. This statement must return - another ``frame_stack_top`` object. The latter is *not* returned to the - original caller; there is no way to return several times to the caller. - Instead, it designates the place to which the execution must jump, as if by - a ``switch()``. The place to which we jump this way will see a None as the - source frame stack top. - -* every frame stack top must be resumed once and only once. Not resuming - it at all causes a leak. Resuming it several times causes a crash. - -* a function that called ``yield_current_frame_to_caller()`` should not raise. - It would have no implicit parent frame to propagate the exception to. That - would be a crashingly bad idea. - -The following example would print the numbers from 1 to 7 in order:: - - def g(): - print 2 - frametop_before_5 = yield_current_frame_to_caller() - print 4 - frametop_before_7 = frametop_before_5.switch() - print 6 - return frametop_before_7 - - def f(): - print 1 - frametop_before_4 = g() - print 3 - frametop_before_6 = frametop_before_4.switch() - print 5 - frametop_after_return = frametop_before_6.switch() - print 7 - assert frametop_after_return is None - - f() - - ``streamio`` ============ diff --git a/pypy/doc/stackless.rst b/pypy/doc/stackless.rst --- a/pypy/doc/stackless.rst +++ b/pypy/doc/stackless.rst @@ -8,446 +8,312 @@ ================ PyPy can expose to its user language features similar to the ones -present in `Stackless Python`_: **no recursion depth limit**, and the -ability to write code in a **massively concurrent style**. It actually -exposes three different paradigms to choose from: +present in `Stackless Python`_: the ability to write code in a +**massively concurrent style**. (It does not (any more) offer the +ability to run with no `recursion depth limit`_, but the same effect +can be achieved indirectly.) -* `Tasklets and channels`_; +This feature is based on a custom primitive called a continulet_. +Continulets can be directly used by application code, or it is possible +to write (entirely at app-level) more user-friendly interfaces. -* Greenlets_; +Currently PyPy implements greenlets_ on top of continulets. It would be +easy to implement tasklets and channels as well, emulating the model +of `Stackless Python`_. -* Plain coroutines_. +Continulets are extremely light-weight, which means that PyPy should be +able to handle programs containing large amounts of them. However, due +to an implementation restriction, a PyPy compiled with +``--gcrootfinder=shadowstack`` consumes at least one page of physical +memory (4KB) per live continulet, and half a megabyte of virtual memory +on 32-bit or a complete megabyte on 64-bit. Moreover, the feature is +only available (so far) on x86 and x86-64 CPUs; for other CPUs you need +to add a short page of custom assembler to +`pypy/translator/c/src/stacklet/`_. -All of them are extremely light-weight, which means that PyPy should be -able to handle programs containing large amounts of coroutines, tasklets -and greenlets. +Theory +====== -Requirements -++++++++++++++++ +The fundamental idea is that, at any point in time, the program happens +to run one stack of frames (or one per thread, in case of +multi-threading). To see the stack, start at the top frame and follow +the chain of ``f_back`` until you reach the bottom frame. From the +point of view of one of these frames, it has a ``f_back`` pointing to +another frame (unless it is the bottom frame), and it is itself being +pointed to by another frame (unless it is the top frame). -If you are running py.py on top of CPython, then you need to enable -the _stackless module by running it as follows:: +The theory behind continulets is to literally take the previous sentence +as definition of "an O.K. situation". The trick is that there are +O.K. situations that are more complex than just one stack: you will +always have one stack, but you can also have in addition one or more +detached *cycles* of frames, such that by following the ``f_back`` chain +you run in a circle. But note that these cycles are indeed completely +detached: the top frame (the currently running one) is always the one +which is not the ``f_back`` of anybody else, and it is always the top of +a stack that ends with the bottom frame, never a part of these extra +cycles. - py.py --withmod-_stackless +How do you create such cycles? The fundamental operation to do so is to +take two frames and *permute* their ``f_back`` --- i.e. exchange them. +You can permute any two ``f_back`` without breaking the rule of "an O.K. +situation". Say for example that ``f`` is some frame halfway down the +stack, and you permute its ``f_back`` with the ``f_back`` of the top +frame. Then you have removed from the normal stack all intermediate +frames, and turned them into one stand-alone cycle. By doing the same +permutation again you restore the original situation. -This is implemented internally using greenlets, so it only works on a -platform where `greenlets`_ are supported. A few features do -not work this way, though, and really require a translated -``pypy-c``. +In practice, in PyPy, you cannot change the ``f_back`` of an abitrary +frame, but only of frames stored in ``continulets``. -To obtain a translated version of ``pypy-c`` that includes Stackless -support, run translate.py as follows:: - - cd pypy/translator/goal - python translate.py --stackless +Continulets are internally implemented using stacklets_. Stacklets are a +bit more primitive (they are really one-shot continuations), but that +idea only works in C, not in Python. The basic idea of continulets is +to have at any point in time a complete valid stack; this is important +e.g. to correctly propagate exceptions (and it seems to give meaningful +tracebacks too). Application level interface ============================= -A stackless PyPy contains a module called ``stackless``. The interface -exposed by this module have not been refined much, so it should be -considered in-flux (as of 2007). -So far, PyPy does not provide support for ``stackless`` in a threaded -environment. This limitation is not fundamental, as previous experience -has shown, so supporting this would probably be reasonably easy. +.. _continulet: -An interesting point is that the same ``stackless`` module can provide -a number of different concurrency paradigms at the same time. From a -theoretical point of view, none of above-mentioned existing three -paradigms considered on its own is new: two of them are from previous -Python work, and the third one is a variant of the classical coroutine. -The new part is that the PyPy implementation manages to provide all of -them and let the user implement more. Moreover - and this might be an -important theoretical contribution of this work - we manage to provide -these concurrency concepts in a "composable" way. In other words, it -is possible to naturally mix in a single application multiple -concurrency paradigms, and multiple unrelated usages of the same -paradigm. This is discussed in the Composability_ section below. +Continulets ++++++++++++ +A translated PyPy contains by default a module called ``_continuation`` +exporting the type ``continulet``. A ``continulet`` object from this +module is a container that stores a "one-shot continuation". It plays +the role of an extra frame you can insert in the stack, and whose +``f_back`` can be changed. -Infinite recursion -++++++++++++++++++ +To make a continulet object, call ``continulet()`` with a callable and +optional extra arguments. -Any stackless PyPy executable natively supports recursion that is only -limited by the available memory. As in normal Python, though, there is -an initial recursion limit (which is 5000 in all pypy-c's, and 1000 in -CPython). It can be changed with ``sys.setrecursionlimit()``. With a -stackless PyPy, any value is acceptable - use ``sys.maxint`` for -unlimited. +Later, the first time you ``switch()`` to the continulet, the callable +is invoked with the same continulet object as the extra first argument. +At that point, the one-shot continuation stored in the continulet points +to the caller of ``switch()``. In other words you have a perfectly +normal-looking stack of frames. But when ``switch()`` is called again, +this stored one-shot continuation is exchanged with the current one; it +means that the caller of ``switch()`` is suspended with its continuation +stored in the container, and the old continuation from the continulet +object is resumed. -In some cases, you can write Python code that causes interpreter-level -infinite recursion -- i.e. infinite recursion without going via -application-level function calls. It is possible to limit that too, -with ``_stackless.set_stack_depth_limit()``, or to unlimit it completely -by setting it to ``sys.maxint``. +The most primitive API is actually 'permute()', which just permutes the +one-shot continuation stored in two (or more) continulets. +In more details: -Coroutines -++++++++++ +* ``continulet(callable, *args, **kwds)``: make a new continulet. + Like a generator, this only creates it; the ``callable`` is only + actually called the first time it is switched to. It will be + called as follows:: -A Coroutine is similar to a very small thread, with no preemptive scheduling. -Within a family of coroutines, the flow of execution is explicitly -transferred from one to another by the programmer. When execution is -transferred to a coroutine, it begins to execute some Python code. When -it transfers execution away from itself it is temporarily suspended, and -when execution returns to it it resumes its execution from the -point where it was suspended. Conceptually, only one coroutine is -actively running at any given time (but see Composability_ below). + callable(cont, *args, **kwds) -The ``stackless.coroutine`` class is instantiated with no argument. -It provides the following methods and attributes: + where ``cont`` is the same continulet object. -* ``stackless.coroutine.getcurrent()`` + Note that it is actually ``cont.__init__()`` that binds + the continulet. It is also possible to create a not-bound-yet + continulet by calling explicitly ``continulet.__new__()``, and + only bind it later by calling explicitly ``cont.__init__()``. - Static method returning the currently running coroutine. There is a - so-called "main" coroutine object that represents the "outer" - execution context, where your main program started and where it runs - as long as it does not switch to another coroutine. +* ``cont.switch(value=None, to=None)``: start the continulet if + it was not started yet. Otherwise, store the current continuation + in ``cont``, and activate the target continuation, which is the + one that was previously stored in ``cont``. Note that the target + continuation was itself previously suspended by another call to + ``switch()``; this older ``switch()`` will now appear to return. + The ``value`` argument is any object that is carried to the target + and returned by the target's ``switch()``. -* ``coro.bind(callable, *args, **kwds)`` + If ``to`` is given, it must be another continulet object. In + that case, performs a "double switch": it switches as described + above to ``cont``, and then immediately switches again to ``to``. + This is different from switching directly to ``to``: the current + continuation gets stored in ``cont``, the old continuation from + ``cont`` gets stored in ``to``, and only then we resume the + execution from the old continuation out of ``to``. - Bind the coroutine so that it will execute ``callable(*args, - **kwds)``. The call is not performed immediately, but only the - first time we call the ``coro.switch()`` method. A coroutine must - be bound before it is switched to. When the coroutine finishes - (because the call to the callable returns), the coroutine exits and - implicitly switches back to another coroutine (its "parent"); after - this point, it is possible to bind it again and switch to it again. - (Which coroutine is the parent of which is not documented, as it is - likely to change when the interface is refined.) +* ``cont.throw(type, value=None, tb=None, to=None)``: similar to + ``switch()``, except that immediately after the switch is done, raise + the given exception in the target. -* ``coro.switch()`` +* ``cont.is_pending()``: return True if the continulet is pending. + This is False when it is not initialized (because we called + ``__new__`` and not ``__init__``) or when it is finished (because + the ``callable()`` returned). When it is False, the continulet + object is empty and cannot be ``switch()``-ed to. - Suspend the current (caller) coroutine, and resume execution in the - target coroutine ``coro``. +* ``permute(*continulets)``: a global function that permutes the + continuations stored in the given continulets arguments. Mostly + theoretical. In practice, using ``cont.switch()`` is easier and + more efficient than using ``permute()``; the latter does not on + its own change the currently running frame. -* ``coro.kill()`` - Kill ``coro`` by sending a CoroutineExit exception and switching - execution immediately to it. This exception can be caught in the - coroutine itself and can be raised from any call to ``coro.switch()``. - This exception isn't propagated to the parent coroutine. +Genlets ++++++++ -* ``coro.throw(type, value)`` +The ``_continuation`` module also exposes the ``generator`` decorator:: - Insert an exception in ``coro`` an resume switches execution - immediately to it. In the coroutine itself, this exception - will come from any call to ``coro.switch()`` and can be caught. If the - exception isn't caught, it will be propagated to the parent coroutine. + @generator + def f(cont, a, b): + cont.switch(a + b) + cont.switch(a + b + 1) -When a coroutine is garbage-collected, it gets the ``.kill()`` method sent to -it. This happens at the point the next ``.switch`` method is called, so the -target coroutine of this call will be executed only after the ``.kill`` has -finished. + for i in f(10, 20): + print i -Example -~~~~~~~ +This example prints 30 and 31. The only advantage over using regular +generators is that the generator itself is not limited to ``yield`` +statements that must all occur syntactically in the same function. +Instead, we can pass around ``cont``, e.g. to nested sub-functions, and +call ``cont.switch(x)`` from there. -Here is a classical producer/consumer example: an algorithm computes a -sequence of values, while another consumes them. For our purposes we -assume that the producer can generate several values at once, and the -consumer can process up to 3 values in a batch - it can also process -batches with fewer than 3 values without waiting for the producer (which -would be messy to express with a classical Python generator). :: +The ``generator`` decorator can also be applied to methods:: - def producer(lst): - while True: - ...compute some more values... - lst.extend(new_values) - coro_consumer.switch() - - def consumer(lst): - while True: - # First ask the producer for more values if needed - while len(lst) == 0: - coro_producer.switch() - # Process the available values in a batch, but at most 3 - batch = lst[:3] - del lst[:3] - ...process batch... - - # Initialize two coroutines with a shared list as argument - exchangelst = [] - coro_producer = coroutine() - coro_producer.bind(producer, exchangelst) - coro_consumer = coroutine() - coro_consumer.bind(consumer, exchangelst) - - # Start running the consumer coroutine - coro_consumer.switch() - - -Tasklets and channels -+++++++++++++++++++++ - -The ``stackless`` module also provides an interface that is roughly -compatible with the interface of the ``stackless`` module in `Stackless -Python`_: it contains ``stackless.tasklet`` and ``stackless.channel`` -classes. Tasklets are also similar to microthreads, but (like coroutines) -they don't actually run in parallel with other microthreads; instead, -they synchronize and exchange data with each other over Channels, and -these exchanges determine which Tasklet runs next. - -For usage reference, see the documentation on the `Stackless Python`_ -website. - -Note that Tasklets and Channels are implemented at application-level in -`lib_pypy/stackless.py`_ on top of coroutines_. You can refer to this -module for more details and API documentation. - -The stackless.py code tries to resemble the stackless C code as much -as possible. This makes the code somewhat unpythonic. - -Bird's eye view of tasklets and channels -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Tasklets are a bit like threads: they encapsulate a function in such a way that -they can be suspended/restarted any time. Unlike threads, they won't -run concurrently, but must be cooperative. When using stackless -features, it is vitally important that no action is performed that blocks -everything else. In particular, blocking input/output should be centralized -to a single tasklet. - -Communication between tasklets is done via channels. -There are three ways for a tasklet to give up control: - -1. call ``stackless.schedule()`` -2. send something over a channel -3. receive something from a channel - -A (live) tasklet can either be running, waiting to get scheduled, or be -blocked by a channel. - -Scheduling is done in strictly round-robin manner. A blocked tasklet -is removed from the scheduling queue and will be reinserted when it -becomes unblocked. - -Example -~~~~~~~ - -Here is a many-producers many-consumers example, where any consumer can -process the result of any producer. For this situation we set up a -single channel where all producer send, and on which all consumers -wait:: - - def producer(chan): - while True: - chan.send(...next value...) - - def consumer(chan): - while True: - x = chan.receive() - ...do something with x... - - # Set up the N producer and M consumer tasklets - common_channel = stackless.channel() - for i in range(N): - stackless.tasklet(producer, common_channel)() - for i in range(M): - stackless.tasklet(consumer, common_channel)() - - # Run it all - stackless.run() - -Each item sent over the channel is received by one of the waiting -consumers; which one is not specified. The producers block until their -item is consumed: the channel is not a queue, but rather a meeting point -which causes tasklets to block until both a consumer and a producer are -ready. In practice, the reason for having several consumers receiving -on a single channel is that some of the consumers can be busy in other -ways part of the time. For example, each consumer might receive a -database request, process it, and send the result to a further channel -before it asks for the next request. In this situation, further -requests can still be received by other consumers. + class X: + @generator + def f(self, cont, a, b): + ... Greenlets +++++++++ -A Greenlet is a kind of primitive Tasklet with a lower-level interface -and with exact control over the execution order. Greenlets are similar -to Coroutines, with a slightly different interface: greenlets put more -emphasis on a tree structure. The various greenlets of a program form a -precise tree, which fully determines their order of execution. +Greenlets are implemented on top of continulets in `lib_pypy/greenlet.py`_. +See the official `documentation of the greenlets`_. -For usage reference, see the `documentation of the greenlets`_. -The PyPy interface is identical. You should use ``greenlet.greenlet`` -instead of ``stackless.greenlet`` directly, because the greenlet library -can give you the latter when you ask for the former on top of PyPy. +Note that unlike the CPython greenlets, this version does not suffer +from GC issues: if the program "forgets" an unfinished greenlet, it will +always be collected at the next garbage collection. -PyPy's greenlets do not suffer from the cyclic GC limitation that the -CPython greenlets have: greenlets referencing each other via local -variables tend to leak on top of CPython (where it is mostly impossible -to do the right thing). It works correctly on top of PyPy. +Unimplemented features +++++++++++++++++++++++ -Coroutine Pickling -++++++++++++++++++ +The following features (present in some past Stackless version of PyPy) +are for the time being not supported any more: -Coroutines and tasklets can be pickled and unpickled, i.e. serialized to -a string of bytes for the purpose of storage or transmission. This -allows "live" coroutines or tasklets to be made persistent, moved to -other machines, or cloned in any way. The standard ``pickle`` module -works with coroutines and tasklets (at least in a translated ``pypy-c``; -unpickling live coroutines or tasklets cannot be easily implemented on -top of CPython). +* Tasklets and channels (currently ``stackless.py`` seems to import, + but you have tasklets on top of coroutines on top of greenlets on + top of continulets on top of stacklets, and it's probably not too + hard to cut two of these levels by adapting ``stackless.py`` to + use directly continulets) -To be able to achieve this result, we have to consider many objects that -are not normally pickleable in CPython. Here again, the `Stackless -Python`_ implementation has paved the way, and we follow the same -general design decisions: simple internal objects like bound method -objects and various kinds of iterators are supported; frame objects can -be fully pickled and unpickled -(by serializing a reference to the bytecode they are -running in addition to all the local variables). References to globals -and modules are pickled by name, similarly to references to functions -and classes in the traditional CPython ``pickle``. +* Coroutines (could be rewritten at app-level) -The "magic" part of this process is the implementation of the unpickling -of a chain of frames. The Python interpreter of PyPy uses -interpreter-level recursion to represent application-level calls. The -reason for this is that it tremendously simplifies the implementation of -the interpreter itself. Indeed, in Python, almost any operation can -potentially result in a non-tail-recursive call to another Python -function. This makes writing a non-recursive interpreter extremely -tedious; instead, we rely on lower-level transformations during the -translation process to control this recursion. This is the `Stackless -Transform`_, which is at the heart of PyPy's support for stackless-style -concurrency. +* Pickling and unpickling continulets (*) -At any point in time, a chain of Python-level frames corresponds to a -chain of interpreter-level frames (e.g. C frames in pypy-c), where each -single Python-level frame corresponds to one or a few interpreter-level -frames - depending on the length of the interpreter-level call chain -from one bytecode evaluation loop to the next (recursively invoked) one. +* Continuing execution of a continulet in a different thread (*) -This means that it is not sufficient to simply create a chain of Python -frame objects in the heap of a process before we can resume execution of -these newly built frames. We must recreate a corresponding chain of -interpreter-level frames. To this end, we have inserted a few *named -resume points* (see 3.2.4, in `D07.1 Massive Parallelism and Translation Aspects`_) in the Python interpreter of PyPy. This is the -motivation for implementing the interpreter-level primitives -``resume_state_create()`` and ``resume_state_invoke()``, the powerful -interface that allows an RPython program to artificially rebuild a chain -of calls in a reflective way, completely from scratch, and jump to it. +* Automatic unlimited stack (must be emulated__ so far) -.. _`D07.1 Massive Parallelism and Translation Aspects`: http://codespeak.net/pypy/extradoc/eu-report/D07.1_Massive_Parallelism_and_Translation_Aspects-2007-02-28.pdf +* Support for other CPUs than x86 and x86-64 -Example -~~~~~~~ +.. __: `recursion depth limit`_ -(See `demo/pickle_coroutine.py`_ for the complete source of this demo.) +(*) Pickling, as well as changing threads, could be implemented by using +a "soft" stack switching mode again. We would get either "hard" or +"soft" switches, similarly to Stackless Python 3rd version: you get a +"hard" switch (like now) when the C stack contains non-trivial C frames +to save, and a "soft" switch (like previously) when it contains only +simple calls from Python to Python. Soft-switched continulets would +also consume a bit less RAM, and the switch might be a bit faster too +(unsure about that; what is the Stackless Python experience?). -Consider a program which contains a part performing a long-running -computation:: - def ackermann(x, y): - if x == 0: - return y + 1 - if y == 0: - return ackermann(x - 1, 1) - return ackermann(x - 1, ackermann(x, y - 1)) +Recursion depth limit ++++++++++++++++++++++ -By using pickling, we can save the state of the computation while it is -running, for the purpose of restoring it later and continuing the -computation at another time or on a different machine. However, -pickling does not produce a whole-program dump: it can only pickle -individual coroutines. This means that the computation should be -started in its own coroutine:: +You can use continulets to emulate the infinite recursion depth present +in Stackless Python and in stackless-enabled older versions of PyPy. - # Make a coroutine that will run 'ackermann(3, 8)' - coro = coroutine() - coro.bind(ackermann, 3, 8) +The trick is to start a continulet "early", i.e. when the recursion +depth is very low, and switch to it "later", i.e. when the recursion +depth is high. Example:: - # Now start running the coroutine - result = coro.switch() + from _continuation import continulet -The coroutine itself must switch back to the main program when it needs -to be interrupted (we can only pickle suspended coroutines). Due to -current limitations this requires an explicit check in the -``ackermann()`` function:: + def invoke(_, callable, arg): + return callable(arg) - def ackermann(x, y): - if interrupt_flag: # test a global flag - main.switch() # and switch back to 'main' if it is set - if x == 0: - return y + 1 - if y == 0: - return ackermann(x - 1, 1) - return ackermann(x - 1, ackermann(x, y - 1)) + def bootstrap(c): + # this loop runs forever, at a very low recursion depth + callable, arg = c.switch() + while True: + # start a new continulet from here, and switch to + # it using an "exchange", i.e. a switch with to=. + to = continulet(invoke, callable, arg) + callable, arg = c.switch(to=to) -The global ``interrupt_flag`` would be set for example by a timeout, or -by a signal handler reacting to Ctrl-C, etc. It causes the coroutine to -transfer control back to the main program. The execution comes back -just after the line ``coro.switch()``, where we can pickle the coroutine -if necessary:: + c = continulet(bootstrap) + c.switch() - if not coro.is_alive: - print "finished; the result is:", result - else: - # save the state of the suspended coroutine - f = open('demo.pickle', 'w') - pickle.dump(coro, f) - f.close() -The process can then stop. At any later time, or on another machine, -we can reload the file and restart the coroutine with:: + def recursive(n): + if n == 0: + return ("ok", n) + if n % 200 == 0: + prev = c.switch((recursive, n - 1)) + else: + prev = recursive(n - 1) + return (prev[0], prev[1] + 1) - f = open('demo.pickle', 'r') - coro = pickle.load(f) - f.close() - result = coro.switch() + print recursive(999999) # prints ('ok', 999999) -Limitations -~~~~~~~~~~~ +Note that if you press Ctrl-C while running this example, the traceback +will be built with *all* recursive() calls so far, even if this is more +than the number that can possibly fit in the C stack. These frames are +"overlapping" each other in the sense of the C stack; more precisely, +they are copied out of and into the C stack as needed. -Coroutine pickling is subject to some limitations. First of all, it is -not a whole-program "memory dump". It means that only the "local" state -of a coroutine is saved. The local state is defined to include the -chain of calls and the local variables, but not for example the value of -any global variable. +(The example above also makes use of the following general "guideline" +to help newcomers write continulets: in ``bootstrap(c)``, only call +methods on ``c``, not on another continulet object. That's why we wrote +``c.switch(to=to)`` and not ``to.switch()``, which would mess up the +state. This is however just a guideline; in general we would recommend +to use other interfaces like genlets and greenlets.) -As in normal Python, the pickle will not include any function object's -code, any class definition, etc., but only references to functions and -classes. Unlike normal Python, the pickle contains frames. A pickled -frame stores a bytecode index, representing the current execution -position. This means that the user program cannot be modified *at all* -between pickling and unpickling! -On the other hand, the pickled data is fairly independent from the -platform and from the PyPy version. +Stacklets ++++++++++ -Pickling/unpickling fails if the coroutine is suspended in a state that -involves Python frames which were *indirectly* called. To define this -more precisely, a Python function can issue a regular function or method -call to invoke another Python function - this is a *direct* call and can -be pickled and unpickled. But there are many ways to invoke a Python -function indirectly. For example, most operators can invoke a special -method ``__xyz__()`` on a class, various built-in functions can call -back Python functions, signals can invoke signal handlers, and so on. -These cases are not supported yet. +Continulets are internally implemented using stacklets, which is the +generic RPython-level building block for "one-shot continuations". For +more information about them please see the documentation in the C source +at `pypy/translator/c/src/stacklet/stacklet.h`_. +The module ``pypy.rlib.rstacklet`` is a thin wrapper around the above +functions. The key point is that new() and switch() always return a +fresh stacklet handle (or an empty one), and switch() additionally +consumes one. It makes no sense to have code in which the returned +handle is ignored, or used more than once. Note that ``stacklet.c`` is +written assuming that the user knows that, and so no additional checking +occurs; this can easily lead to obscure crashes if you don't use a +wrapper like PyPy's '_continuation' module. -Composability -+++++++++++++ + +Theory of composability ++++++++++++++++++++++++ Although the concept of coroutines is far from new, they have not been generally integrated into mainstream languages, or only in limited form (like generators in Python and iterators in C#). We can argue that a possible reason for that is that they do not scale well when a program's complexity increases: they look attractive in small examples, but the -models that require explicit switching, by naming the target coroutine, -do not compose naturally. This means that a program that uses -coroutines for two unrelated purposes may run into conflicts caused by -unexpected interactions. +models that require explicit switching, for example by naming the target +coroutine, do not compose naturally. This means that a program that +uses coroutines for two unrelated purposes may run into conflicts caused +by unexpected interactions. To illustrate the problem, consider the following example (simplified -code; see the full source in -`pypy/module/_stackless/test/test_composable_coroutine.py`_). First, a -simple usage of coroutine:: +code using a theorical ``coroutine`` class). First, a simple usage of +coroutine:: main_coro = coroutine.getcurrent() # the main (outer) coroutine data = [] @@ -530,74 +396,35 @@ main coroutine, which confuses the ``generator_iterator.next()`` method (it gets resumed, but not as a result of a call to ``Yield()``). -As part of trying to combine multiple different paradigms into a single -application-level module, we have built a way to solve this problem. -The idea is to avoid the notion of a single, global "main" coroutine (or -a single main greenlet, or a single main tasklet). Instead, each -conceptually separated user of one of these concurrency interfaces can -create its own "view" on what the main coroutine/greenlet/tasklet is, -which other coroutine/greenlet/tasklets there are, and which of these is -the currently running one. Each "view" is orthogonal to the others. In -particular, each view has one (and exactly one) "current" -coroutine/greenlet/tasklet at any point in time. When the user switches -to a coroutine/greenlet/tasklet, it implicitly means that he wants to -switch away from the current coroutine/greenlet/tasklet *that belongs to -the same view as the target*. +Thus the notion of coroutine is *not composable*. By opposition, the +primitive notion of continulets is composable: if you build two +different interfaces on top of it, or have a program that uses twice the +same interface in two parts, then assuming that both parts independently +work, the composition of the two parts still works. -The precise application-level interface has not been fixed yet; so far, -"views" in the above sense are objects of the type -``stackless.usercostate``. The above two examples can be rewritten in -the following way:: +A full proof of that claim would require careful definitions, but let us +just claim that this fact is true because of the following observation: +the API of continulets is such that, when doing a ``switch()``, it +requires the program to have some continulet to explicitly operate on. +It shuffles the current continuation with the continuation stored in +that continulet, but has no effect outside. So if a part of a program +has a continulet object, and does not expose it as a global, then the +rest of the program cannot accidentally influence the continuation +stored in that continulet object. - producer_view = stackless.usercostate() # a local view - main_coro = producer_view.getcurrent() # the main (outer) coroutine - ... - producer_coro = producer_view.newcoroutine() - ... - -and:: - - generators_view = stackless.usercostate() - - def generator(f): - def wrappedfunc(*args, **kwds): - g = generators_view.newcoroutine(generator_iterator) - ... - - ...generators_view.getcurrent()... - -Then the composition ``grab_values()`` works as expected, because the -two views are independent. The coroutine captured as ``self.caller`` in -the ``generator_iterator.next()`` method is the main coroutine of the -``generators_view``. It is no longer the same object as the main -coroutine of the ``producer_view``, so when ``data_producer()`` issues -the following command:: - - main_coro.switch() - -the control flow cannot accidentally jump back to -``generator_iterator.next()``. In other words, from the point of view -of ``producer_view``, the function ``grab_next_value()`` always runs in -its main coroutine ``main_coro`` and the function ``data_producer`` in -its coroutine ``producer_coro``. This is the case independently of -which ``generators_view``-based coroutine is the current one when -``grab_next_value()`` is called. - -Only code that has explicit access to the ``producer_view`` or its -coroutine objects can perform switches that are relevant for the -generator code. If the view object and the coroutine objects that share -this view are all properly encapsulated inside the generator logic, no -external code can accidentally temper with the expected control flow any -longer. - -In conclusion: we will probably change the app-level interface of PyPy's -stackless module in the future to not expose coroutines and greenlets at -all, but only views. They are not much more difficult to use, and they -scale automatically to larger programs. +In other words, if we regard the continulet object as being essentially +a modifiable ``f_back``, then it is just a link between the frame of +``callable()`` and the parent frame --- and it cannot be arbitrarily +changed by unrelated code, as long as they don't explicitly manipulate +the continulet object. Typically, both the frame of ``callable()`` +(commonly a local function) and its parent frame (which is the frame +that switched to it) belong to the same class or module; so from that +point of view the continulet is a purely local link between two local +frames. It doesn't make sense to have a concept that allows this link +to be manipulated from outside. .. _`Stackless Python`: http://www.stackless.com .. _`documentation of the greenlets`: http://packages.python.org/greenlet/ -.. _`Stackless Transform`: translation.html#the-stackless-transform .. include:: _ref.txt diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -552,14 +552,15 @@ The stackless transform converts functions into a form that knows how to save the execution point and active variables into a heap structure -and resume execution at that point. This is used to implement +and resume execution at that point. This was used to implement coroutines as an RPython-level feature, which in turn are used to -implement `coroutines, greenlets and tasklets`_ as an application +implement coroutines, greenlets and tasklets as an application level feature for the Standard Interpreter. -Enable the stackless transformation with :config:`translation.stackless`. +The stackless transformation has been deprecated and is no longer +available in trunk. It has been replaced with continulets_. -.. _`coroutines, greenlets and tasklets`: stackless.html +.. _continulets: stackless.html .. _`preparing the graphs for source generation`: diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst --- a/pypy/doc/windows.rst +++ b/pypy/doc/windows.rst @@ -32,6 +32,24 @@ modules that relies on third-party libraries. See below how to get and build them. +Preping Windows for the Large Build +----------------------------------- + +Normally 32bit programs are limited to 2GB of memory on Windows. It is +possible to raise this limit, to 3GB on Windows 32bit, and almost 4GB +on Windows 64bit. + +On Windows 32bit, it is necessary to modify the system: follow +http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=9583842&linkID=9240617 +to enable the "3GB" feature, and reboot. This step is not necessary on +Windows 64bit. + +Then you need to execute:: + + editbin /largeaddressaware pypy.exe + +on the pypy.exe file you compiled. + Installing external packages ---------------------------- diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -125,6 +125,7 @@ ### Manipulation ### + @jit.look_inside_iff(lambda self: not self._dont_jit) def unpack(self): # slowish "Return a ([w1,w2...], {'kw':w3...}) pair." kwds_w = {} @@ -245,6 +246,8 @@ ### Parsing for function calls ### + # XXX: this should be @jit.look_inside_iff, but we need key word arguments, + # and it doesn't support them for now. def _match_signature(self, w_firstarg, scope_w, signature, defaults_w=None, blindargs=0): """Parse args and kwargs according to the signature of a code object, diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2541,8 +2541,9 @@ class ASTVisitor(object): def visit_sequence(self, seq): - for node in seq: - node.walkabout(self) + if seq is not None: + for node in seq: + node.walkabout(self) def default_visitor(self, node): raise NodeVisitorNotImplemented @@ -2673,46 +2674,36 @@ class GenericASTVisitor(ASTVisitor): def visit_Module(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Interactive(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Expression(self, node): node.body.walkabout(self) def visit_Suite(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_FunctionDef(self, node): node.args.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.decorator_list: - self.visit_sequence(node.decorator_list) + self.visit_sequence(node.body) + self.visit_sequence(node.decorator_list) def visit_ClassDef(self, node): - if node.bases: - self.visit_sequence(node.bases) - if node.body: - self.visit_sequence(node.body) - if node.decorator_list: - self.visit_sequence(node.decorator_list) + self.visit_sequence(node.bases) + self.visit_sequence(node.body) + self.visit_sequence(node.decorator_list) def visit_Return(self, node): if node.value: node.value.walkabout(self) def visit_Delete(self, node): - if node.targets: - self.visit_sequence(node.targets) + self.visit_sequence(node.targets) def visit_Assign(self, node): - if node.targets: - self.visit_sequence(node.targets) + self.visit_sequence(node.targets) node.value.walkabout(self) def visit_AugAssign(self, node): @@ -2722,37 +2713,29 @@ def visit_Print(self, node): if node.dest: node.dest.walkabout(self) - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.values) def visit_For(self, node): node.target.walkabout(self) node.iter.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_While(self, node): node.test.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_If(self, node): node.test.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_With(self, node): node.context_expr.walkabout(self) if node.optional_vars: node.optional_vars.walkabout(self) - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Raise(self, node): if node.type: @@ -2763,18 +2746,13 @@ node.tback.walkabout(self) def visit_TryExcept(self, node): - if node.body: - self.visit_sequence(node.body) - if node.handlers: - self.visit_sequence(node.handlers) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.handlers) + self.visit_sequence(node.orelse) def visit_TryFinally(self, node): - if node.body: - self.visit_sequence(node.body) - if node.finalbody: - self.visit_sequence(node.finalbody) + self.visit_sequence(node.body) + self.visit_sequence(node.finalbody) def visit_Assert(self, node): node.test.walkabout(self) @@ -2782,12 +2760,10 @@ node.msg.walkabout(self) def visit_Import(self, node): - if node.names: - self.visit_sequence(node.names) + self.visit_sequence(node.names) def visit_ImportFrom(self, node): - if node.names: - self.visit_sequence(node.names) + self.visit_sequence(node.names) def visit_Exec(self, node): node.body.walkabout(self) @@ -2812,8 +2788,7 @@ pass def visit_BoolOp(self, node): - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.values) def visit_BinOp(self, node): node.left.walkabout(self) @@ -2832,35 +2807,28 @@ node.orelse.walkabout(self) def visit_Dict(self, node): - if node.keys: - self.visit_sequence(node.keys) - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.keys) + self.visit_sequence(node.values) def visit_Set(self, node): From noreply at buildbot.pypy.org Thu Nov 3 16:27:58 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 3 Nov 2011 16:27:58 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: continuing win64 Message-ID: <20111103152758.2A999820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48703:5179e1483dfb Date: 2011-11-03 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/5179e1483dfb/ Log: continuing win64 From noreply at buildbot.pypy.org Thu Nov 3 16:41:32 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 16:41:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: kill a function that was only used by one test. a bit pep-8ify, not too Message-ID: <20111103154132.A18C3820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48704:fa14e6831e42 Date: 2011-11-03 16:41 +0100 http://bitbucket.org/pypy/pypy/changeset/fa14e6831e42/ Log: kill a function that was only used by one test. a bit pep-8ify, not too much though diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -7,7 +7,6 @@ from pypy.rpython.lltypesystem import lltype from pypy.tool.sourcetools import func_with_new_name - numpy_driver = jit.JitDriver(greens = ['signature'], reds = ['result_size', 'i', 'self', 'result']) all_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', @@ -209,25 +208,6 @@ assert isinstance(w_res, BaseArray) return w_res.descr_sum(space) - def _getnums(self, comma): - dtype = self.find_dtype() - if self.find_size() > 1000: - nums = [ - dtype.str_format(self.eval(index)) - for index in range(3) - ] - nums.append("..." + "," * comma) - nums.extend([ - dtype.str_format(self.eval(index)) - for index in range(self.find_size() - 3, self.find_size()) - ]) - else: - nums = [ - dtype.str_format(self.eval(index)) - for index in range(self.find_size()) - ] - return nums - def get_concrete(self): raise NotImplementedError @@ -269,7 +249,8 @@ # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. concrete = self.get_concrete() - return space.wrap(NDimSlice(concrete, self.signature, [], self.shape).tostr(False)) + r = NDimSlice(concrete, self.signature, [], self.shape).tostr(False) + return space.wrap(r) def _index_of_single_item(self, space, w_idx): # we assume C ordering for now @@ -641,7 +622,6 @@ @jit.unroll_safe def calc_index(self, item): index = [] - __item = item _item = item for i in range(len(self.shape) -1, 0, -1): s = self.shape[i] @@ -671,18 +651,21 @@ item += index[i] i += 1 return item - def tostr(self, comma,indent=' '): + + def tostr(self, comma, indent=' '): ret = '' dtype = self.find_dtype() ndims = len(self.shape)#-self.shape_reduction for s in self.shape: - if s==0: + if s == 0: ret += '[]' return ret - if ndims>2: + if ndims > 2: ret += '[' for i in range(self.shape[0]): - ret += NDimSlice(self.parent, self.signature, [(i,0,0,1)], self.shape[1:]).tostr(comma,indent=indent+' ') + chunks = [(i, 0, 0, 1)] + ret += NDimSlice(self.parent, self.signature, chunks, + self.shape[1:]).tostr(comma,indent=indent + ' ') if i+1 3 """ interp = self.run(code) - assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + assert interp.results[-1].value.val == 9 def test_array_getitem(self): code = """ From noreply at buildbot.pypy.org Thu Nov 3 17:48:14 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 3 Nov 2011 17:48:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Message-ID: <20111103164814.DFD09820B3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48705:a82d9737ffe8 Date: 2011-11-03 17:37 +0100 http://bitbucket.org/pypy/pypy/changeset/a82d9737ffe8/ Log: (bivab, hager): Fixed enoying error which occurred at calls because of the backchain. diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/ppcgen/arch.py @@ -16,3 +16,5 @@ GPR_SAVE_AREA = len(NONVOLATILES) * WORD MAX_REG_PARAMS = 8 + +BACKCHAIN_SIZE = 2 * WORD diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -9,9 +9,9 @@ from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, - GPR_SAVE_AREA) + GPR_SAVE_AREA, BACKCHAIN_SIZE) from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, - encode32, decode32) + encode32, decode32, decode32_test) import pypy.jit.backend.ppc.ppcgen.register as r import pypy.jit.backend.ppc.ppcgen.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, LoopToken, @@ -156,12 +156,12 @@ def setup_failure_recovery(self): @rgc.no_collect - def failure_recovery_func(mem_loc, frame_pointer, stack_pointer): + def failure_recovery_func(mem_loc, stack_pointer, spilling_pointer): """mem_loc is a structure in memory describing where the values for the failargs are stored. frame loc is the address of the frame pointer for the frame to be decoded frame """ - return self.decode_registers_and_descr(mem_loc, frame_pointer, stack_pointer) + return self.decode_registers_and_descr(mem_loc, stack_pointer, spilling_pointer) self.failure_recovery_func = failure_recovery_func @@ -177,11 +177,13 @@ ''' enc = rffi.cast(rffi.CCHARP, mem_loc) managed_size = WORD * len(r.MANAGED_REGS) + # XXX do some sanity considerations spilling_depth = spp_loc - stack_loc + managed_size spilling_area = rffi.cast(rffi.CCHARP, stack_loc + managed_size) assert spilling_depth >= 0 + assert spp_loc > stack_loc - regs = rffi.cast(rffi.CCHARP, stack_loc) + regs = rffi.cast(rffi.CCHARP, stack_loc + BACKCHAIN_SIZE) i = -1 fail_index = -1 while(True): @@ -226,8 +228,8 @@ self.fail_boxes_float.setitem(fail_index, value) continue else: - value = decode32(regs, reg*WORD - 2 * WORD) - + value = decode32(regs, (reg - 3) * WORD) + if group == self.INT_TYPE: self.fail_boxes_int.setitem(fail_index, value) elif group == self.REF_TYPE: @@ -268,8 +270,7 @@ j += 4 else: # REG_LOC #loc = r.all_regs[ord(res)] - #import pdb; pdb.set_trace() - loc = r.MANAGED_REGS[ord(res) - 2] + loc = r.MANAGED_REGS[ord(res) - 3] j += 1 locs.append(loc) return locs @@ -293,7 +294,10 @@ # self._save_managed_regs(mc) # adjust SP (r1) - size = WORD * len(r.MANAGED_REGS) + size = WORD * (len(r.MANAGED_REGS)) + BACKCHAIN_SIZE + # XXX do quadword alignment + #while size % (4 * WORD) != 0: + # size += WORD mc.addi(r.SP.value, r.SP.value, -size) # decode_func_addr = llhelper(self.recovery_func_sign, @@ -301,8 +305,7 @@ addr = rffi.cast(lltype.Signed, decode_func_addr) # # load parameters into parameter registers - mc.lwz(r.r3.value, r.SPP.value, 0) - #mc.mr(r.r3.value, r.r0.value) # address of state encoding + mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding mc.mr(r.r4.value, r.SP.value) # load stack pointer mc.mr(r.r5.value, r.SPP.value) # load spilling pointer # diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -207,7 +207,7 @@ box = TempInt() loc = self.force_allocate_reg(box, forbidden_vars=forbidden_vars) imm = self.rm.convert_to_imm(thing) - self.assembler.load_imm(loc.value, imm.value) + self.assembler.mc.load_imm(loc, imm.value) else: loc = self.make_sure_var_in_reg(thing, forbidden_vars=forbidden_vars) diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/ppcgen/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/ppcgen/register.py @@ -14,7 +14,7 @@ SP = r1 RES = r3 -MANAGED_REGS = [r2, r3, r4, r5, r6, r7, r8, r9, r10, +MANAGED_REGS = [r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13, r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30] From noreply at buildbot.pypy.org Thu Nov 3 17:48:16 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 3 Nov 2011 17:48:16 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20111103164816.1D727820B3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48706:0dbe1538b91a Date: 2011-11-03 17:47 +0100 http://bitbucket.org/pypy/pypy/changeset/0dbe1538b91a/ Log: merge diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -332,7 +332,10 @@ if scale.value > 0: scale_loc = r.r0 self.mc.load_imm(r.r0, scale.value) - self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + if IS_PPC_32: + self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + else: + self.mc.sld(r.r0.value, ofs_loc.value, r.r0.value) else: scale_loc = ofs_loc @@ -356,7 +359,10 @@ if scale.value > 0: scale_loc = r.r0 self.mc.load_imm(r.r0, scale.value) - self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + else: + self.mc.sld(r.r0.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc if ofs.value > 0: @@ -416,7 +422,10 @@ def emit_unicodegetitem(self, op, arglocs, regalloc): res, base_loc, ofs_loc, scale, basesize, itemsize = arglocs - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + else: + self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) self.mc.add(res.value, base_loc.value, ofs_loc.value) if scale.value == 2: @@ -430,7 +439,10 @@ def emit_unicodesetitem(self, op, arglocs, regalloc): value_loc, base_loc, ofs_loc, scale, basesize, itemsize = arglocs - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + if IS_PPC_32: + self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + else: + self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) self.mc.add(base_loc.value, base_loc.value, ofs_loc.value) if scale.value == 2: @@ -503,7 +515,17 @@ remap_frame_layout(self, non_float_locs, non_float_regs, r.r0) #the actual call - self.mc.bl_abs(adr) + if IS_PPC_32: + self.mc.bl_abs(adr) + else: + self.mc.std(r.r2.value, r.SP.value, 40) + self.mc.load_from_addr(r.r0, adr) + self.mc.load_from_addr(r.r2, adr+WORD) + self.mc.load_from_addr(r.r11, adr+2*WORD) + self.mc.mtctr(r.r0.value) + self.mc.bctrl() + self.mc.ld(r.r2.value, r.SP.value, 40) + self.mark_gc_roots(force_index) regalloc.possibly_free_vars(args) # readjust the sp in case we passed some args on the stack diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -11,7 +11,7 @@ from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, GPR_SAVE_AREA, BACKCHAIN_SIZE) from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, - encode32, decode32, decode32_test) + encode32, decode32) import pypy.jit.backend.ppc.ppcgen.register as r import pypy.jit.backend.ppc.ppcgen.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, LoopToken, From noreply at buildbot.pypy.org Thu Nov 3 18:19:31 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 18:19:31 +0100 (CET) Subject: [pypy-commit] pypy stm: In-progress Message-ID: <20111103171931.11E15820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48707:497d967a02c3 Date: 2011-11-03 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/497d967a02c3/ Log: In-progress diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -283,6 +283,12 @@ struct tx_descriptor *d = thread_descriptor; assert(!is_inevitable(d)); d->num_aborts[reason]++; +#ifdef RPY_STM_DEBUG_PRINT + PYPY_DEBUG_START("stm-abort"); + if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "thread %lx aborting\n", + (long)pthread_self()); + PYPY_DEBUG_STOP("stm-abort"); +#endif tx_restart(d); } @@ -363,9 +369,9 @@ { unsigned long pself = (unsigned long)pthread_self(); locked_by = 0; - pthread_mutex_unlock(&mutex_inevitable); if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev unlocked\n", pself); + pthread_mutex_unlock(&mutex_inevitable); } # else # define mutex_lock() pthread_mutex_lock(&mutex_inevitable) @@ -605,7 +611,7 @@ #ifdef RPY_STM_DEBUG_PRINT if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "thread %lx starting\n", - d->my_lock_word); + (long)pthread_self()); PYPY_DEBUG_STOP("stm-init"); #endif } @@ -633,7 +639,7 @@ num_spinloops += d->num_spinloops[i]; p += sprintf(p, "thread %lx: %d commits, %d aborts\n", - d->my_lock_word, + (long)pthread_self(), d->num_commits, num_aborts); diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -19,6 +19,26 @@ newnode = Node(value) node.next = newnode +def check_chained_list(node): + seen = [0] * (LENGTH+1) + seen[-1] = NUM_THREADS + while node is not None: + value = node.value + print value + if not (0 <= value < LENGTH): + print "node.value out of bounds:", value + raise AssertionError + seen[value] += 1 + if seen[value] > seen[value-1]: + print "seen[%d] = %d, seen[%d] = %d" % (value-1, seen[value-1], + value, seen[value]) + raise AssertionError + node = node.next + if seen[LENGTH-1] != NUM_THREADS: + print "seen[LENGTH-1] != NUM_THREADS" + raise AssertionError + print "check ok!" + class Global: anchor = Node(-1) @@ -63,6 +83,7 @@ while glob.done < NUM_THREADS: # poor man's lock time.sleep(1) print "done sleeping." + check_chained_list(glob.anchor.next) return 0 # _____ Define and setup target ___ From noreply at buildbot.pypy.org Thu Nov 3 18:21:09 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 18:21:09 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: introduce targets that can be placed somewhere in a trace that can be used jump targets Message-ID: <20111103172109.7E243820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48708:505538a47fdb Date: 2011-11-03 18:20 +0100 http://bitbucket.org/pypy/pypy/changeset/505538a47fdb/ Log: introduce targets that can be placed somewhere in a trace that can be used jump targets diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -339,12 +339,16 @@ assert isinstance(type, str) and len(type) == 1 op.args.append(Descr(ofs, type, arg_types=arg_types)) -def compile_add_loop_token(loop, descr): +def compile_add_loop_token(loop, descr, clt): if we_are_translated(): raise ValueError("CALL_ASSEMBLER not supported") loop = _from_opaque(loop) op = loop.operations[-1] op.descr = weakref.ref(descr) + if op.opnum == rop.TARGET: + descr.compiled_loop_token = clt + descr.target_opindex = len(loop.operations) + descr.target_arguments = op.args def compile_add_var(loop, intvar): loop = _from_opaque(loop) @@ -380,13 +384,17 @@ _variables.append(v) return r -def compile_add_jump_target(loop, loop_target): +def compile_add_jump_target(loop, loop_target, target_opindex, target_inputargs): loop = _from_opaque(loop) loop_target = _from_opaque(loop_target) + if not target_inputargs: + target_inputargs = loop_target.inputargs op = loop.operations[-1] op.jump_target = loop_target + op.jump_target_opindex = target_opindex + op.jump_target_inputargs = target_inputargs assert op.opnum == rop.JUMP - assert len(op.args) == len(loop_target.inputargs) + assert len(op.args) == len(target_inputargs) if loop_target == loop: log.info("compiling new loop") else: @@ -520,10 +528,11 @@ self.opindex += 1 continue if op.opnum == rop.JUMP: - assert len(op.jump_target.inputargs) == len(args) - self.env = dict(zip(op.jump_target.inputargs, args)) + inputargs = op.jump_target_inputargs + assert len(inputargs) == len(args) + self.env = dict(zip(inputargs, args)) self.loop = op.jump_target - self.opindex = 0 + self.opindex = op.jump_target_opindex _stats.exec_jumps += 1 elif op.opnum == rop.FINISH: if self.verbose: @@ -616,6 +625,9 @@ # return _op_default_implementation + def op_target(self, _, *args): + pass + def op_debug_merge_point(self, _, *args): from pypy.jit.metainterp.warmspot import get_stats try: diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -136,7 +136,7 @@ clt = original_loop_token.compiled_loop_token clt.loop_and_bridges.append(c) clt.compiling_a_bridge() - self._compile_loop_or_bridge(c, inputargs, operations) + self._compile_loop_or_bridge(c, inputargs, operations, clt) old, oldindex = faildescr._compiled_fail llimpl.compile_redirect_fail(old, oldindex, c) @@ -151,14 +151,16 @@ clt.loop_and_bridges = [c] clt.compiled_version = c looptoken.compiled_loop_token = clt - self._compile_loop_or_bridge(c, inputargs, operations) + looptoken.target_opindex = 0 + looptoken.target_arguments = None + self._compile_loop_or_bridge(c, inputargs, operations, clt) def free_loop_and_bridges(self, compiled_loop_token): for c in compiled_loop_token.loop_and_bridges: llimpl.mark_as_free(c) model.AbstractCPU.free_loop_and_bridges(self, compiled_loop_token) - def _compile_loop_or_bridge(self, c, inputargs, operations): + def _compile_loop_or_bridge(self, c, inputargs, operations, clt): var2index = {} for box in inputargs: if isinstance(box, history.BoxInt): @@ -170,19 +172,19 @@ var2index[box] = llimpl.compile_start_float_var(c) else: raise Exception("box is: %r" % (box,)) - self._compile_operations(c, operations, var2index) + self._compile_operations(c, operations, var2index, clt) return c - def _compile_operations(self, c, operations, var2index): + def _compile_operations(self, c, operations, var2index, clt): for op in operations: llimpl.compile_add(c, op.getopnum()) descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, descr.arg_types) - if (isinstance(descr, history.LoopToken) and - op.getopnum() != rop.JUMP): - llimpl.compile_add_loop_token(c, descr) + if isinstance(descr, history.LoopToken): + if op.getopnum() != rop.JUMP: + llimpl.compile_add_loop_token(c, descr, clt) if self.is_oo and isinstance(descr, (OODescr, MethDescr)): # hack hack, not rpython c._obj.externalobj.operations[-1].setdescr(descr) @@ -238,7 +240,8 @@ targettoken = op.getdescr() assert isinstance(targettoken, history.LoopToken) compiled_version = targettoken.compiled_loop_token.compiled_version - llimpl.compile_add_jump_target(c, compiled_version) + opindex = targettoken.target_opindex + llimpl.compile_add_jump_target(c, compiled_version, opindex, targettoken.target_arguments) elif op.getopnum() == rop.FINISH: faildescr = op.getdescr() index = self.get_fail_descr_number(faildescr) diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2965,7 +2965,48 @@ fail = self.cpu.execute_token(looptoken) assert fail.identifier == excdescr.identifier + def test_compile_loop_with_target(self): + i0 = BoxInt() + i1 = BoxInt() + i2 = BoxInt() + i3 = BoxInt() + looptoken = LoopToken() + targettoken = LoopToken() + faildescr = BasicFailDescr(2) + operations = [ + ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), + ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), + ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr), + ResOperation(rop.TARGET, [i1], None, descr=targettoken), + ResOperation(rop.INT_GE, [i1, ConstInt(0)], i3), + ResOperation(rop.GUARD_TRUE, [i3], None, descr=BasicFailDescr(3)), + ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ] + inputargs = [i0] + operations[2].setfailargs([i1]) + operations[5].setfailargs([i1]) + self.cpu.compile_loop(inputargs, operations, looptoken) + self.cpu.set_future_value_int(0, 2) + fail = self.cpu.execute_token(looptoken) + assert fail.identifier == 2 + res = self.cpu.get_latest_value_int(0) + assert res == 10 + + inputargs = [i0] + operations = [ + ResOperation(rop.INT_SUB, [i0, ConstInt(20)], i2), + ResOperation(rop.JUMP, [i2], None, descr=targettoken), + ] + self.cpu.compile_bridge(faildescr, inputargs, operations, looptoken) + + self.cpu.set_future_value_int(0, 2) + fail = self.cpu.execute_token(looptoken) + assert fail.identifier == 3 + res = self.cpu.get_latest_value_int(0) + assert res == -10 + + class OOtypeBackendTest(BaseBackendTest): type_system = 'ootype' diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -342,6 +342,7 @@ rop.SETARRAYITEM_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, + rop.TARGET, ): # list of opcodes never executed by pyjitpl continue raise AssertionError("missing %r" % (key,)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -366,6 +366,8 @@ 'FINISH/*d', '_FINAL_LAST', + 'TARGET/*d', + '_GUARD_FIRST', '_GUARD_FOLDABLE_FIRST', 'GUARD_TRUE/1d', From noreply at buildbot.pypy.org Thu Nov 3 19:15:39 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 19:15:39 +0100 (CET) Subject: [pypy-commit] pypy stm: Test and fix. Message-ID: <20111103181539.E7343820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48709:a45743e6ee4e Date: 2011-11-03 18:54 +0100 http://bitbucket.org/pypy/pypy/changeset/a45743e6ee4e/ Log: Test and fix. diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -102,7 +102,15 @@ def stm_transaction_boundary(funcgen, op): assert funcgen.exception_policy == 'stm' - return 'STM_TRANSACTION_BOUNDARY();' + lines = ['STM_TRANSACTION_BOUNDARY();'] + TMPVAR = 'ty_%s' + for v in op.args: + tmpname = TMPVAR % v.name + cdeclname = cdecl(funcgen.lltypename(v), 'volatile ' + tmpname) + realname = funcgen.expr(v) + lines.insert(0, '%s = %s;' % (cdeclname, realname)) + lines.append('%s = %s;' % (realname, tmpname)) + return '{\n\t' + '\n\t'.join(lines) + '\n}' def stm_try_inevitable(funcgen, op): info = op.args[0].value diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -175,3 +175,21 @@ return 0 t, cbuilder = self.compile(simplefunc) cbuilder.cmdexec('') + + def test_transaction_boundary_3(self): + def simplefunc(argv): + s1 = argv[0] + debug_print('STEP1:', len(s1)) + rstm.transaction_boundary() + rstm.transaction_boundary() + rstm.transaction_boundary() + debug_print('STEP2:', len(s1)) + return 0 + t, cbuilder = self.compile(simplefunc) + data, err = cbuilder.cmdexec('', err=True) + lines = err.splitlines() + steps = [(line[:6], line[6:]) + for line in lines if line.startswith('STEP')] + steps = zip(*steps) + assert steps[0] == ('STEP1:', 'STEP2:') + assert steps[1][0] == steps[1][1] diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -1,4 +1,4 @@ -from pypy.objspace.flow.model import SpaceOperation, Constant +from pypy.objspace.flow.model import SpaceOperation, Constant, Variable from pypy.objspace.flow.model import Block, Link, checkgraph from pypy.annotation import model as annmodel from pypy.translator.stm import _rffi_stm @@ -41,7 +41,9 @@ if block.operations == (): return newoperations = [] - for op in block.operations: + self.current_block = block + for i, op in enumerate(block.operations): + self.current_op_index = i try: meth = getattr(self, 'stt_' + op.opname) except AttributeError: @@ -60,6 +62,7 @@ else: assert res is None block.operations = newoperations + self.current_block = None def transform_graph(self, graph): for block in graph.iterblocks(): @@ -128,7 +131,20 @@ def stt_stm_transaction_boundary(self, newoperations, op): self.seen_transaction_boundary = True - return True + v_result = op.result + # record in op.args the list of variables that are alive across + # this call + block = self.current_block + vars = set() + for op in block.operations[:self.current_op_index:-1]: + vars.discard(op.result) + vars.update(op.args) + for link in block.exits: + vars.update(link.args) + vars.update(link.getextravars()) + livevars = [v for v in vars if isinstance(v, Variable)] + newop = SpaceOperation('stm_transaction_boundary', livevars, v_result) + newoperations.append(newop) def stt_malloc(self, newoperations, op): flags = op.args[1].value From noreply at buildbot.pypy.org Thu Nov 3 19:15:41 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 19:15:41 +0100 (CET) Subject: [pypy-commit] pypy stm: Yay! targetdemo is fixed and seems to be working. Message-ID: <20111103181541.2024B820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48710:b27ec3dc59d2 Date: 2011-11-03 19:15 +0100 http://bitbucket.org/pypy/pypy/changeset/b27ec3dc59d2/ Log: Yay! targetdemo is fixed and seems to be working. Added a test for it. diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -102,15 +102,29 @@ def stm_transaction_boundary(funcgen, op): assert funcgen.exception_policy == 'stm' - lines = ['STM_TRANSACTION_BOUNDARY();'] - TMPVAR = 'ty_%s' + # make code looking like this: + # + # stm_commit_transaction(); + # { + # volatile long tmp_123 = l_123; + # setjmp(jmpbuf); + # l_123 = tmp_123; + # } + # stm_begin_transaction(&jmpbuf); + # + lines = ['\tsetjmp(jmpbuf);'] + TMPVAR = 'tmp_%s' for v in op.args: tmpname = TMPVAR % v.name cdeclname = cdecl(funcgen.lltypename(v), 'volatile ' + tmpname) realname = funcgen.expr(v) - lines.insert(0, '%s = %s;' % (cdeclname, realname)) - lines.append('%s = %s;' % (realname, tmpname)) - return '{\n\t' + '\n\t'.join(lines) + '\n}' + lines.insert(0, '\t%s = %s;' % (cdeclname, realname)) + lines.append('\t%s = %s;' % (realname, tmpname)) + lines.insert(0, '{') + lines.insert(0, 'stm_commit_transaction();') + lines.append('}') + lines.append('stm_begin_transaction(&jmpbuf);') + return '\n'.join(lines) def stm_try_inevitable(funcgen, op): info = op.args[0].value diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -47,11 +47,6 @@ #define STM_DECLARE_VARIABLE() ; jmp_buf jmpbuf #define STM_MAKE_INEVITABLE() stm_try_inevitable_if(&jmpbuf \ STM_EXPLAIN("return")) -#define STM_TRANSACTION_BOUNDARY() \ - stm_commit_transaction(); \ - setjmp(jmpbuf); \ - stm_begin_transaction(&jmpbuf); - // XXX little-endian only! #define STM_read_partial_word(T, base, offset) \ diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -108,12 +108,13 @@ # ____________________________________________________________ -class TestTransformSingleThread(StandaloneTests): +class CompiledSTMTests(StandaloneTests): def compile(self, entry_point): from pypy.config.pypyoption import get_pypy_config self.config = get_pypy_config(translating=True) self.config.translation.stm = True + self.config.translation.gc = "none" # # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown # operation". We want instead it to crash. @@ -125,6 +126,9 @@ del RaiseAnalyzer.fail_on_unknown_operation return res + +class TestTransformSingleThread(CompiledSTMTests): + def test_no_pointer_operations(self): def simplefunc(argv): i = 0 diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -0,0 +1,11 @@ +from pypy.translator.stm.test.test_transform import CompiledSTMTests +from pypy.translator.stm.test import targetdemo + + +class TestSTMTranslated(CompiledSTMTests): + + def test_hello_world(self): + t, cbuilder = self.compile(targetdemo.entry_point) + data = cbuilder.cmdexec('') + assert 'done sleeping.' in data + assert 'check ok!' in data From noreply at buildbot.pypy.org Thu Nov 3 19:19:08 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 19:19:08 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: use TargetToken to refere to a target Message-ID: <20111103181908.A4D79820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48711:1860421891fe Date: 2011-11-03 19:18 +0100 http://bitbucket.org/pypy/pypy/changeset/1860421891fe/ Log: use TargetToken to refere to a target diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -8,6 +8,7 @@ from pypy.objspace.flow.model import Variable, Constant from pypy.annotation import model as annmodel from pypy.jit.metainterp.history import REF, INT, FLOAT +from pypy.jit.metainterp import history from pypy.jit.codewriter import heaptracker from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr, rffi from pypy.rpython.ootypesystem import ootype @@ -339,16 +340,20 @@ assert isinstance(type, str) and len(type) == 1 op.args.append(Descr(ofs, type, arg_types=arg_types)) -def compile_add_loop_token(loop, descr, clt): +def compile_add_loop_token(loop, descr): if we_are_translated(): raise ValueError("CALL_ASSEMBLER not supported") loop = _from_opaque(loop) op = loop.operations[-1] op.descr = weakref.ref(descr) - if op.opnum == rop.TARGET: - descr.compiled_loop_token = clt - descr.target_opindex = len(loop.operations) - descr.target_arguments = op.args + +def compile_add_target_token(loop, descr): + compiled_version = loop + loop = _from_opaque(loop) + op = loop.operations[-1] + descr.compiled_version = compiled_version + descr.target_opindex = len(loop.operations) + descr.target_arguments = op.args def compile_add_var(loop, intvar): loop = _from_opaque(loop) @@ -384,11 +389,19 @@ _variables.append(v) return r -def compile_add_jump_target(loop, loop_target, target_opindex, target_inputargs): +def compile_add_jump_target(loop, targettoken): loop = _from_opaque(loop) - loop_target = _from_opaque(loop_target) - if not target_inputargs: + if isinstance(targettoken, history.LoopToken): + loop_target = _from_opaque(targettoken.compiled_loop_token.compiled_version) + target_opindex = 0 target_inputargs = loop_target.inputargs + elif isinstance(targettoken, history.TargetToken): + loop_target = _from_opaque(targettoken.compiled_version) + target_opindex = targettoken.target_opindex + target_inputargs = targettoken.target_arguments + else: + assert False + op = loop.operations[-1] op.jump_target = loop_target op.jump_target_opindex = target_opindex diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -136,7 +136,7 @@ clt = original_loop_token.compiled_loop_token clt.loop_and_bridges.append(c) clt.compiling_a_bridge() - self._compile_loop_or_bridge(c, inputargs, operations, clt) + self._compile_loop_or_bridge(c, inputargs, operations) old, oldindex = faildescr._compiled_fail llimpl.compile_redirect_fail(old, oldindex, c) @@ -151,16 +151,14 @@ clt.loop_and_bridges = [c] clt.compiled_version = c looptoken.compiled_loop_token = clt - looptoken.target_opindex = 0 - looptoken.target_arguments = None - self._compile_loop_or_bridge(c, inputargs, operations, clt) + self._compile_loop_or_bridge(c, inputargs, operations) def free_loop_and_bridges(self, compiled_loop_token): for c in compiled_loop_token.loop_and_bridges: llimpl.mark_as_free(c) model.AbstractCPU.free_loop_and_bridges(self, compiled_loop_token) - def _compile_loop_or_bridge(self, c, inputargs, operations, clt): + def _compile_loop_or_bridge(self, c, inputargs, operations): var2index = {} for box in inputargs: if isinstance(box, history.BoxInt): @@ -172,10 +170,10 @@ var2index[box] = llimpl.compile_start_float_var(c) else: raise Exception("box is: %r" % (box,)) - self._compile_operations(c, operations, var2index, clt) + self._compile_operations(c, operations, var2index) return c - def _compile_operations(self, c, operations, var2index, clt): + def _compile_operations(self, c, operations, var2index): for op in operations: llimpl.compile_add(c, op.getopnum()) descr = op.getdescr() @@ -184,7 +182,9 @@ descr.arg_types) if isinstance(descr, history.LoopToken): if op.getopnum() != rop.JUMP: - llimpl.compile_add_loop_token(c, descr, clt) + llimpl.compile_add_loop_token(c, descr) + if isinstance(descr, history.TargetToken) and op.getopnum() == rop.TARGET: + llimpl.compile_add_target_token(c, descr) if self.is_oo and isinstance(descr, (OODescr, MethDescr)): # hack hack, not rpython c._obj.externalobj.operations[-1].setdescr(descr) @@ -238,10 +238,7 @@ assert op.is_final() if op.getopnum() == rop.JUMP: targettoken = op.getdescr() - assert isinstance(targettoken, history.LoopToken) - compiled_version = targettoken.compiled_loop_token.compiled_version - opindex = targettoken.target_opindex - llimpl.compile_add_jump_target(c, compiled_version, opindex, targettoken.target_arguments) + llimpl.compile_add_jump_target(c, targettoken) elif op.getopnum() == rop.FINISH: faildescr = op.getdescr() index = self.get_fail_descr_number(faildescr) diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3,7 +3,7 @@ AbstractDescr, BasicFailDescr, BoxInt, Box, BoxPtr, - LoopToken, + LoopToken, TargetToken, ConstInt, ConstPtr, BoxObj, ConstObj, BoxFloat, ConstFloat) @@ -2971,7 +2971,7 @@ i2 = BoxInt() i3 = BoxInt() looptoken = LoopToken() - targettoken = LoopToken() + targettoken = TargetToken() faildescr = BasicFailDescr(2) operations = [ ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -765,6 +765,9 @@ def dump(self): self.compiled_loop_token.cpu.dump_loop_token(self) +class TargetToken(AbstractDescr): + pass + class TreeLoop(object): inputargs = None operations = None From noreply at buildbot.pypy.org Thu Nov 3 19:50:59 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 19:50:59 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: make compile use a real tokenizer - breaks test_zjit for now Message-ID: <20111103185059.E3FE2820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48712:6d64103f1147 Date: 2011-11-03 19:28 +0100 http://bitbucket.org/pypy/pypy/changeset/6d64103f1147/ Log: make compile use a real tokenizer - breaks test_zjit for now diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -23,6 +23,12 @@ class WrongFunctionName(Exception): pass +class TokenizerError(Exception): + pass + +class BadToken(Exception): + pass + SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] class FakeSpace(object): @@ -192,7 +198,7 @@ interp.variables[self.name] = self.expr.execute(interp) def __repr__(self): - return "%% = %r" % (self.name, self.expr) + return "%r = %r" % (self.name, self.expr) class ArrayAssignment(Node): def __init__(self, name, index, expr): @@ -214,7 +220,7 @@ class Variable(Node): def __init__(self, name): - self.name = name + self.name = name.strip() def execute(self, interp): return interp.variables[self.name] @@ -332,7 +338,7 @@ class FunctionCall(Node): def __init__(self, name, args): - self.name = name + self.name = name.strip() self.args = args def __repr__(self): @@ -375,118 +381,174 @@ else: raise WrongFunctionName +import re + +_REGEXES = [ + ('-?[\d]+', 'number'), + ('\[', 'array_left'), + (':', 'colon'), + ('\w+', 'identifier'), + ('\]', 'array_right'), + ('(->)|[\+\-\*\/]', 'operator'), + ('=', 'assign'), + (',', 'coma'), + ('\|', 'pipe'), + ('\(', 'paren_left'), + ('\)', 'paren_right'), +] +REGEXES = [] + +for r, name in _REGEXES: + REGEXES.append((re.compile(' *(' + r + ')'), name)) +del _REGEXES + +class Token(object): + def __init__(self, name, v): + self.name = name + self.v = v + + def __repr__(self): + return '(%s, %s)' % (self.name, self.v) + +empty = Token('', '') + +class TokenStack(object): + def __init__(self, tokens): + self.tokens = tokens + self.c = 0 + + def pop(self): + token = self.tokens[self.c] + self.c += 1 + return token + + def get(self, i): + if self.c + i >= len(self.tokens): + return empty + return self.tokens[self.c + i] + + def remaining(self): + return len(self.tokens) - self.c + + def push(self): + self.c -= 1 + + def __repr__(self): + return repr(self.tokens[self.c:]) + class Parser(object): - def parse_identifier(self, id): - id = id.strip(" ") - #assert id.isalpha() - return Variable(id) + def tokenize(self, line): + tokens = [] + while True: + for r, name in REGEXES: + m = r.match(line) + if m is not None: + g = m.group(0) + tokens.append(Token(name, g)) + line = line[len(g):] + if not line: + return TokenStack(tokens) + break + else: + raise TokenizerError(line) - def parse_expression(self, expr): - tokens = [i for i in expr.split(" ") if i] - if len(tokens) == 1: - return self.parse_constant_or_identifier(tokens[0]) + def parse_number_or_slice(self, tokens): + start_tok = tokens.pop() + if start_tok.name == 'colon': + start = 0 + else: + start = int(start_tok.v) + if tokens.get(0).name != 'colon': + return FloatConstant(start) + tokens.pop() + if not tokens.get(0).name in ['colon', 'number']: + stop = -1 + step = 1 + else: + next = tokens.pop() + if next.name == 'colon': + stop = -1 + step = int(tokens.pop().v) + else: + stop = int(next.v) + if tokens.get(0).name == 'colon': + tokens.pop() + step = int(tokens.pop().v) + else: + step = 1 + return SliceConstant(start, stop, step) + + + def parse_expression(self, tokens): stack = [] - tokens.reverse() - while tokens: + while tokens.remaining(): token = tokens.pop() - if token == ')': - raise NotImplementedError - elif self.is_identifier_or_const(token): - if stack: - name = stack.pop().name - lhs = stack.pop() - rhs = self.parse_constant_or_identifier(token) - stack.append(Operator(lhs, name, rhs)) + if token.name == 'identifier': + if tokens.remaining() and tokens.get(0).name == 'paren_left': + stack.append(self.parse_function_call(token.v, tokens)) else: - stack.append(self.parse_constant_or_identifier(token)) + stack.append(Variable(token.v)) + elif token.name == 'array_left': + stack.append(ArrayConstant(self.parse_array_const(tokens))) + elif token.name == 'operator': + stack.append(Variable(token.v)) + elif token.name == 'number' or token.name == 'colon': + tokens.push() + stack.append(self.parse_number_or_slice(tokens)) + elif token.name == 'pipe': + stack.append(RangeConstant(tokens.pop().v)) + end = tokens.pop() + assert end.name == 'pipe' else: - stack.append(Variable(token)) - assert len(stack) == 1 - return stack[-1] + tokens.push() + break + stack.reverse() + lhs = stack.pop() + while stack: + op = stack.pop() + assert isinstance(op, Variable) + rhs = stack.pop() + lhs = Operator(lhs, op.name, rhs) + return lhs - def parse_constant(self, v): - lgt = len(v)-1 - assert lgt >= 0 - if ':' in v: - # a slice - if v == ':': - return SliceConstant(0, 0, 0) - else: - l = v.split(':') - if len(l) == 2: - one = l[0] - two = l[1] - if not one: - one = 0 - else: - one = int(one) - return SliceConstant(int(l[0]), int(l[1]), 1) - else: - three = int(l[2]) - # all can be empty - if l[0]: - one = int(l[0]) - else: - one = 0 - if l[1]: - two = int(l[1]) - else: - two = -1 - return SliceConstant(one, two, three) - - if v[0] == '[': - return ArrayConstant([self.parse_constant(elem) - for elem in v[1:lgt].split(",")]) - if v[0] == '|': - return RangeConstant(v[1:lgt]) - return FloatConstant(v) - - def is_identifier_or_const(self, v): - c = v[0] - if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or - (c >= '0' and c <= '9') or c in '-.[|:'): - if v == '-' or v == "->": - return False - return True - return False - - def parse_function_call(self, v): - l = v.split('(') - assert len(l) == 2 - name = l[0] - cut = len(l[1]) - 1 - assert cut >= 0 - args = [self.parse_constant_or_identifier(id) - for id in l[1][:cut].split(",")] + def parse_function_call(self, name, tokens): + args = [] + tokens.pop() # lparen + while tokens.get(0).name != 'paren_right': + args.append(self.parse_expression(tokens)) return FunctionCall(name, args) - def parse_constant_or_identifier(self, v): - c = v[0] - if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): - if '(' in v: - return self.parse_function_call(v) - return self.parse_identifier(v) - return self.parse_constant(v) - - def parse_array_subscript(self, v): - v = v.strip(" ") - l = v.split("[") - lgt = len(l[1]) - 1 - assert lgt >= 0 - rhs = self.parse_constant_or_identifier(l[1][:lgt]) - return l[0], rhs + def parse_array_const(self, tokens): + elems = [] + while True: + token = tokens.pop() + if token.name == 'number': + elems.append(FloatConstant(token.v)) + elif token.name == 'array_left': + elems.append(ArrayConstant(self.parse_array_const(tokens))) + else: + raise BadToken() + token = tokens.pop() + if token.name == 'array_right': + return elems + assert token.name == 'coma' - def parse_statement(self, line): - if '=' in line: - lhs, rhs = line.split("=") - lhs = lhs.strip(" ") - if '[' in lhs: - name, index = self.parse_array_subscript(lhs) - return ArrayAssignment(name, index, self.parse_expression(rhs)) - else: - return Assignment(lhs, self.parse_expression(rhs)) - else: - return Execute(self.parse_expression(line)) + def parse_statement(self, tokens): + if (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'assign'): + lhs = tokens.pop().v + tokens.pop() + rhs = self.parse_expression(tokens) + return Assignment(lhs, rhs) + elif (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'array_left'): + name = tokens.pop().v + tokens.pop() + index = self.parse_expression(tokens) + tokens.pop() + tokens.pop() + return ArrayAssignment(name, index, self.parse_expression(tokens)) + return Execute(self.parse_expression(tokens)) def parse(self, code): statements = [] @@ -495,7 +557,8 @@ line = line.split('#', 1)[0] line = line.strip(" ") if line: - statements.append(self.parse_statement(line)) + tokens = self.tokenize(line) + statements.append(self.parse_statement(tokens)) return Code(statements) def numpy_compile(code): diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -177,3 +177,9 @@ """) assert interp.results[0].value.val == 6 + def test_multidim_getitem(self): + interp = self.run(""" + a = [[1,2]] + a -> 0 -> 1 + """) + assert interp.results[0].value.val == 2 From noreply at buildbot.pypy.org Thu Nov 3 19:51:01 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 19:51:01 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: shuffle code around. Now get_code lives outside of tests Message-ID: <20111103185101.20633820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48713:1582795d14f5 Date: 2011-11-03 19:50 +0100 http://bitbucket.org/pypy/pypy/changeset/1582795d14f5/ Log: shuffle code around. Now get_code lives outside of tests diff --git a/pypy/rlib/rsre/rpy.py b/pypy/rlib/rsre/rpy.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/rsre/rpy.py @@ -0,0 +1,49 @@ + +from pypy.rlib.rsre import rsre_char +from pypy.rlib.rsre.rsre_core import match + +def get_hacked_sre_compile(my_compile): + """Return a copy of the sre_compile module for which the _sre + module is a custom module that has _sre.compile == my_compile + and CODESIZE == rsre_char.CODESIZE. + """ + import sre_compile, __builtin__, new + sre_hacked = new.module("_sre_hacked") + sre_hacked.compile = my_compile + sre_hacked.MAGIC = sre_compile.MAGIC + sre_hacked.CODESIZE = rsre_char.CODESIZE + sre_hacked.getlower = rsre_char.getlower + def my_import(name, *args): + if name == '_sre': + return sre_hacked + else: + return default_import(name, *args) + src = sre_compile.__file__ + if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): + src = src[:-1] + mod = new.module("sre_compile_hacked") + default_import = __import__ + try: + __builtin__.__import__ = my_import + execfile(src, mod.__dict__) + finally: + __builtin__.__import__ = default_import + return mod + +class GotIt(Exception): + pass +def my_compile(pattern, flags, code, *args): + raise GotIt(code, flags, args) +sre_compile_hacked = get_hacked_sre_compile(my_compile) + +def get_code(regexp, flags=0, allargs=False): + try: + sre_compile_hacked.compile(regexp, flags) + except GotIt, e: + pass + else: + raise ValueError("did not reach _sre.compile()!") + if allargs: + return e.args + else: + return e.args[0] diff --git a/pypy/rlib/rsre/test/test_match.py b/pypy/rlib/rsre/test/test_match.py --- a/pypy/rlib/rsre/test/test_match.py +++ b/pypy/rlib/rsre/test/test_match.py @@ -1,54 +1,8 @@ import re -from pypy.rlib.rsre import rsre_core, rsre_char +from pypy.rlib.rsre import rsre_core +from pypy.rlib.rsre.rpy import get_code -def get_hacked_sre_compile(my_compile): - """Return a copy of the sre_compile module for which the _sre - module is a custom module that has _sre.compile == my_compile - and CODESIZE == rsre_char.CODESIZE. - """ - import sre_compile, __builtin__, new - sre_hacked = new.module("_sre_hacked") - sre_hacked.compile = my_compile - sre_hacked.MAGIC = sre_compile.MAGIC - sre_hacked.CODESIZE = rsre_char.CODESIZE - sre_hacked.getlower = rsre_char.getlower - def my_import(name, *args): - if name == '_sre': - return sre_hacked - else: - return default_import(name, *args) - src = sre_compile.__file__ - if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): - src = src[:-1] - mod = new.module("sre_compile_hacked") - default_import = __import__ - try: - __builtin__.__import__ = my_import - execfile(src, mod.__dict__) - finally: - __builtin__.__import__ = default_import - return mod - -class GotIt(Exception): - pass -def my_compile(pattern, flags, code, *args): - print code - raise GotIt(code, flags, args) -sre_compile_hacked = get_hacked_sre_compile(my_compile) - -def get_code(regexp, flags=0, allargs=False): - try: - sre_compile_hacked.compile(regexp, flags) - except GotIt, e: - pass - else: - raise ValueError("did not reach _sre.compile()!") - if allargs: - return e.args - else: - return e.args[0] - def get_code_and_re(regexp): return get_code(regexp), re.compile(regexp) From noreply at buildbot.pypy.org Thu Nov 3 20:42:10 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 20:42:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: make parser not rpython (we'll think about it later) and use the same trick Message-ID: <20111103194210.1E826820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48714:b6ce14bbf83d Date: 2011-11-03 20:41 +0100 http://bitbucket.org/pypy/pypy/changeset/b6ce14bbf83d/ Log: make parser not rpython (we'll think about it later) and use the same trick as we used in test_newgs. Running rsre on llinterp is too much of a mess diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -9,7 +9,7 @@ descr_new_array, scalar_w, NDimArray) from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize - +import re class BogusBytecode(Exception): pass @@ -220,7 +220,7 @@ class Variable(Node): def __init__(self, name): - self.name = name.strip() + self.name = name.strip(" ") def execute(self, interp): return interp.variables[self.name] @@ -338,7 +338,7 @@ class FunctionCall(Node): def __init__(self, name, args): - self.name = name.strip() + self.name = name.strip(" ") self.args = args def __repr__(self): @@ -381,10 +381,8 @@ else: raise WrongFunctionName -import re - _REGEXES = [ - ('-?[\d]+', 'number'), + ('-?[\d\.]+', 'number'), ('\[', 'array_left'), (':', 'colon'), ('\w+', 'identifier'), @@ -399,7 +397,7 @@ REGEXES = [] for r, name in _REGEXES: - REGEXES.append((re.compile(' *(' + r + ')'), name)) + REGEXES.append((re.compile(r' *(' + r + ')'), name)) del _REGEXES class Token(object): @@ -457,9 +455,9 @@ if start_tok.name == 'colon': start = 0 else: + if tokens.get(0).name != 'colon': + return FloatConstant(start_tok.v) start = int(start_tok.v) - if tokens.get(0).name != 'colon': - return FloatConstant(start) tokens.pop() if not tokens.get(0).name in ['colon', 'number']: stop = -1 diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject, BoolObject) + FloatObject, IntObject, BoolObject, Parser, InterpreterState) from pypy.module.micronumpy.interp_numarray import NDimArray, NDimSlice from pypy.rlib.nonconst import NonConstant from pypy.rpython.annlowlevel import llstr, hlstr @@ -18,12 +18,33 @@ class TestNumpyJIt(LLJitMixin): graph = None interp = None + + def setup_class(cls): + default = """ + a = [1,2,3,4] + c = a + b + sum(c) -> 1::1 + a -> 3:1:2 + """ + + d = {} + p = Parser() + allcodes = [p.parse(default)] + for name, meth in cls.__dict__.iteritems(): + if name.startswith("define_"): + code = meth() + d[name[len("define_"):]] = len(allcodes) + allcodes.append(p.parse(code)) + cls.code_mapping = d + cls.codes = allcodes - def run(self, code): + def run(self, name): space = FakeSpace() + i = self.code_mapping[name] + codes = self.codes - def f(code): - interp = numpy_compile(hlstr(code)) + def f(i): + interp = InterpreterState(codes[i]) interp.run(space) res = interp.results[-1] w_res = res.eval(0).wrap(interp.space) @@ -37,55 +58,66 @@ return -42. if self.graph is None: - interp, graph = self.meta_interp(f, [llstr(code)], + interp, graph = self.meta_interp(f, [i], listops=True, backendopt=True, graph_and_interp_only=True) self.__class__.interp = interp self.__class__.graph = graph - reset_stats() pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() - return self.interp.eval_graph(self.graph, [llstr(code)]) + return self.interp.eval_graph(self.graph, [i]) - def test_add(self): - result = self.run(""" + def define_add(): + return """ a = |30| b = a + a b -> 3 - """) + """ + + def test_add(self): + result = self.run("add") self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == 3 + 3 - def test_floatadd(self): - result = self.run(""" + def define_float_add(): + return """ a = |30| + 3 a -> 3 - """) + """ + + def test_floatadd(self): + result = self.run("float_add") assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_sum(self): - result = self.run(""" + def define_sum(): + return """ a = |30| b = a + a sum(b) - """) + """ + + def test_sum(self): + result = self.run("sum") assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_prod(self): - result = self.run(""" + def define_prod(): + return """ a = |30| b = a + a prod(b) - """) + """ + + def test_prod(self): + result = self.run("prod") expected = 1 for i in range(30): expected *= i * 2 @@ -120,27 +152,33 @@ "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_any(self): - result = self.run(""" + def define_any(): + return """ a = [0,0,0,0,0,0,0,0,0,0,0] a[8] = -12 b = a + a any(b) - """) + """ + + def test_any(self): + result = self.run("any") assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_ne": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, "guard_false": 1}) - def test_already_forced(self): - result = self.run(""" + def define_already_forced(): + return """ a = |30| b = a + 4.5 b -> 5 # forces c = b * 8 c -> 5 - """) + """ + + def test_already_forced(self): + result = self.run("already_forced") assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be @@ -149,21 +187,24 @@ "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - def test_ufunc(self): - result = self.run(""" + def define_ufunc(): + return """ a = |30| b = a + a c = unegative(b) c -> 3 - """) + """ + + def test_ufunc(self): + result = self.run("ufunc") assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - def test_specialization(self): - self.run(""" + def define_specialization(): + return """ a = |30| b = a + a c = unegative(b) @@ -180,17 +221,23 @@ d = a * a unegative(d) d -> 3 - """) + """ + + def test_specialization(self): + self.run("specialization") # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) - def test_slice(self): - result = self.run(""" + def define_slice(): + return """ a = |30| b = a -> ::3 c = b + b c -> 3 - """) + """ + + def test_slice(self): + result = self.run("slice") assert result == 18 self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 3, diff --git a/pypy/rlib/rsre/rsre_core.py b/pypy/rlib/rsre/rsre_core.py --- a/pypy/rlib/rsre/rsre_core.py +++ b/pypy/rlib/rsre/rsre_core.py @@ -154,7 +154,6 @@ return (fmarks[groupnum], fmarks[groupnum+1]) def group(self, groupnum=0): - "NOT_RPYTHON" # compatibility frm, to = self.span(groupnum) if 0 <= frm <= to: return self._string[frm:to] From noreply at buildbot.pypy.org Thu Nov 3 20:48:02 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 20:48:02 +0100 (CET) Subject: [pypy-commit] pypy default: Merge rgc-mem-pressure. This branch adds memory pressure in some crucial points Message-ID: <20111103194802.4BFDE820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48715:19bc61988c39 Date: 2011-11-03 20:46 +0100 http://bitbucket.org/pypy/pypy/changeset/19bc61988c39/ Log: Merge rgc-mem-pressure. This branch adds memory pressure in some crucial points where C allocates a lot, but struct is fixed size. diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Thu Nov 3 20:48:03 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 20:48:03 +0100 (CET) Subject: [pypy-commit] pypy rgc-mem-pressure: closed merged branch Message-ID: <20111103194803.7A327820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: rgc-mem-pressure Changeset: r48716:0ede8b92968e Date: 2011-11-03 20:47 +0100 http://bitbucket.org/pypy/pypy/changeset/0ede8b92968e/ Log: closed merged branch From noreply at buildbot.pypy.org Thu Nov 3 20:48:04 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 20:48:04 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20111103194804.BBCEF820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48717:8ecb5f0cd990 Date: 2011-11-03 20:47 +0100 http://bitbucket.org/pypy/pypy/changeset/8ecb5f0cd990/ Log: merge default diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -146,8 +146,6 @@ newresult = result.clonebox() optimizer.make_constant(newresult, result) result = newresult - if result is op.getarg(0): # FIXME: Unsupported corner case?? - continue getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7450,6 +7450,55 @@ """ self.optimize_loop(ops, expected) + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -551,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -606,6 +607,10 @@ return if isinstance(box, Const): return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,10 +312,11 @@ class W_XRange(Wrappable): - def __init__(self, space, start, len, step): + def __init__(self, space, start, stop, step): self.space = space self.start = start - self.len = len + self.stop = stop + self.len = get_len_of_range(space, start, stop, step) self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -325,9 +326,8 @@ start, stop = 0, start else: stop = _toint(space, w_stop) - howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, howmany, step) + W_XRange.__init__(obj, space, start, stop, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.len, -self.step)) + self.start - 1, -self.step)) def descr_reduce(self): space = self.space @@ -389,25 +389,24 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, current, remaining, step): + def __init__(self, space, start, stop, step): self.space = space - self.current = current - self.remaining = remaining + self.current = start + self.stop = stop self.step = step def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.remaining > 0: + if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): item = self.current self.current = item + self.step - self.remaining -= 1 return self.space.wrap(item) raise OperationError(self.space.w_StopIteration, self.space.w_None) - def descr_len(self): - return self.space.wrap(self.remaining) + #def descr_len(self): + # return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -418,7 +417,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.remaining), w(self.step)] + tup = [w(self.current), w(self.stop), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, remaining=int, step=int) -def xrangeiter_new(space, current, remaining, step): + at unwrap_spec(current=int, stop=int, step=int) +def xrangeiter_new(space, current, stop, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, remaining, step) + new_iter = W_XRangeIterator(space, current, stop, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) From noreply at buildbot.pypy.org Thu Nov 3 20:59:22 2011 From: noreply at buildbot.pypy.org (mattip) Date: Thu, 3 Nov 2011 20:59:22 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: pep-8, use StringBuilder Message-ID: <20111103195922.BE680820B3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim Changeset: r48718:7dbb08ac308d Date: 2011-11-03 21:53 +0200 http://bitbucket.org/pypy/pypy/changeset/7dbb08ac308d/ Log: pep-8, use StringBuilder diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -6,6 +6,7 @@ from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype from pypy.tool.sourcetools import func_with_new_name +from pypy.rlib.rstring import StringBuilder numpy_driver = jit.JitDriver(greens = ['signature'], reds = ['result_size', 'i', 'self', 'result']) @@ -226,30 +227,34 @@ def descr_repr(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, - # use recursive calls to tostr() to do the work. + # use recursive calls to to_str() to do the work. concrete = self.get_concrete() - res = "array(" - res0 = NDimSlice(concrete, self.signature, [], self.shape).tostr(True, indent=' ') + res = StringBuilder() + res.append("array(") + myview = NDimSlice(concrete, self.signature, [], self.shape) + res0 = myview.to_str(True, indent=' ') #This is for numpy compliance: an empty slice reports its shape - if res0=="[]" and isinstance(self,NDimSlice): - res0 += ", shape=" - res1 = str(self.shape) - assert len(res1)>1 - res0 += '('+ res1[1:max(len(res1)-1,1)]+')' - res += res0 + if res0 == "[]" and isinstance(self, NDimSlice): + res.append("[], shape=(") + self_shape = str(self.shape) + res.append_slice(str(self_shape),1,len(self_shape)-1) + res.append(')') + else: + res.append(res0) dtype = concrete.find_dtype() if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and - dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or not self.find_size(): - res += ", dtype=" + dtype.name - res += ")" - return space.wrap(res) + dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ + not self.find_size(): + res.append(", dtype=" + dtype.name) + res.append(")") + return space.wrap(res.build()) def descr_str(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. concrete = self.get_concrete() - r = NDimSlice(concrete, self.signature, [], self.shape).tostr(False) + r = NDimSlice(concrete, self.signature, [], self.shape).to_str(False) return space.wrap(r) def _index_of_single_item(self, space, w_idx): @@ -572,7 +577,7 @@ class NDimSlice(ViewArray): signature = signature.BaseSignature() - + _immutable_fields_ = ['shape[*]', 'chunks[*]'] def __init__(self, parent, signature, chunks, shape): @@ -651,49 +656,53 @@ item += index[i] i += 1 return item - - def tostr(self, comma, indent=' '): - ret = '' + + def to_str(self, comma, indent=' '): + ret = StringBuilder() dtype = self.find_dtype() - ndims = len(self.shape)#-self.shape_reduction + ndims = len(self.shape) for s in self.shape: if s == 0: - ret += '[]' - return ret + ret.append('[]') + return ret.build() if ndims > 2: - ret += '[' + ret.append('[') for i in range(self.shape[0]): - chunks = [(i, 0, 0, 1)] - ret += NDimSlice(self.parent, self.signature, chunks, - self.shape[1:]).tostr(comma,indent=indent + ' ') - if i+11000: - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(3)]) - ret += ','*comma + ' ..., ' - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0]-3,self.shape[0])]) + ret.append('[') + spacer = ',' * comma + ' ' + ret.append(spacer.join(\ + [dtype.str_format(self.eval(i * self.shape[1] + j)) \ + for j in range(self.shape[1])])) + ret.append(']') + if i + 1 < self.shape[0]: + ret.append(',\n' + indent) + ret.append(']') + elif ndims == 1: + ret.append('[') + spacer = ',' * comma + ' ' + if self.shape[0] > 1000: + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(3)])) + ret.append(',' * comma + ' ..., ') + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0] - 3, self.shape[0])])) else: - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0])]) - ret += ']' + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0])])) + ret.append(']') else: - ret += dtype.str_format(self.eval(0)) - return ret + ret.append(dtype.str_format(self.eval(0))) + return ret.build() + class NDimArray(BaseArray): def __init__(self, size, shape, dtype): BaseArray.__init__(self, shape) From noreply at buildbot.pypy.org Thu Nov 3 20:59:23 2011 From: noreply at buildbot.pypy.org (mattip) Date: Thu, 3 Nov 2011 20:59:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: Merge str and repr cleanup Message-ID: <20111103195924.0018F820B3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim Changeset: r48719:8e0658ce330e Date: 2011-11-03 21:57 +0200 http://bitbucket.org/pypy/pypy/changeset/8e0658ce330e/ Log: Merge str and repr cleanup diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -9,7 +9,7 @@ descr_new_array, scalar_w, NDimArray) from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize - +import re class BogusBytecode(Exception): pass @@ -23,6 +23,12 @@ class WrongFunctionName(Exception): pass +class TokenizerError(Exception): + pass + +class BadToken(Exception): + pass + SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] class FakeSpace(object): @@ -192,7 +198,7 @@ interp.variables[self.name] = self.expr.execute(interp) def __repr__(self): - return "%% = %r" % (self.name, self.expr) + return "%r = %r" % (self.name, self.expr) class ArrayAssignment(Node): def __init__(self, name, index, expr): @@ -214,7 +220,7 @@ class Variable(Node): def __init__(self, name): - self.name = name + self.name = name.strip(" ") def execute(self, interp): return interp.variables[self.name] @@ -332,7 +338,7 @@ class FunctionCall(Node): def __init__(self, name, args): - self.name = name + self.name = name.strip(" ") self.args = args def __repr__(self): @@ -375,118 +381,172 @@ else: raise WrongFunctionName +_REGEXES = [ + ('-?[\d\.]+', 'number'), + ('\[', 'array_left'), + (':', 'colon'), + ('\w+', 'identifier'), + ('\]', 'array_right'), + ('(->)|[\+\-\*\/]', 'operator'), + ('=', 'assign'), + (',', 'coma'), + ('\|', 'pipe'), + ('\(', 'paren_left'), + ('\)', 'paren_right'), +] +REGEXES = [] + +for r, name in _REGEXES: + REGEXES.append((re.compile(r' *(' + r + ')'), name)) +del _REGEXES + +class Token(object): + def __init__(self, name, v): + self.name = name + self.v = v + + def __repr__(self): + return '(%s, %s)' % (self.name, self.v) + +empty = Token('', '') + +class TokenStack(object): + def __init__(self, tokens): + self.tokens = tokens + self.c = 0 + + def pop(self): + token = self.tokens[self.c] + self.c += 1 + return token + + def get(self, i): + if self.c + i >= len(self.tokens): + return empty + return self.tokens[self.c + i] + + def remaining(self): + return len(self.tokens) - self.c + + def push(self): + self.c -= 1 + + def __repr__(self): + return repr(self.tokens[self.c:]) + class Parser(object): - def parse_identifier(self, id): - id = id.strip(" ") - #assert id.isalpha() - return Variable(id) + def tokenize(self, line): + tokens = [] + while True: + for r, name in REGEXES: + m = r.match(line) + if m is not None: + g = m.group(0) + tokens.append(Token(name, g)) + line = line[len(g):] + if not line: + return TokenStack(tokens) + break + else: + raise TokenizerError(line) - def parse_expression(self, expr): - tokens = [i for i in expr.split(" ") if i] - if len(tokens) == 1: - return self.parse_constant_or_identifier(tokens[0]) + def parse_number_or_slice(self, tokens): + start_tok = tokens.pop() + if start_tok.name == 'colon': + start = 0 + else: + if tokens.get(0).name != 'colon': + return FloatConstant(start_tok.v) + start = int(start_tok.v) + tokens.pop() + if not tokens.get(0).name in ['colon', 'number']: + stop = -1 + step = 1 + else: + next = tokens.pop() + if next.name == 'colon': + stop = -1 + step = int(tokens.pop().v) + else: + stop = int(next.v) + if tokens.get(0).name == 'colon': + tokens.pop() + step = int(tokens.pop().v) + else: + step = 1 + return SliceConstant(start, stop, step) + + + def parse_expression(self, tokens): stack = [] - tokens.reverse() - while tokens: + while tokens.remaining(): token = tokens.pop() - if token == ')': - raise NotImplementedError - elif self.is_identifier_or_const(token): - if stack: - name = stack.pop().name - lhs = stack.pop() - rhs = self.parse_constant_or_identifier(token) - stack.append(Operator(lhs, name, rhs)) + if token.name == 'identifier': + if tokens.remaining() and tokens.get(0).name == 'paren_left': + stack.append(self.parse_function_call(token.v, tokens)) else: - stack.append(self.parse_constant_or_identifier(token)) + stack.append(Variable(token.v)) + elif token.name == 'array_left': + stack.append(ArrayConstant(self.parse_array_const(tokens))) + elif token.name == 'operator': + stack.append(Variable(token.v)) + elif token.name == 'number' or token.name == 'colon': + tokens.push() + stack.append(self.parse_number_or_slice(tokens)) + elif token.name == 'pipe': + stack.append(RangeConstant(tokens.pop().v)) + end = tokens.pop() + assert end.name == 'pipe' else: - stack.append(Variable(token)) - assert len(stack) == 1 - return stack[-1] + tokens.push() + break + stack.reverse() + lhs = stack.pop() + while stack: + op = stack.pop() + assert isinstance(op, Variable) + rhs = stack.pop() + lhs = Operator(lhs, op.name, rhs) + return lhs - def parse_constant(self, v): - lgt = len(v)-1 - assert lgt >= 0 - if ':' in v: - # a slice - if v == ':': - return SliceConstant(0, 0, 0) - else: - l = v.split(':') - if len(l) == 2: - one = l[0] - two = l[1] - if not one: - one = 0 - else: - one = int(one) - return SliceConstant(int(l[0]), int(l[1]), 1) - else: - three = int(l[2]) - # all can be empty - if l[0]: - one = int(l[0]) - else: - one = 0 - if l[1]: - two = int(l[1]) - else: - two = -1 - return SliceConstant(one, two, three) - - if v[0] == '[': - return ArrayConstant([self.parse_constant(elem) - for elem in v[1:lgt].split(",")]) - if v[0] == '|': - return RangeConstant(v[1:lgt]) - return FloatConstant(v) - - def is_identifier_or_const(self, v): - c = v[0] - if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or - (c >= '0' and c <= '9') or c in '-.[|:'): - if v == '-' or v == "->": - return False - return True - return False - - def parse_function_call(self, v): - l = v.split('(') - assert len(l) == 2 - name = l[0] - cut = len(l[1]) - 1 - assert cut >= 0 - args = [self.parse_constant_or_identifier(id) - for id in l[1][:cut].split(",")] + def parse_function_call(self, name, tokens): + args = [] + tokens.pop() # lparen + while tokens.get(0).name != 'paren_right': + args.append(self.parse_expression(tokens)) return FunctionCall(name, args) - def parse_constant_or_identifier(self, v): - c = v[0] - if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): - if '(' in v: - return self.parse_function_call(v) - return self.parse_identifier(v) - return self.parse_constant(v) - - def parse_array_subscript(self, v): - v = v.strip(" ") - l = v.split("[") - lgt = len(l[1]) - 1 - assert lgt >= 0 - rhs = self.parse_constant_or_identifier(l[1][:lgt]) - return l[0], rhs + def parse_array_const(self, tokens): + elems = [] + while True: + token = tokens.pop() + if token.name == 'number': + elems.append(FloatConstant(token.v)) + elif token.name == 'array_left': + elems.append(ArrayConstant(self.parse_array_const(tokens))) + else: + raise BadToken() + token = tokens.pop() + if token.name == 'array_right': + return elems + assert token.name == 'coma' - def parse_statement(self, line): - if '=' in line: - lhs, rhs = line.split("=") - lhs = lhs.strip(" ") - if '[' in lhs: - name, index = self.parse_array_subscript(lhs) - return ArrayAssignment(name, index, self.parse_expression(rhs)) - else: - return Assignment(lhs, self.parse_expression(rhs)) - else: - return Execute(self.parse_expression(line)) + def parse_statement(self, tokens): + if (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'assign'): + lhs = tokens.pop().v + tokens.pop() + rhs = self.parse_expression(tokens) + return Assignment(lhs, rhs) + elif (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'array_left'): + name = tokens.pop().v + tokens.pop() + index = self.parse_expression(tokens) + tokens.pop() + tokens.pop() + return ArrayAssignment(name, index, self.parse_expression(tokens)) + return Execute(self.parse_expression(tokens)) def parse(self, code): statements = [] @@ -495,7 +555,8 @@ line = line.split('#', 1)[0] line = line.strip(" ") if line: - statements.append(self.parse_statement(line)) + tokens = self.tokenize(line) + statements.append(self.parse_statement(tokens)) return Code(statements) def numpy_compile(code): diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -177,3 +177,9 @@ """) assert interp.results[0].value.val == 6 + def test_multidim_getitem(self): + interp = self.run(""" + a = [[1,2]] + a -> 0 -> 1 + """) + assert interp.results[0].value.val == 2 diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject, BoolObject) + FloatObject, IntObject, BoolObject, Parser, InterpreterState) from pypy.module.micronumpy.interp_numarray import NDimArray, NDimSlice from pypy.rlib.nonconst import NonConstant from pypy.rpython.annlowlevel import llstr, hlstr @@ -18,12 +18,33 @@ class TestNumpyJIt(LLJitMixin): graph = None interp = None + + def setup_class(cls): + default = """ + a = [1,2,3,4] + c = a + b + sum(c) -> 1::1 + a -> 3:1:2 + """ + + d = {} + p = Parser() + allcodes = [p.parse(default)] + for name, meth in cls.__dict__.iteritems(): + if name.startswith("define_"): + code = meth() + d[name[len("define_"):]] = len(allcodes) + allcodes.append(p.parse(code)) + cls.code_mapping = d + cls.codes = allcodes - def run(self, code): + def run(self, name): space = FakeSpace() + i = self.code_mapping[name] + codes = self.codes - def f(code): - interp = numpy_compile(hlstr(code)) + def f(i): + interp = InterpreterState(codes[i]) interp.run(space) res = interp.results[-1] w_res = res.eval(0).wrap(interp.space) @@ -37,55 +58,66 @@ return -42. if self.graph is None: - interp, graph = self.meta_interp(f, [llstr(code)], + interp, graph = self.meta_interp(f, [i], listops=True, backendopt=True, graph_and_interp_only=True) self.__class__.interp = interp self.__class__.graph = graph - reset_stats() pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() - return self.interp.eval_graph(self.graph, [llstr(code)]) + return self.interp.eval_graph(self.graph, [i]) - def test_add(self): - result = self.run(""" + def define_add(): + return """ a = |30| b = a + a b -> 3 - """) + """ + + def test_add(self): + result = self.run("add") self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == 3 + 3 - def test_floatadd(self): - result = self.run(""" + def define_float_add(): + return """ a = |30| + 3 a -> 3 - """) + """ + + def test_floatadd(self): + result = self.run("float_add") assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_sum(self): - result = self.run(""" + def define_sum(): + return """ a = |30| b = a + a sum(b) - """) + """ + + def test_sum(self): + result = self.run("sum") assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_prod(self): - result = self.run(""" + def define_prod(): + return """ a = |30| b = a + a prod(b) - """) + """ + + def test_prod(self): + result = self.run("prod") expected = 1 for i in range(30): expected *= i * 2 @@ -120,27 +152,33 @@ "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_any(self): - result = self.run(""" + def define_any(): + return """ a = [0,0,0,0,0,0,0,0,0,0,0] a[8] = -12 b = a + a any(b) - """) + """ + + def test_any(self): + result = self.run("any") assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_ne": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, "guard_false": 1}) - def test_already_forced(self): - result = self.run(""" + def define_already_forced(): + return """ a = |30| b = a + 4.5 b -> 5 # forces c = b * 8 c -> 5 - """) + """ + + def test_already_forced(self): + result = self.run("already_forced") assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be @@ -149,21 +187,24 @@ "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - def test_ufunc(self): - result = self.run(""" + def define_ufunc(): + return """ a = |30| b = a + a c = unegative(b) c -> 3 - """) + """ + + def test_ufunc(self): + result = self.run("ufunc") assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - def test_specialization(self): - self.run(""" + def define_specialization(): + return """ a = |30| b = a + a c = unegative(b) @@ -180,17 +221,23 @@ d = a * a unegative(d) d -> 3 - """) + """ + + def test_specialization(self): + self.run("specialization") # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) - def test_slice(self): - result = self.run(""" + def define_slice(): + return """ a = |30| b = a -> ::3 c = b + b c -> 3 - """) + """ + + def test_slice(self): + result = self.run("slice") assert result == 18 self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 3, diff --git a/pypy/rlib/rsre/rpy.py b/pypy/rlib/rsre/rpy.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/rsre/rpy.py @@ -0,0 +1,49 @@ + +from pypy.rlib.rsre import rsre_char +from pypy.rlib.rsre.rsre_core import match + +def get_hacked_sre_compile(my_compile): + """Return a copy of the sre_compile module for which the _sre + module is a custom module that has _sre.compile == my_compile + and CODESIZE == rsre_char.CODESIZE. + """ + import sre_compile, __builtin__, new + sre_hacked = new.module("_sre_hacked") + sre_hacked.compile = my_compile + sre_hacked.MAGIC = sre_compile.MAGIC + sre_hacked.CODESIZE = rsre_char.CODESIZE + sre_hacked.getlower = rsre_char.getlower + def my_import(name, *args): + if name == '_sre': + return sre_hacked + else: + return default_import(name, *args) + src = sre_compile.__file__ + if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): + src = src[:-1] + mod = new.module("sre_compile_hacked") + default_import = __import__ + try: + __builtin__.__import__ = my_import + execfile(src, mod.__dict__) + finally: + __builtin__.__import__ = default_import + return mod + +class GotIt(Exception): + pass +def my_compile(pattern, flags, code, *args): + raise GotIt(code, flags, args) +sre_compile_hacked = get_hacked_sre_compile(my_compile) + +def get_code(regexp, flags=0, allargs=False): + try: + sre_compile_hacked.compile(regexp, flags) + except GotIt, e: + pass + else: + raise ValueError("did not reach _sre.compile()!") + if allargs: + return e.args + else: + return e.args[0] diff --git a/pypy/rlib/rsre/rsre_core.py b/pypy/rlib/rsre/rsre_core.py --- a/pypy/rlib/rsre/rsre_core.py +++ b/pypy/rlib/rsre/rsre_core.py @@ -154,7 +154,6 @@ return (fmarks[groupnum], fmarks[groupnum+1]) def group(self, groupnum=0): - "NOT_RPYTHON" # compatibility frm, to = self.span(groupnum) if 0 <= frm <= to: return self._string[frm:to] diff --git a/pypy/rlib/rsre/test/test_match.py b/pypy/rlib/rsre/test/test_match.py --- a/pypy/rlib/rsre/test/test_match.py +++ b/pypy/rlib/rsre/test/test_match.py @@ -1,54 +1,8 @@ import re -from pypy.rlib.rsre import rsre_core, rsre_char +from pypy.rlib.rsre import rsre_core +from pypy.rlib.rsre.rpy import get_code -def get_hacked_sre_compile(my_compile): - """Return a copy of the sre_compile module for which the _sre - module is a custom module that has _sre.compile == my_compile - and CODESIZE == rsre_char.CODESIZE. - """ - import sre_compile, __builtin__, new - sre_hacked = new.module("_sre_hacked") - sre_hacked.compile = my_compile - sre_hacked.MAGIC = sre_compile.MAGIC - sre_hacked.CODESIZE = rsre_char.CODESIZE - sre_hacked.getlower = rsre_char.getlower - def my_import(name, *args): - if name == '_sre': - return sre_hacked - else: - return default_import(name, *args) - src = sre_compile.__file__ - if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): - src = src[:-1] - mod = new.module("sre_compile_hacked") - default_import = __import__ - try: - __builtin__.__import__ = my_import - execfile(src, mod.__dict__) - finally: - __builtin__.__import__ = default_import - return mod - -class GotIt(Exception): - pass -def my_compile(pattern, flags, code, *args): - print code - raise GotIt(code, flags, args) -sre_compile_hacked = get_hacked_sre_compile(my_compile) - -def get_code(regexp, flags=0, allargs=False): - try: - sre_compile_hacked.compile(regexp, flags) - except GotIt, e: - pass - else: - raise ValueError("did not reach _sre.compile()!") - if allargs: - return e.args - else: - return e.args[0] - def get_code_and_re(regexp): return get_code(regexp), re.compile(regexp) From noreply at buildbot.pypy.org Thu Nov 3 21:12:44 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 21:12:44 +0100 (CET) Subject: [pypy-commit] pypy stm: setarrayitem. Message-ID: <20111103201244.A8A5F820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48720:e67329e4d516 Date: 2011-11-03 19:28 +0100 http://bitbucket.org/pypy/pypy/changeset/e67329e4d516/ Log: setarrayitem. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -397,6 +397,7 @@ 'stm_getfield': LLOp(sideeffects=False, canrun=True), 'stm_setfield': LLOp(), + 'stm_setarrayitem': LLOp(), 'stm_begin_transaction': LLOp(), 'stm_commit_transaction': LLOp(), diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -94,6 +94,22 @@ self.check_stm_mode(lambda m: False) assert 0 + def opstm_setarrayitem(self, array, index, newvalue): + ARRAY = lltype.typeOf(struct).TO + if ARRAY._immutable_field(): + # immutable item writes (i.e. initializing writes) should + # always be fine, because they should occur into newly malloced + # arrays + LLFrame.op_setarrayitem(self, array, index, newvalue) + elif ARRAY._gckind == 'raw': + # raw setarrayitems are allowed outside a regular transaction + self.check_stm_mode(lambda m: m != "regular_transaction") + LLFrame.op_setarrayitem(self, array, index, newvalue) + else: + # mutable 'setarrayitems' are always forbidden for now + self.check_stm_mode(lambda m: False) + assert 0 + def opstm_malloc(self, TYPE, flags): # non-GC must not occur in a regular transaction, # but can occur in inevitable mode or outside a transaction @@ -118,6 +134,10 @@ self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setfield(self, struct, fieldname, value) + def opstm_stm_setarrayitem(self, array, index, value): + self.check_stm_mode(lambda m: m != "not_in_transaction") + LLFrame.op_setarrayitem(self, array, index, value) + def opstm_stm_begin_transaction(self): self.check_stm_mode(lambda m: m == "not_in_transaction") self.llinterpreter.stm_mode = "regular_transaction" diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -24,7 +24,7 @@ seen[-1] = NUM_THREADS while node is not None: value = node.value - print value + #print value if not (0 <= value < LENGTH): print "node.value out of bounds:", value raise AssertionError diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -56,6 +56,17 @@ res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") assert res == 42 +def test_setarrayitem(): + A = lltype.GcArray(lltype.Signed) + p = lltype.malloc(A, 100, immortal=True) + p[42] = 666 + def func(p): + p[42] = 676 + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'stm_setarrayitem': 1} + eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + def test_unsupported_operation(): def func(n): n += 1 diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -129,6 +129,17 @@ op1 = SpaceOperation('stm_setfield', op.args, op.result) newoperations.append(op1) + def stt_setarrayitem(self, newoperations, op): + ARRAY = op.args[0].concretetype.TO + if ARRAY._immutable_field(): + op1 = op + elif ARRAY._gckind == 'raw': + turn_inevitable(newoperations, "setarrayitem-raw") + op1 = op + else: + op1 = SpaceOperation('stm_setarrayitem', op.args, op.result) + newoperations.append(op1) + def stt_stm_transaction_boundary(self, newoperations, op): self.seen_transaction_boundary = True v_result = op.result From noreply at buildbot.pypy.org Thu Nov 3 21:12:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 21:12:45 +0100 (CET) Subject: [pypy-commit] pypy stm: getarrayitem. Disabled for now because it's missing the C impl. Message-ID: <20111103201245.E4FB5820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48721:104a651020e1 Date: 2011-11-03 21:12 +0100 http://bitbucket.org/pypy/pypy/changeset/104a651020e1/ Log: getarrayitem. Disabled for now because it's missing the C impl. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -397,6 +397,7 @@ 'stm_getfield': LLOp(sideeffects=False, canrun=True), 'stm_setfield': LLOp(), + 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), 'stm_setarrayitem': LLOp(), 'stm_begin_transaction': LLOp(), diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -94,6 +94,20 @@ self.check_stm_mode(lambda m: False) assert 0 + def opstm_getarrayitem(self, array, index): + ARRAY = lltype.typeOf(struct).TO + if ARRAY._immutable_field(): + # immutable item reads are always allowed + return LLFrame.op_getarrayitem(self, array, index) + elif ARRAY._gckind == 'raw': + # raw getfields are allowed outside a regular transaction + self.check_stm_mode(lambda m: m != "regular_transaction") + return LLFrame.op_getarrayitem(self, array, index) + else: + # mutable 'getarrayitems' are always forbidden for now + self.check_stm_mode(lambda m: False) + assert 0 + def opstm_setarrayitem(self, array, index, newvalue): ARRAY = lltype.typeOf(struct).TO if ARRAY._immutable_field(): @@ -134,6 +148,10 @@ self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setfield(self, struct, fieldname, value) + def opstm_stm_getarrayitem(self, array, index): + self.check_stm_mode(lambda m: m != "not_in_transaction") + return LLFrame.op_getarrayitem(self, array, index) + def opstm_stm_setarrayitem(self, array, index, value): self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setarrayitem(self, array, index, value) diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -56,6 +56,18 @@ res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") assert res == 42 +def test_getarrayitem(): + A = lltype.GcArray(lltype.Signed) + p = lltype.malloc(A, 100, immortal=True) + p[42] = 666 + def func(p): + return p[42] + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'stm_getarrayitem': 1} + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + assert res == 666 + def test_setarrayitem(): A = lltype.GcArray(lltype.Signed) p = lltype.malloc(A, 100, immortal=True) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -129,7 +129,18 @@ op1 = SpaceOperation('stm_setfield', op.args, op.result) newoperations.append(op1) - def stt_setarrayitem(self, newoperations, op): + def FINISHME_stt_getarrayitem(self, newoperations, op): + ARRAY = op.args[0].concretetype.TO + if ARRAY._immutable_field(): + op1 = op + elif ARRAY._gckind == 'raw': + turn_inevitable(newoperations, "getarrayitem-raw") + op1 = op + else: + op1 = SpaceOperation('stm_getarrayitem', op.args, op.result) + newoperations.append(op1) + + def FINISHME_stt_setarrayitem(self, newoperations, op): ARRAY = op.args[0].concretetype.TO if ARRAY._immutable_field(): op1 = op From noreply at buildbot.pypy.org Thu Nov 3 21:12:47 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 Nov 2011 21:12:47 +0100 (CET) Subject: [pypy-commit] pypy stm: Start a test for the complicated logic in funcgen.py, Message-ID: <20111103201247.1B73F820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48722:d5f6a1b6e66c Date: 2011-11-03 21:12 +0100 http://bitbucket.org/pypy/pypy/changeset/d5f6a1b6e66c/ Log: Start a test for the complicated logic in funcgen.py, even though it mirrors closely the (tested) logic in rstm.py. diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/test_funcgen.py @@ -0,0 +1,101 @@ +from pypy.rpython.lltypesystem import lltype +from pypy.rlib.rarithmetic import r_longlong, r_singlefloat +from pypy.translator.stm.test.test_transform import CompiledSTMTests +from pypy.translator.stm import rstm + + +A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed), + ('c1', lltype.Char), ('c2', lltype.Char), + ('c3', lltype.Char), ('l', lltype.SignedLongLong), + ('f', lltype.Float), ('sa', lltype.SingleFloat), + ('sb', lltype.SingleFloat)) +rll1 = r_longlong(-10000000000003) +rll2 = r_longlong(-300400500600700) +rf1 = -12.38976129 +rf2 = 52.1029 +rs1a = r_singlefloat(-0.598127) +rs2a = r_singlefloat(0.017634) +rs1b = r_singlefloat(40.121) +rs2b = r_singlefloat(-9e9) + +def make_a_1(): + a = lltype.malloc(A, flavor='raw') + a.x = -611 + a.c1 = '/' + a.c2 = '\\' + a.c3 = '!' + a.y = 0 + a.l = rll1 + a.f = rf1 + a.sa = rs1a + a.sb = rs1b + return a +make_a_1._dont_inline_ = True + +def do_stm_getfield(argv): + a = make_a_1() + # + assert a.x == -611 + assert a.c1 == '/' + assert a.c2 == '\\' + assert a.c3 == '!' + assert a.y == 0 + assert a.l == rll1 + assert a.f == rf1 + assert float(a.sa) == float(rs1a) + assert float(a.sb) == float(rs1b) + # + lltype.free(a, flavor='raw') + return 0 + +def do_stm_setfield(argv): + a = make_a_1() + # + a.x = 12871981 + a.c1 = '(' + assert a.c1 == '(' + assert a.c2 == '\\' + assert a.c3 == '!' + a.c2 = '?' + assert a.c1 == '(' + assert a.c2 == '?' + assert a.c3 == '!' + a.c3 = ')' + a.l = rll2 + a.f = rf2 + a.sa = rs2a + a.sb = rs2b + # + assert a.x == 12871981 + assert a.c1 == '(' + assert a.c2 == '?' + assert a.c3 == ')' + assert a.l == rll2 + assert a.f == rf2 + assert float(a.sa) == float(rs2a) + assert float(a.sb) == float(rs2b) + # + rstm.transaction_boundary() + # + assert a.x == 12871981 + assert a.c1 == '(' + assert a.c2 == '?' + assert a.c3 == ')' + assert a.l == rll2 + assert a.f == rf2 + assert float(a.sa) == float(rs2a) + assert float(a.sb) == float(rs2b) + # + lltype.free(a, flavor='raw') + return 0 + + +class TestFuncGen(CompiledSTMTests): + + def test_getfield_all_sizes(self): + t, cbuilder = self.compile(do_stm_getfield) + cbuilder.cmdexec('') + + def test_setfield_all_sizes(self): + t, cbuilder = self.compile(do_stm_setfield) + cbuilder.cmdexec('') From noreply at buildbot.pypy.org Thu Nov 3 21:15:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 21:15:29 +0100 (CET) Subject: [pypy-commit] pypy default: fix an overlow bug Message-ID: <20111103201529.880F6820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48723:7202b0d9cb70 Date: 2011-11-03 21:14 +0100 http://bitbucket.org/pypy/pypy/changeset/7202b0d9cb70/ Log: fix an overlow bug diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -362,7 +362,7 @@ def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start - 1, -self.step)) + self.start, -self.step, True)) def descr_reduce(self): space = self.space @@ -389,21 +389,26 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, stop, step, inclusive=False): self.space = space self.current = start self.stop = stop self.step = step + self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): - item = self.current - self.current = item + self.step - return self.space.wrap(item) - raise OperationError(self.space.w_StopIteration, self.space.w_None) + if self.inclusive: + if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + else: + if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + item = self.current + self.current = item + self.step + return self.space.wrap(item) #def descr_len(self): # return self.space.wrap(self.remaining) diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,7 +157,8 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - + assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() From noreply at buildbot.pypy.org Thu Nov 3 21:15:30 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 3 Nov 2011 21:15:30 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20111103201530.D3ABE820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48724:7cd8e99541db Date: 2011-11-03 21:15 +0100 http://bitbucket.org/pypy/pypy/changeset/7cd8e99541db/ Log: hg merge diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Thu Nov 3 21:56:42 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 21:56:42 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: pep8 Message-ID: <20111103205642.AC084820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48725:a5e0435c51ef Date: 2011-11-03 21:56 +0100 http://bitbucket.org/pypy/pypy/changeset/a5e0435c51ef/ Log: pep8 diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -237,7 +237,7 @@ if res0 == "[]" and isinstance(self, NDimSlice): res.append("[], shape=(") self_shape = str(self.shape) - res.append_slice(str(self_shape),1,len(self_shape)-1) + res.append_slice(str(self_shape), 1, len(self_shape)-1) res.append(')') else: res.append(res0) From noreply at buildbot.pypy.org Thu Nov 3 22:04:20 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 3 Nov 2011 22:04:20 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: update numbers Message-ID: <20111103210420.A020D820B3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r289:a86881b1306a Date: 2011-11-03 22:04 +0100 http://bitbucket.org/pypy/pypy.org/changeset/a86881b1306a/ Log: update numbers diff --git a/don1.html b/don1.html --- a/don1.html +++ b/don1.html @@ -8,12 +8,12 @@ - $4321 of $105000 (4.1%) + $4369 of $105000 (4.2%)
diff --git a/don3.html b/don3.html --- a/don3.html +++ b/don3.html @@ -8,12 +8,12 @@ - $1750 of $60000 (2.9%) + $2567 of $60000 (4.2%)
From noreply at buildbot.pypy.org Fri Nov 4 10:36:10 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 10:36:10 +0100 (CET) Subject: [pypy-commit] pypy default: Rewrite the algorithm of readlines() based on the hypothesis that Message-ID: <20111104093610.C78BF820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48726:c98931f5191b Date: 2011-11-04 10:35 +0100 http://bitbucket.org/pypy/pypy/changeset/c98931f5191b/ Log: Rewrite the algorithm of readlines() based on the hypothesis that it is equivalent to read() followed by splitting after each '\n'. I *think* it is true, because read() should do itself the conversion from '\r' or '\r\n' when the file is in text or universal mode. diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) From noreply at buildbot.pypy.org Fri Nov 4 10:50:17 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:17 +0100 (CET) Subject: [pypy-commit] pypy default: these loops are unrolled anyway, directly access the correct attribute instead Message-ID: <20111104095017.1AA85820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48727:6418ef5bfbf7 Date: 2011-11-03 20:09 +0100 http://bitbucket.org/pypy/pypy/changeset/6418ef5bfbf7/ Log: these loops are unrolled anyway, directly access the correct attribute instead of going through a switch again. diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -71,7 +71,7 @@ if self.length() != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -82,7 +82,7 @@ x = 0x345678 z = self.length() for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 From noreply at buildbot.pypy.org Fri Nov 4 10:50:18 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:18 +0100 (CET) Subject: [pypy-commit] pypy default: similarly, no need to call length here Message-ID: <20111104095018.47435820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48728:c15bbb59c756 Date: 2011-11-03 20:11 +0100 http://bitbucket.org/pypy/pypy/changeset/c15bbb59c756/ Log: similarly, no need to call length here diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,7 +68,7 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: item1 = getattr(self,'w_value%s' % i) @@ -80,7 +80,7 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) From noreply at buildbot.pypy.org Fri Nov 4 10:50:19 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:19 +0100 (CET) Subject: [pypy-commit] pypy default: reduce code duplication Message-ID: <20111104095019.74AA5820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48729:7c022d2fff84 Date: 2011-11-03 20:29 +0100 http://bitbucket.org/pypy/pypy/changeset/7c022d2fff84/ Log: reduce code duplication diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -34,11 +34,7 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 From noreply at buildbot.pypy.org Fri Nov 4 10:50:20 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:20 +0100 (CET) Subject: [pypy-commit] pypy default: some of the _convert_idx_params implementations are rather nonsensical (and of Message-ID: <20111104095020.AE57C820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48730:c0f86162108b Date: 2011-11-03 20:50 +0100 http://bitbucket.org/pypy/pypy/changeset/c0f86162108b/ Log: some of the _convert_idx_params implementations are rather nonsensical (and of course they are copied here and there, with slight variations). fix that by putting one version into slicetype.py diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -40,6 +41,23 @@ assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -422,18 +422,11 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): self = w_self._value + lenself = len(self) sub = w_sub._value - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -479,18 +479,8 @@ assert isinstance(w_sub, W_UnicodeObject) self = w_self._value sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' From noreply at buildbot.pypy.org Fri Nov 4 10:50:21 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:21 +0100 (CET) Subject: [pypy-commit] pypy default: reuse the new helper here too Message-ID: <20111104095021.DB202820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48731:81222f34969f Date: 2011-11-03 21:03 +0100 http://bitbucket.org/pypy/pypy/changeset/81222f34969f/ Log: reuse the new helper here too diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -42,7 +42,7 @@ return index @specialize.arg(4) -def unwrap_start_stop(space, size, w_start, w_end, upper_bound): +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): if space.is_w(w_start, space.w_None): start = 0 elif upper_bound: diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): From noreply at buildbot.pypy.org Fri Nov 4 10:50:23 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:23 +0100 (CET) Subject: [pypy-commit] pypy default: more places Message-ID: <20111104095023.1590C820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48732:843cfd1d8ce5 Date: 2011-11-03 21:07 +0100 http://bitbucket.org/pypy/pypy/changeset/843cfd1d8ce5/ Log: more places diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 From noreply at buildbot.pypy.org Fri Nov 4 10:50:24 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:24 +0100 (CET) Subject: [pypy-commit] pypy default: simplify _convert_idx_params Message-ID: <20111104095024.4306B820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48733:257c829ce3c1 Date: 2011-11-03 21:29 +0100 http://bitbucket.org/pypy/pypy/changeset/257c829ce3c1/ Log: simplify _convert_idx_params diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -420,14 +420,13 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value lenself = len(self) - sub = w_sub._value start, end = slicetype.unwrap_start_stop( space, lenself, w_start, w_end, upper_bound=upper_bound) - return (self, sub, start, end) + return (self, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' def contains__String_String(space, w_self, w_sub): @@ -436,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -476,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -486,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -629,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -654,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) From noreply at buildbot.pypy.org Fri Nov 4 10:50:25 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:25 +0100 (CET) Subject: [pypy-commit] pypy default: a test that now passes, after the refactorings. however, it fails on cpython Message-ID: <20111104095025.7173F820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48734:bd1e2ea19fa3 Date: 2011-11-03 21:47 +0100 http://bitbucket.org/pypy/pypy/changeset/bd1e2ea19fa3/ Log: a test that now passes, after the refactorings. however, it fails on cpython diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -6,6 +6,12 @@ class TestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) def test_is_true(self): w = self.space.wrap @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' From noreply at buildbot.pypy.org Fri Nov 4 10:50:26 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:26 +0100 (CET) Subject: [pypy-commit] pypy default: rename this function to be non-official. it can lead to bugs because the caller Message-ID: <20111104095026.9FD63820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48735:2e617066124d Date: 2011-11-03 21:48 +0100 http://bitbucket.org/pypy/pypy/changeset/2e617066124d/ Log: rename this function to be non-official. it can lead to bugs because the caller has to check for None diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -15,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -26,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: From noreply at buildbot.pypy.org Fri Nov 4 10:50:28 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:28 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111104095028.8D6C9820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48736:a2743a169e88 Date: 2011-11-04 09:48 +0100 http://bitbucket.org/pypy/pypy/changeset/a2743a169e88/ Log: merge diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,6 +1,6 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -326,6 +337,7 @@ self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} @@ -398,6 +410,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4225,6 +4225,27 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7355,6 +7355,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -551,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -606,6 +607,10 @@ return if isinstance(box, Const): return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -505,11 +505,17 @@ # if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + # slicing with constant bounds of a VStringPlainValue, if any of + # the characters is unitialized we don't do this special slice, we + # do the regular copy contents. + for i in range(vstart.box.getint(), vstop.box.getint()): + if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: + break + else: + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,10 +312,11 @@ class W_XRange(Wrappable): - def __init__(self, space, start, len, step): + def __init__(self, space, start, stop, step): self.space = space self.start = start - self.len = len + self.stop = stop + self.len = get_len_of_range(space, start, stop, step) self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -325,9 +326,8 @@ start, stop = 0, start else: stop = _toint(space, w_stop) - howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, howmany, step) + W_XRange.__init__(obj, space, start, stop, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.len, -self.step)) + self.start, -self.step, True)) def descr_reduce(self): space = self.space @@ -389,25 +389,29 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, current, remaining, step): + def __init__(self, space, start, stop, step, inclusive=False): self.space = space - self.current = current - self.remaining = remaining + self.current = start + self.stop = stop self.step = step + self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.remaining > 0: - item = self.current - self.current = item + self.step - self.remaining -= 1 - return self.space.wrap(item) - raise OperationError(self.space.w_StopIteration, self.space.w_None) + if self.inclusive: + if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + else: + if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + item = self.current + self.current = item + self.step + return self.space.wrap(item) - def descr_len(self): - return self.space.wrap(self.remaining) + #def descr_len(self): + # return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -418,7 +422,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.remaining), w(self.step)] + tup = [w(self.current), w(self.stop), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,7 +157,8 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - + assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, remaining=int, step=int) -def xrangeiter_new(space, current, remaining, step): + at unwrap_spec(current=int, stop=int, step=int) +def xrangeiter_new(space, current, stop, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, remaining, step) + new_iter = W_XRangeIterator(space, current, stop, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Fri Nov 4 10:50:29 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:29 +0100 (CET) Subject: [pypy-commit] pypy default: woops, wrong class Message-ID: <20111104095029.BB5DC820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48737:7e5b7f49a9f0 Date: 2011-11-04 10:13 +0100 http://bitbucket.org/pypy/pypy/changeset/7e5b7f49a9f0/ Log: woops, wrong class diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,17 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def setup_class(cls): - import sys - on_cpython = (option.runappdirect and - not hasattr(sys, 'pypy_translation_info')) - - cls.w_on_cpython = cls.space.wrap(on_cpython) - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -349,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] From noreply at buildbot.pypy.org Fri Nov 4 10:50:30 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:30 +0100 (CET) Subject: [pypy-commit] pypy default: I knew that running a translation first was a good idea Message-ID: <20111104095030.E8B77820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48738:54f86b92c275 Date: 2011-11-04 10:24 +0100 http://bitbucket.org/pypy/pypy/changeset/54f86b92c275/ Log: I knew that running a translation first was a good idea diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -420,6 +420,7 @@ return space.wrap(u_self) + at specialize.arg(4) def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value lenself = len(self) @@ -427,7 +428,6 @@ start, end = slicetype.unwrap_start_stop( space, lenself, w_start, w_end, upper_bound=upper_bound) return (self, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' def contains__String_String(space, w_self, w_sub): self = w_self._value From noreply at buildbot.pypy.org Fri Nov 4 10:50:32 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:32 +0100 (CET) Subject: [pypy-commit] pypy default: replace other two specialize calls as well Message-ID: <20111104095032.241CC820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48739:ba2d6fdc947d Date: 2011-11-04 10:31 +0100 http://bitbucket.org/pypy/pypy/changeset/ba2d6fdc947d/ Log: replace other two specialize calls as well diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): From noreply at buildbot.pypy.org Fri Nov 4 10:50:33 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 10:50:33 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111104095033.517BB820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48740:ff9f3efc4c62 Date: 2011-11-04 10:47 +0100 http://bitbucket.org/pypy/pypy/changeset/ff9f3efc4c62/ Log: merge diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) From noreply at buildbot.pypy.org Fri Nov 4 11:28:25 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 4 Nov 2011 11:28:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: If functions with many (> 8) arguments are called, Message-ID: <20111104102825.7F9EF820B3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48741:724de1b4f87e Date: 2011-11-04 11:28 +0100 http://bitbucket.org/pypy/pypy/changeset/724de1b4f87e/ Log: If functions with many (> 8) arguments are called, pass every parameter ove the 8th on the stack. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -475,14 +475,43 @@ signed = descr.is_result_signed() self._ensure_result_bit_extension(loc, size, signed) + # XXX 64 bit adjustment def _emit_call(self, force_index, adr, args, regalloc, result=None): n_args = len(args) reg_args = count_reg_args(args) n = 0 # used to count the number of words pushed on the stack, so we - #can later modify the SP back to its original value + # can later modify the SP back to its original value + stack_args = [] if n_args > reg_args: - assert 0, "not implemented yet" + # first we need to prepare the list so it stays aligned + count = 0 + for i in range(reg_args, n_args): + arg = args[i] + if arg.type == FLOAT: + assert 0, "not implemented yet" + else: + count += 1 + n += WORD + stack_args.append(arg) + if count % 2 != 0: + n += WORD + stack_args.append(None) + + # adjust SP and compute size of parameter save area + stack_space = 4 * (WORD + len(stack_args)) + self.mc.stwu(1, 1, -stack_space) + self.mc.mflr(0) + self.mc.stw(0, 1, stack_space + WORD) + + # then we push everything on the stack + for i, arg in enumerate(stack_args): + offset = (2 + i) * WORD + self.mc.load_imm(r.r0, arg.value) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.SP.value, offset) + else: + assert 0, "not implemented yet" # collect variables that need to go in registers # and the registers they will be stored in @@ -517,6 +546,9 @@ #the actual call if IS_PPC_32: self.mc.bl_abs(adr) + self.mc.lwz(0, 1, stack_space + WORD) + self.mc.mtlr(0) + self.mc.addi(1, 1, stack_space) else: self.mc.std(r.r2.value, r.SP.value, 40) self.mc.load_from_addr(r.r0, adr) @@ -528,9 +560,6 @@ self.mark_gc_roots(force_index) regalloc.possibly_free_vars(args) - # readjust the sp in case we passed some args on the stack - if n > 0: - assert 0, "not implemented yet" # restore the arguments stored on the stack if result is not None: From noreply at buildbot.pypy.org Fri Nov 4 11:48:23 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 11:48:23 +0100 (CET) Subject: [pypy-commit] pypy default: Attempt to fix generally the issue of VStringPlainValue that can Message-ID: <20111104104823.1D326820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48742:4c8fd4b0ca57 Date: 2011-11-04 11:43 +0100 http://bitbucket.org/pypy/pypy/changeset/4c8fd4b0ca57/ Log: Attempt to fix generally the issue of VStringPlainValue that can be forced before being fully built --- e.g. because it's a target of a copystrcontent. I think this fix is all that it needed, but in case I missed a place, then it will end up with an AssertionError or a segfault (self._chars is None) at run-time, instead of producing bogus results. diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -113,6 +113,33 @@ """A string built with newstr(const).""" _lengthbox = None # cache only + # Warning: an issue with VStringPlainValue is that sometimes it is + # initialized unpredictably by some copystrcontent. When this occurs + # we set self._chars to None. Be careful to check for is_valid(). + + def is_valid(self): + return self._chars is not None + + def _invalidate(self): + assert self.is_valid() + if self._lengthbox is None: + self._lengthbox = ConstInt(len(self._chars)) + self._chars = None + + def _really_force(self, optforce): + VAbstractStringValue._really_force(self, optforce) + assert self.box is not None + if self.is_valid(): + for c in self._chars: + if c is optimizer.CVAL_UNINITIALIZED_ZERO: + # the string has uninitialized null bytes in it, so + # assume that it is forced for being further mutated + # (e.g. by copystrcontent). So it becomes invalid + # as a VStringPlainValue: the _chars must not be used + # any longer. + self._invalidate() + break + def setup(self, size): self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size @@ -134,6 +161,8 @@ @specialize.arg(1) def get_constant_string_spec(self, mode): + if not self.is_valid(): + return None for c in self._chars: if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): return None @@ -141,11 +170,9 @@ for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_valid(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) for i in range(len(self._chars)): charbox = self._chars[i].force_box(string_optimizer) if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): @@ -158,6 +185,7 @@ def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): + assert self.is_valid() charboxes = [value.get_key_box() for value in self._chars] modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: @@ -373,7 +401,8 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) - if value.is_virtual() and isinstance(value, VStringPlainValue): + if (value.is_virtual() and isinstance(value, VStringPlainValue) + and value.is_valid()): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: value.setitem(indexbox.getint(), self.getvalue(op.getarg(2))) @@ -404,13 +433,10 @@ value = value.vstr vindex = self.getvalue(fullindexbox) # - if isinstance(value, VStringPlainValue): # even if no longer virtual + if (isinstance(value, VStringPlainValue) # even if no longer virtual + and value.is_valid()): # but make sure it is valid if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + return value.getitem(vindex.box.getint()) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -503,19 +529,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + if (isinstance(vstr, VStringPlainValue) and vstr.is_valid() + and vstart.is_constant() and vstop.is_constant()): + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), From noreply at buildbot.pypy.org Fri Nov 4 11:48:24 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 11:48:24 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111104104824.570CA820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48743:122e57eb1d74 Date: 2011-11-04 11:48 +0100 http://bitbucket.org/pypy/pypy/changeset/122e57eb1d74/ Log: merge heads diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +420,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -479,18 +479,8 @@ assert isinstance(w_sub, W_UnicodeObject) self = w_self._value sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' From noreply at buildbot.pypy.org Fri Nov 4 12:19:50 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 4 Nov 2011 12:19:50 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Handle stack alignment in emit_call Message-ID: <20111104111950.AFAA0820B3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48744:8157df3c66e8 Date: 2011-11-04 12:18 +0100 http://bitbucket.org/pypy/pypy/changeset/8157df3c66e8/ Log: Handle stack alignment in emit_call diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -500,6 +500,8 @@ # adjust SP and compute size of parameter save area stack_space = 4 * (WORD + len(stack_args)) + while stack_space % (4 * WORD) != 0: + stack_space += 1 self.mc.stwu(1, 1, -stack_space) self.mc.mflr(0) self.mc.stw(0, 1, stack_space + WORD) @@ -507,11 +509,12 @@ # then we push everything on the stack for i, arg in enumerate(stack_args): offset = (2 + i) * WORD - self.mc.load_imm(r.r0, arg.value) + if arg is not None: + self.mc.load_imm(r.r0, arg.value) if IS_PPC_32: self.mc.stw(r.r0.value, r.SP.value, offset) else: - assert 0, "not implemented yet" + self.mc.std(r.r0.value, r.SP.value, offset) # collect variables that need to go in registers # and the registers they will be stored in From noreply at buildbot.pypy.org Fri Nov 4 12:19:51 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 4 Nov 2011 12:19:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Do sanity check Message-ID: <20111104111951.DCDAF820B3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48745:044df512a0f2 Date: 2011-11-04 12:19 +0100 http://bitbucket.org/pypy/pypy/changeset/044df512a0f2/ Log: Do sanity check diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -174,6 +174,8 @@ self.rm._check_invariants() def loc(self, var): + if var.type == FLOAT: + assert 0, "not implemented yet" return self.rm.loc(var) def position(self): From noreply at buildbot.pypy.org Fri Nov 4 12:35:30 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 4 Nov 2011 12:35:30 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add condition codes for unsigned lower and unsigned higher or same Message-ID: <20111104113530.8673A820B3@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r48746:efc0bc4d9ef5 Date: 2011-11-04 12:09 +0100 http://bitbucket.org/pypy/pypy/changeset/efc0bc4d9ef5/ Log: add condition codes for unsigned lower and unsigned higher or same diff --git a/pypy/jit/backend/arm/conditions.py b/pypy/jit/backend/arm/conditions.py --- a/pypy/jit/backend/arm/conditions.py +++ b/pypy/jit/backend/arm/conditions.py @@ -1,7 +1,7 @@ EQ = 0x0 NE = 0x1 -CS = 0x2 -CC = 0x3 +HS = CS = 0x2 +LO = CC = 0x3 MI = 0x4 PL = 0x5 VS = 0x6 From noreply at buildbot.pypy.org Fri Nov 4 12:35:31 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 4 Nov 2011 12:35:31 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add some more tests for intergers and guards to cover unary and unsigned cmp ops Message-ID: <20111104113531.BA410820B3@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r48747:7cb3b3ddfee2 Date: 2011-11-04 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/7cb3b3ddfee2/ Log: add some more tests for intergers and guards to cover unary and unsigned cmp ops diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1205,6 +1205,37 @@ got = longlong.getrealfloat(self.cpu.get_latest_value_float(i)) assert got == 13.5 + 6.73 * i + def test_integers_and_guards2(self): + for opname, compare in [ + (rop.INT_IS_TRUE, lambda x: bool(x)), + (rop.INT_IS_ZERO, lambda x: not bool(x))]: + for opguard, guard_case in [ + (rop.GUARD_FALSE, False), + (rop.GUARD_TRUE, True), + ]: + box = BoxInt() + res = BoxInt() + faildescr1 = BasicFailDescr(1) + faildescr2 = BasicFailDescr(2) + inputargs = [box] + operations = [ + ResOperation(opname, [box], res), + ResOperation(opguard, [res], None, descr=faildescr1), + ResOperation(rop.FINISH, [], None, descr=faildescr2), + ] + operations[1].setfailargs([]) + looptoken = LoopToken() + self.cpu.compile_loop(inputargs, operations, looptoken) + # + cpu = self.cpu + for value in [-42, 0, 1, 10]: + cpu.set_future_value_int(0, value) + fail = cpu.execute_token(looptoken) + # + expected = compare(value) + expected ^= guard_case + assert fail.identifier == 2 - expected + def test_integers_and_guards(self): for opname, compare in [ (rop.INT_LT, lambda x, y: x < y), @@ -1243,9 +1274,9 @@ self.cpu.compile_loop(inputargs, operations, looptoken) # cpu = self.cpu - for test1 in [-65, -42, -11]: + for test1 in [-65, -42, -11, 0, 1, 10]: if test1 == -42 or combinaison[0] == 'b': - for test2 in [-65, -42, -11]: + for test2 in [-65, -42, -11, 0, 1, 10]: if test2 == -42 or combinaison[1] == 'b': n = 0 if combinaison[0] == 'b': @@ -1260,6 +1291,59 @@ expected ^= guard_case assert fail.identifier == 2 - expected + def test_integers_and_guards_uint(self): + for opname, compare in [ + (rop.UINT_LE, lambda x, y: (x) <= (y)), + (rop.UINT_GT, lambda x, y: (x) > (y)), + (rop.UINT_LT, lambda x, y: (x) < (y)), + (rop.UINT_GE, lambda x, y: (x) >= (y)), + ]: + for opguard, guard_case in [ + (rop.GUARD_FALSE, False), + (rop.GUARD_TRUE, True), + ]: + for combinaison in ["bb", "bc", "cb"]: + # + if combinaison[0] == 'b': + ibox1 = BoxInt() + else: + ibox1 = ConstInt(42) + if combinaison[1] == 'b': + ibox2 = BoxInt() + else: + ibox2 = ConstInt(42) + b1 = BoxInt() + faildescr1 = BasicFailDescr(1) + faildescr2 = BasicFailDescr(2) + inputargs = [ib for ib in [ibox1, ibox2] + if isinstance(ib, BoxInt)] + operations = [ + ResOperation(opname, [ibox1, ibox2], b1), + ResOperation(opguard, [b1], None, descr=faildescr1), + ResOperation(rop.FINISH, [], None, descr=faildescr2), + ] + operations[-2].setfailargs([]) + looptoken = LoopToken() + self.cpu.compile_loop(inputargs, operations, looptoken) + # + cpu = self.cpu + for test1 in [65, 42, 11, 0, 1]: + if test1 == 42 or combinaison[0] == 'b': + for test2 in [65, 42, 11, 0, 1]: + if test2 == 42 or combinaison[1] == 'b': + n = 0 + if combinaison[0] == 'b': + cpu.set_future_value_int(n, test1) + n += 1 + if combinaison[1] == 'b': + cpu.set_future_value_int(n, test2) + n += 1 + fail = cpu.execute_token(looptoken) + # + expected = compare(test1, test2) + expected ^= guard_case + assert fail.identifier == 2 - expected + def test_floats_and_guards(self): if not self.cpu.supports_floats: py.test.skip("requires floats") From noreply at buildbot.pypy.org Fri Nov 4 12:35:32 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 4 Nov 2011 12:35:32 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove the inverse argument for the register allocation of cmp operations and use the correct correct condition flags for uint operations Message-ID: <20111104113532.EA25F820B3@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r48748:518a528ba0b8 Date: 2011-11-04 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/518a528ba0b8/ Log: remove the inverse argument for the register allocation of cmp operations and use the correct correct condition flags for uint operations diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py --- a/pypy/jit/backend/arm/helper/regalloc.py +++ b/pypy/jit/backend/arm/helper/regalloc.py @@ -103,7 +103,7 @@ f.__name__ = name return f -def prepare_cmp_op(name=None, inverse=False): +def prepare_cmp_op(name=None): def f(self, op, guard_op, fcond): assert fcond is not None boxes = list(op.getarglist()) diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -136,11 +136,6 @@ emit_op_int_gt = gen_emit_cmp_op('int_gt', c.GT) emit_op_int_ge = gen_emit_cmp_op('int_ge', c.GE) - emit_op_uint_le = gen_emit_cmp_op('uint_le', c.LS) - emit_op_uint_gt = gen_emit_cmp_op('uint_gt', c.HI) - - emit_op_uint_lt = gen_emit_cmp_op('uint_lt', c.HI) - emit_op_uint_ge = gen_emit_cmp_op('uint_ge', c.LS) emit_op_ptr_eq = emit_op_int_eq emit_op_ptr_ne = emit_op_int_ne @@ -152,6 +147,13 @@ emit_guard_int_gt = gen_emit_cmp_op_guard('int_gt', c.GT) emit_guard_int_ge = gen_emit_cmp_op_guard('int_ge', c.GE) + emit_op_uint_le = gen_emit_cmp_op('uint_le', c.LS) + emit_op_uint_gt = gen_emit_cmp_op('uint_gt', c.HI) + emit_op_uint_lt = gen_emit_cmp_op('uint_lt', c.LO) + emit_op_uint_ge = gen_emit_cmp_op('uint_ge', c.HS) + emit_guard_uint_lt = gen_emit_cmp_op_guard('uint_lt', c.LO) + emit_guard_uint_ge = gen_emit_cmp_op_guard('uint_ge', c.HS) + emit_guard_uint_le = gen_emit_cmp_op_guard('uint_le', c.LS) emit_guard_uint_gt = gen_emit_cmp_op_guard('uint_gt', c.HI) diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -422,8 +422,8 @@ prepare_op_uint_le = prepare_cmp_op('uint_le') prepare_op_uint_gt = prepare_cmp_op('uint_gt') - prepare_op_uint_lt = prepare_cmp_op('uint_lt', inverse=True) - prepare_op_uint_ge = prepare_cmp_op('uint_ge', inverse=True) + prepare_op_uint_lt = prepare_cmp_op('uint_lt') + prepare_op_uint_ge = prepare_cmp_op('uint_ge') prepare_op_ptr_eq = prepare_op_int_eq prepare_op_ptr_ne = prepare_op_int_ne @@ -438,8 +438,8 @@ prepare_guard_uint_le = prepare_cmp_op('guard_uint_le') prepare_guard_uint_gt = prepare_cmp_op('guard_uint_gt') - prepare_guard_uint_lt = prepare_cmp_op('guard_uint_lt', inverse=True) - prepare_guard_uint_ge = prepare_cmp_op('guard_uint_ge', inverse=True) + prepare_guard_uint_lt = prepare_cmp_op('guard_uint_lt') + prepare_guard_uint_ge = prepare_cmp_op('guard_uint_ge') prepare_guard_ptr_eq = prepare_guard_int_eq prepare_guard_ptr_ne = prepare_guard_int_ne From noreply at buildbot.pypy.org Fri Nov 4 12:35:34 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 4 Nov 2011 12:35:34 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: refactor a bit the generation of functions that emit code for cmp operations Message-ID: <20111104113534.25ED8820B3@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r48749:b797d85e31a7 Date: 2011-11-04 12:14 +0100 http://bitbucket.org/pypy/pypy/changeset/b797d85e31a7/ Log: refactor a bit the generation of functions that emit code for cmp operations diff --git a/pypy/jit/backend/arm/helper/assembler.py b/pypy/jit/backend/arm/helper/assembler.py --- a/pypy/jit/backend/arm/helper/assembler.py +++ b/pypy/jit/backend/arm/helper/assembler.py @@ -6,7 +6,8 @@ from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask from pypy.jit.metainterp.resoperation import rop -def gen_emit_op_unary_cmp(name, true_cond, false_cond): +def gen_emit_op_unary_cmp(name, true_cond): + false_cond = c.get_opposite_of(true_cond) def f(self, op, arglocs, regalloc, fcond): assert fcond is not None reg, res = arglocs @@ -17,7 +18,8 @@ f.__name__ = 'emit_op_%s' % name return f -def gen_emit_guard_unary_cmp(name, true_cond, false_cond): +def gen_emit_guard_unary_cmp(name, true_cond): + false_cond = c.get_opposite_of(true_cond) def f(self, op, guard, arglocs, regalloc, fcond): assert fcond is not None assert guard is not None @@ -27,8 +29,7 @@ guard_opnum = guard.getopnum() if guard_opnum == rop.GUARD_FALSE: cond = false_cond - self._emit_guard(guard, arglocs[1:], cond) - return fcond + return self._emit_guard(guard, arglocs[1:], cond) f.__name__ = 'emit_guard_%s' % name return f @@ -61,10 +62,10 @@ return f def gen_emit_cmp_op(name, condition): + inv = c.get_opposite_of(condition) def f(self, op, arglocs, regalloc, fcond): l0, l1, res = arglocs - inv = c.get_opposite_of(condition) if l1.is_imm(): self.mc.CMP_ri(l0.value, imm=l1.getint(), cond=fcond) else: @@ -75,22 +76,23 @@ f.__name__ = 'emit_op_%s' % name return f -def gen_emit_cmp_op_guard(name, condition): +def gen_emit_cmp_op_guard(name, true_cond): + false_cond = c.get_opposite_of(true_cond) def f(self, op, guard, arglocs, regalloc, fcond): + assert guard is not None l0 = arglocs[0] l1 = arglocs[1] + assert l0.is_reg() - inv = c.get_opposite_of(condition) if l1.is_imm(): self.mc.CMP_ri(l0.value, imm=l1.getint(), cond=fcond) else: self.mc.CMP_rr(l0.value, l1.value, cond=fcond) guard_opnum = guard.getopnum() - cond = condition + cond = true_cond if guard_opnum == rop.GUARD_FALSE: - cond = inv - self._emit_guard(guard, arglocs[2:], cond) - return fcond + cond = false_cond + return self._emit_guard(guard, arglocs[2:], cond) f.__name__ = 'emit_guard_%s' % name return f @@ -112,9 +114,9 @@ return f def gen_emit_float_cmp_op(name, cond): + inv = c.get_opposite_of(cond) def f(self, op, arglocs, regalloc, fcond): arg1, arg2, res = arglocs - inv = c.get_opposite_of(cond) self.mc.VCMP(arg1.value, arg2.value) self.mc.VMRS(cond=fcond) self.mc.MOV_ri(res.value, 1, cond=cond) @@ -123,19 +125,19 @@ f.__name__ = 'emit_op_%s' % name return f -def gen_emit_float_cmp_op_guard(name, guard_cond): +def gen_emit_float_cmp_op_guard(name, true_cond): + false_cond = c.get_opposite_of(true_cond) def f(self, op, guard, arglocs, regalloc, fcond): + assert guard is not None arg1 = arglocs[0] arg2 = arglocs[1] - inv = c.get_opposite_of(guard_cond) self.mc.VCMP(arg1.value, arg2.value) self.mc.VMRS(cond=fcond) - cond = guard_cond + cond = true_cond guard_opnum = guard.getopnum() if guard_opnum == rop.GUARD_FALSE: - cond = inv - self._emit_guard(guard, arglocs[2:], cond) - return fcond + cond = false_cond + return self._emit_guard(guard, arglocs[2:], cond) f.__name__ = 'emit_guard_%s' % name return f diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py --- a/pypy/jit/backend/arm/helper/regalloc.py +++ b/pypy/jit/backend/arm/helper/regalloc.py @@ -107,17 +107,12 @@ def f(self, op, guard_op, fcond): assert fcond is not None boxes = list(op.getarglist()) - if not inverse: - arg0, arg1 = boxes - else: - arg1, arg0 = boxes - # XXX consider swapping argumentes if arg0 is const - imm_a0 = _check_imm_arg(arg0) + arg0, arg1 = boxes imm_a1 = _check_imm_arg(arg1) l0, box = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) boxes.append(box) - if imm_a1 and not imm_a0: + if imm_a1: l1 = self.make_sure_var_in_reg(arg1, boxes) else: l1, box = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -137,8 +137,6 @@ emit_op_int_ge = gen_emit_cmp_op('int_ge', c.GE) - emit_op_ptr_eq = emit_op_int_eq - emit_op_ptr_ne = emit_op_int_ne emit_guard_int_lt = gen_emit_cmp_op_guard('int_lt', c.LT) emit_guard_int_le = gen_emit_cmp_op_guard('int_le', c.LE) @@ -151,15 +149,14 @@ emit_op_uint_gt = gen_emit_cmp_op('uint_gt', c.HI) emit_op_uint_lt = gen_emit_cmp_op('uint_lt', c.LO) emit_op_uint_ge = gen_emit_cmp_op('uint_ge', c.HS) + + emit_guard_uint_le = gen_emit_cmp_op_guard('uint_le', c.LS) + emit_guard_uint_gt = gen_emit_cmp_op_guard('uint_gt', c.HI) emit_guard_uint_lt = gen_emit_cmp_op_guard('uint_lt', c.LO) emit_guard_uint_ge = gen_emit_cmp_op_guard('uint_ge', c.HS) - emit_guard_uint_le = gen_emit_cmp_op_guard('uint_le', c.LS) - emit_guard_uint_gt = gen_emit_cmp_op_guard('uint_gt', c.HI) - - emit_guard_uint_lt = gen_emit_cmp_op_guard('uint_lt', c.HI) - emit_guard_uint_ge = gen_emit_cmp_op_guard('uint_ge', c.LS) - + emit_op_ptr_eq = emit_op_int_eq + emit_op_ptr_ne = emit_op_int_ne emit_guard_ptr_eq = emit_guard_int_eq emit_guard_ptr_ne = emit_guard_int_ne @@ -172,11 +169,11 @@ _mixin_ = True - emit_op_int_is_true = gen_emit_op_unary_cmp('int_is_true', c.NE, c.EQ) - emit_op_int_is_zero = gen_emit_op_unary_cmp('int_is_zero', c.EQ, c.NE) + emit_op_int_is_true = gen_emit_op_unary_cmp('int_is_true', c.NE) + emit_op_int_is_zero = gen_emit_op_unary_cmp('int_is_zero', c.EQ) - emit_guard_int_is_true = gen_emit_guard_unary_cmp('int_is_true', c.NE, c.EQ) - emit_guard_int_is_zero = gen_emit_guard_unary_cmp('int_is_zero', c.EQ, c.NE) + emit_guard_int_is_true = gen_emit_guard_unary_cmp('int_is_true', c.NE) + emit_guard_int_is_zero = gen_emit_guard_unary_cmp('int_is_zero', c.EQ) def emit_op_int_invert(self, op, arglocs, regalloc, fcond): reg, res = arglocs @@ -193,7 +190,6 @@ _mixin_ = True - guard_size = 5*WORD def _emit_guard(self, op, arglocs, fcond, save_exc=False, is_guard_not_ivalidated=False): descr = op.getdescr() assert isinstance(descr, AbstractFailDescr) From noreply at buildbot.pypy.org Fri Nov 4 12:58:47 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 12:58:47 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Planning. Message-ID: <20111104115847.A2A79820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3959:93a337edf716 Date: 2011-11-04 12:58 +0100 http://bitbucket.org/pypy/extradoc/changeset/93a337edf716/ Log: Planning. diff --git a/sprintinfo/gothenburg-2011-2/planning.txt b/sprintinfo/gothenburg-2011-2/planning.txt new file mode 100644 --- /dev/null +++ b/sprintinfo/gothenburg-2011-2/planning.txt @@ -0,0 +1,29 @@ +people present: + Christian Timser + Hakan Ardo + + + +done so far: + + Christian works on win64 support, continuing the job started in Genua + + Hakan refactors unrolling: adds a TARGET resoperation that can be used + in the middle of loops, and that defines a possible JUMP target. + + Armin did random stuff including progress on the STM branch. + + Mark worked on specializing 2-tuples to contain int/floats/strings. + + Andrew Dalke and Sam Lade worked on the previous days on numpy + integration, looking at f2py + + +today: + + the TARGET resoperation: Hakan, Armin + + win64: Christian + + specialized 2-tuples: Mark, Anto + From noreply at buildbot.pypy.org Fri Nov 4 12:58:48 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 12:58:48 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20111104115848.EFCF9820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3960:8831869b67ff Date: 2011-11-04 12:58 +0100 http://bitbucket.org/pypy/extradoc/changeset/8831869b67ff/ Log: merge heads diff --git a/talk/iwtc11/benchmarks/iter/generator.py b/talk/iwtc11/benchmarks/iter/generator.py --- a/talk/iwtc11/benchmarks/iter/generator.py +++ b/talk/iwtc11/benchmarks/iter/generator.py @@ -55,6 +55,35 @@ for x, y in range2(w, h): sa += a[y*w + x] + x + y +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -87,18 +116,43 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/generator2.py b/talk/iwtc11/benchmarks/iter/generator2.py --- a/talk/iwtc11/benchmarks/iter/generator2.py +++ b/talk/iwtc11/benchmarks/iter/generator2.py @@ -30,6 +30,35 @@ for i in range1(len(a)): sa += a[i] + len(a) +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def _sum2d(a, w, h): sa = 0 for x, y in range2(w, h): @@ -87,18 +116,42 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/iterator.py b/talk/iwtc11/benchmarks/iter/iterator.py --- a/talk/iwtc11/benchmarks/iter/iterator.py +++ b/talk/iwtc11/benchmarks/iter/iterator.py @@ -82,6 +82,36 @@ for x, y in range2(w, h): sa += a[y*w + x] + x + y +def _mean1d(a): + sa = 0 + for i in range1(len(a)): + sa = (i*sa + a[i])/(i + 1.0); + +def _median1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in range1(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for x, y in range2(w, h): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -114,18 +144,42 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/mean1d.c b/talk/iwtc11/benchmarks/iter/mean1d.c new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/mean1d.c @@ -0,0 +1,25 @@ +#include +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i a[i]) { + sa -= 1.0/(i + 1.0); + } else if (sa < a[i]) { + sa += 1.0/(i + 1.0); + } + } + return sa; +} + +#define N 100000000 + +int main(int ac, char **av) { + double *a = malloc(N*sizeof(double)); + int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + +def _ripple1d(a): + sa = 0 + for i in xrange(len(a)): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + +def _ripple2d(a, w, h): + sa = 0 + for y in xrange(h): + for x in xrange(w): + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -77,18 +107,42 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" - +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/iter/result.txt b/talk/iwtc11/benchmarks/iter/result.txt new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/result.txt @@ -0,0 +1,84 @@ +gcc -O3 +sum1d: 1.28 +- 0.0 +sum2d: 1.282 +- 0.004472135955 +whsum2d: 1.348 +- 0.0148323969742 +wsum1d: 1.312 +- 0.00836660026534 +wsum2d: 1.296 +- 0.00894427191 +xsum1d: 2.67 +- 0.0 +xsum2d: 2.684 +- 0.00894427191 +xysum2d: 3.89 +- 0.00707106781187 +mean1d: 12.246 +- 0.0955510334847 +median1d: 8.712 +- 0.0383405790254 +ripple1d: 2.534 +- 0.0167332005307 +ripple2d: 2.644 +- 0.0219089023002 + +pypy iter/generator2.py +sum1d: 23.9832116127 +- 0.614888065755 +sum2d: 25.14532938 +- 0.539002370348 +whsum2d: 25.3205077648 +- 0.95213818417 +wsum1d: 23.9423354149 +- 0.350982347591 +wsum2d: 25.5328037739 +- 0.0682052173271 +xsum1d: 23.7376705647 +- 0.25634553829 +xsum2d: 24.7689536095 +- 0.0512726458591 +xysum2d: 25.1449195862 +- 0.16430452312 +mean1d: 31.7602347374 +- 0.427882906402 +median1d: 43.1415281773 +- 0.210466180126 +ripple1d: 34.0283002853 +- 0.499598282172 +ripple2d: 38.4699347973 +- 0.0901560447042 + +pypy iter/generator.py +sum1d: 23.7244842052 +- 0.0689331205409 +sum2d: 21.658352232 +- 0.416635728484 +whsum2d: 22.5176876068 +- 0.502224419925 +wsum1d: 23.8211816788 +- 0.266302896949 +wsum2d: 21.1811442852 +- 0.0340298556226 +xsum1d: 23.5302821636 +- 0.347050395147 +xsum2d: 21.3646360397 +- 0.0404815336251 +xysum2d: 23.3054399967 +- 0.605652073438 +mean1d: 29.9068798542 +- 0.137142642142 +median1d: 47.3418916225 +- 0.745256472188 +ripple1d: 38.7682027817 +- 0.151127654833 +ripple2d: 34.50409832 +- 0.450633025924 + +pypy iter/iterator.py +sum1d: 9.11433362961 +- 0.152338942619 +sum2d: 24.8545044422 +- 0.337170412246 +whsum2d: 25.8045747757 +- 0.20809202412 +wsum1d: 9.10523662567 +- 0.0244805405482 +wsum2d: 26.1566844463 +- 0.318886535207 +xsum1d: 9.19495682716 +- 0.0873697747873 +xsum2d: 25.3517719746 +- 0.164766505808 +xysum2d: 26.6187932014 +- 0.209184440299 +mean1d: 16.4915462017 +- 0.017852602834 +median1d: 20.7653402328 +- 0.0630841106192 +ripple1d: 17.4464035511 +- 0.0158743067755 +ripple2d: 39.4511544228 +- 0.627375567049 + +pypy iter/range.py +sum1d: 4.49761414528 +- 0.0188623565601 +sum2d: 4.55957078934 +- 0.00243949374013 +whsum2d: 5.00070867538 +- 0.00618486143797 +wsum1d: 4.49047336578 +- 0.00411149414617 +wsum2d: 4.96318297386 +- 0.00222332048187 +xsum1d: 4.49802703857 +- 0.00188882921078 +xsum2d: 4.9497563839 +- 0.00264963854777 +xysum2d: 5.36755475998 +- 0.0024734467877 +mean1d: 14.0295339584 +- 0.242603017308 +median1d: 13.3812539577 +- 0.219532477212 +ripple1d: 9.65058441162 +- 0.258182544452 +ripple2d: 17.3434608459 +- 0.254643240791 + +pypy iter/while.py +sum1d: 2.96192045212 +- 0.0202773262937 +sum2d: 4.09613256454 +- 0.00233141002671 +whsum2d: 4.1995736599 +- 0.00203621363823 +wsum1d: 3.02741799355 +- 0.00262930561514 +wsum2d: 4.09814844131 +- 0.00222148567149 +xsum1d: 3.31641759872 +- 0.00301746769052 +xsum2d: 4.09652075768 +- 0.00237008101856 +xysum2d: 4.10714039803 +- 0.00191674465195 +mean1d: 13.9958492279 +- 0.244810166895 +median1d: 14.8796311855 +- 0.242170910321 +ripple1d: 7.4315820694 +- 0.24302663505 +ripple2d: 12.0281677723 +- 0.262682059117 + diff --git a/talk/iwtc11/benchmarks/iter/ripple1d.c b/talk/iwtc11/benchmarks/iter/ripple1d.c new file mode 100644 --- /dev/null +++ b/talk/iwtc11/benchmarks/iter/ripple1d.c @@ -0,0 +1,30 @@ +#include +#include +#include + +double result; + +double sum(double *a, int n) { + int i; + double sa = 0; + for (i=0; i a[i]) { + sa -= 0.1; + } else if (sa < a[i]) { + sa += 0.1; + } + } + return sa; +} + +#define N 100000000 + +int main(int ac, char **av) { + double *a = malloc(N*sizeof(double)); + int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i +#include +#include + +double result; + +double sum(double *a, int w, int h) { + int x, y; + double sa = 0; + for (y=0; y a[y*w + x]) { + sa -= 0.1; + } else if (sa < a[y*w + x]) { + sa += 0.1; + } + } + return sa; +} + +#define W 10000 +#define H 10000 + +int main(int ac, char **av) { + double *a = malloc(W*H*sizeof(double)); + int i, n = atoi(av[1]); + double data[] = {-1.0, 1.0}; + for (i=0; i a[i]: + sa -= 1.0/(i + 1.0) + elif sa < a[i]: + sa += 1.0/(i + 1.0) + i += 1 + +def _ripple1d(a): + sa = i = 0 + while i < len(a): + if sa > a[i]: + sa -= 0.1 + elif sa < a[i]: + sa += 0.1 + i += 1 + +def _ripple2d(a, w, h): + sa = 0 + sa = y = 0 + while y < h: + x = 0 + while x < w: + if sa > a[y*w + x]: + sa -= 0.1 + elif sa < a[y*w + x]: + sa += 0.1 + x += 1 + y += 1 + def sum1d(args): run1d(args, _sum1d) return "sum1d" @@ -97,18 +134,43 @@ run2d(args, _xysum2d) return "xysum2d" -def run1d(args, f): - a = array('d', [1]) * 100000000 +def mean1d(args): + run1d(args, _mean1d, [1, -1]) + return "mean1d" + +def median1d(args): + run1d(args, _median1d, [1, -1]) + return "median1d" + +def ripple1d(args): + run1d(args, _ripple1d, [1, -1]) + return "ripple1d" + +def ripple2d(args): + run2d(args, _ripple2d, [1, -1]) + return "ripple2d" + +def run1d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a) return "sum1d" -def run2d(args, f): - a = array('d', [1]) * 100000000 +def run2d(args, f, data=None): + if data: + a = array('d', data) * (100000000/len(data)) + else: + a = array('d', [1]) * 100000000 n = int(args[0]) for i in xrange(n): f(a, 10000, 10000) return "sum1d" +if __name__ == '__main__': + import sys + eval(sys.argv[1])(sys.argv[2:]) diff --git a/talk/iwtc11/benchmarks/runiter.sh b/talk/iwtc11/benchmarks/runiter.sh --- a/talk/iwtc11/benchmarks/runiter.sh +++ b/talk/iwtc11/benchmarks/runiter.sh @@ -1,17 +1,16 @@ #!/bin/sh -BENCHMARKS="sum1d sum2d whsum2d wsum1d wsum2d xsum1d xsum2d xysum2d" - +BENCHMARKS="sum1d sum2d whsum2d wsum1d wsum2d xsum1d xsum2d xysum2d mean1d median1d ripple1d ripple2d" echo gcc -O3 for b in $BENCHMARKS; do - echo ./runner.py -n 5 -c "gcc -O3" iter/$b.c 10 + ./runner.py -n 5 -c "gcc -O3" iter/$b.c 10 done echo for p in iter/*.py; do echo pypy $p for b in $BENCHMARKS; do - pypy ./runner.py -n 5 $p $b 10 + /tmp/pypy-trunk ./runner.py -n 5 $p $b 10 done echo done \ No newline at end of file From noreply at buildbot.pypy.org Fri Nov 4 14:12:13 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 14:12:13 +0100 (CET) Subject: [pypy-commit] pypy default: similarly simplify some unicode code Message-ID: <20111104131213.951CE820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48750:256f96157565 Date: 2011-11-04 14:08 +0100 http://bitbucket.org/pypy/pypy/changeset/256f96157565/ Log: similarly simplify some unicode code diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,32 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value start, end = slicetype.unwrap_start_stop( space, len(self), w_start, w_end, upper_bound) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -509,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -615,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) From noreply at buildbot.pypy.org Fri Nov 4 14:46:11 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 4 Nov 2011 14:46:11 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: disabled errno check on win64 and py 2.7.2 - ctypes bug Message-ID: <20111104134611.9CF8E820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48751:96b9f3051fa4 Date: 2011-11-04 14:44 +0100 http://bitbucket.org/pypy/pypy/changeset/96b9f3051fa4/ Log: disabled errno check on win64 and py 2.7.2 - ctypes bug diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -177,8 +177,17 @@ assert max_n >= 0 ITEM = A.OF ctypes_item = get_ctypes_type(ITEM, delayed_builders) + # Python 2.5 ctypes can raise OverflowError on 64-bit builds + for n in [sys.maxint, 2**31]: + MAX_SIZE = n/64 + try: + PtrType = ctypes.POINTER(MAX_SIZE * ctypes_item) except (OverflowError, AttributeError), e: pass # ^^^ bah, blame ctypes + else: + break + else: + raise e class CArray(ctypes.Structure): if not A._hints.get('nolength'): @@ -210,6 +219,9 @@ cls._ptrtype = ctypes.POINTER(cls.MAX_SIZE * ctypes_item) except OverflowError, e: pass + except AttributeError, e: + pass # XXX win64 failure and segfault, afterwards: + # AttributeError: class must define a '_length_' attribute, which must be a positive integer else: break else: diff --git a/pypy/rpython/lltypesystem/test/test_ll2ctypes.py b/pypy/rpython/lltypesystem/test/test_ll2ctypes.py --- a/pypy/rpython/lltypesystem/test/test_ll2ctypes.py +++ b/pypy/rpython/lltypesystem/test/test_ll2ctypes.py @@ -742,6 +742,8 @@ eci = ExternalCompilationInfo(includes=['string.h']) if sys.platform.startswith('win'): underscore_on_windows = '_' + if sys.version.startswith('2.7.2 '): + py.test.skip('ctypes is buggy. errno crashes with win64 and python 2.7.2') else: underscore_on_windows = '' strlen = rffi.llexternal('strlen', [rffi.CCHARP], rffi.SIZE_T, From noreply at buildbot.pypy.org Fri Nov 4 14:53:43 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 4 Nov 2011 14:53:43 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: disabled errno check on win64 and py 2.7.2 - ctypes bug Message-ID: <20111104135343.4360C820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48752:bed6d5ad754f Date: 2011-11-04 14:52 +0100 http://bitbucket.org/pypy/pypy/changeset/bed6d5ad754f/ Log: disabled errno check on win64 and py 2.7.2 - ctypes bug diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -113,6 +113,33 @@ """A string built with newstr(const).""" _lengthbox = None # cache only + # Warning: an issue with VStringPlainValue is that sometimes it is + # initialized unpredictably by some copystrcontent. When this occurs + # we set self._chars to None. Be careful to check for is_valid(). + + def is_valid(self): + return self._chars is not None + + def _invalidate(self): + assert self.is_valid() + if self._lengthbox is None: + self._lengthbox = ConstInt(len(self._chars)) + self._chars = None + + def _really_force(self, optforce): + VAbstractStringValue._really_force(self, optforce) + assert self.box is not None + if self.is_valid(): + for c in self._chars: + if c is optimizer.CVAL_UNINITIALIZED_ZERO: + # the string has uninitialized null bytes in it, so + # assume that it is forced for being further mutated + # (e.g. by copystrcontent). So it becomes invalid + # as a VStringPlainValue: the _chars must not be used + # any longer. + self._invalidate() + break + def setup(self, size): self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size @@ -134,6 +161,8 @@ @specialize.arg(1) def get_constant_string_spec(self, mode): + if not self.is_valid(): + return None for c in self._chars: if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): return None @@ -141,11 +170,9 @@ for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_valid(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) for i in range(len(self._chars)): charbox = self._chars[i].force_box(string_optimizer) if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): @@ -158,6 +185,7 @@ def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): + assert self.is_valid() charboxes = [value.get_key_box() for value in self._chars] modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: @@ -373,7 +401,8 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) - if value.is_virtual() and isinstance(value, VStringPlainValue): + if (value.is_virtual() and isinstance(value, VStringPlainValue) + and value.is_valid()): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: value.setitem(indexbox.getint(), self.getvalue(op.getarg(2))) @@ -404,13 +433,10 @@ value = value.vstr vindex = self.getvalue(fullindexbox) # - if isinstance(value, VStringPlainValue): # even if no longer virtual + if (isinstance(value, VStringPlainValue) # even if no longer virtual + and value.is_valid()): # but make sure it is valid if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + return value.getitem(vindex.box.getint()) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -503,19 +529,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + if (isinstance(vstr, VStringPlainValue) and vstr.is_valid() + and vstart.is_constant() and vstop.is_constant()): + value = self.make_vstring_plain(op.result, op, mode) + value.setup_slice(vstr._chars, vstart.box.getint(), + vstop.box.getint()) + return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -362,7 +362,7 @@ def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start - 1, -self.step)) + self.start, -self.step, True)) def descr_reduce(self): space = self.space @@ -389,21 +389,26 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, stop, step, inclusive=False): self.space = space self.current = start self.stop = stop self.step = step + self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): - item = self.current - self.current = item + self.step - return self.space.wrap(item) - raise OperationError(self.space.w_StopIteration, self.space.w_None) + if self.inclusive: + if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + else: + if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + item = self.current + self.current = item + self.step + return self.space.wrap(item) #def descr_len(self): # return self.space.wrap(self.remaining) diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,7 +157,8 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - + assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +420,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,42 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1321,7 +1324,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Fri Nov 4 15:28:23 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 15:28:23 +0100 (CET) Subject: [pypy-commit] pypy default: also add a test for indices4 Message-ID: <20111104142823.A7551820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48753:20285b132dca Date: 2011-11-04 15:28 +0100 http://bitbucket.org/pypy/pypy/changeset/20285b132dca/ Log: also add a test for indices4 diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): From noreply at buildbot.pypy.org Fri Nov 4 16:11:00 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 4 Nov 2011 16:11:00 +0100 (CET) Subject: [pypy-commit] pypy default: (arigato mostly) fix optimizeopt tests that had the arguments to copystrcontent in the wrong order Message-ID: <20111104151100.9824C820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r48754:0bfda46664f9 Date: 2011-11-04 11:10 -0400 http://bitbucket.org/pypy/pypy/changeset/0bfda46664f9/ Log: (arigato mostly) fix optimizeopt tests that had the arguments to copystrcontent in the wrong order diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7407,7 +7407,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7434,7 +7434,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) From noreply at buildbot.pypy.org Fri Nov 4 16:12:34 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 16:12:34 +0100 (CET) Subject: [pypy-commit] pypy default: Add asserts and fix a test. The main point of the asserts is to Message-ID: <20111104151234.7EB81820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48755:02a2ce422c4c Date: 2011-11-04 16:09 +0100 http://bitbucket.org/pypy/pypy/changeset/02a2ce422c4c/ Log: Add asserts and fix a test. The main point of the asserts is to catch obscure cases where we generate a residual operation STRSETITEM(ConstPtr(..), ..), which never makes sense. This breaks on running "pypy translate.py". Will try to figure out why. diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -174,6 +174,7 @@ return VAbstractStringValue.string_copy_parts( self, string_optimizer, targetbox, offsetbox, mode) for i in range(len(self._chars)): + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense charbox = self._chars[i].force_box(string_optimizer) if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, @@ -305,6 +306,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -315,6 +317,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -401,6 +404,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if (value.is_virtual() and isinstance(value, VStringPlainValue) and value.is_valid()): indexbox = self.get_constant_box(op.getarg(1)) @@ -458,6 +462,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) From noreply at buildbot.pypy.org Fri Nov 4 16:12:35 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 16:12:35 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111104151235.B2B65820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48756:a42afd412616 Date: 2011-11-04 16:12 +0100 http://bitbucket.org/pypy/pypy/changeset/a42afd412616/ Log: merge heads diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7407,7 +7407,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7434,7 +7434,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,32 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value start, end = slicetype.unwrap_start_stop( space, len(self), w_start, w_end, upper_bound) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -509,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -615,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) From noreply at buildbot.pypy.org Fri Nov 4 16:35:46 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 4 Nov 2011 16:35:46 +0100 (CET) Subject: [pypy-commit] pypy default: negative bools??! Message-ID: <20111104153546.4AD39820B3@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r48757:43676b026018 Date: 2011-11-04 16:34 +0100 http://bitbucket.org/pypy/pypy/changeset/43676b026018/ Log: negative bools??! diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) From noreply at buildbot.pypy.org Fri Nov 4 19:09:50 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 4 Nov 2011 19:09:50 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: added file to memorize things to do. Message-ID: <20111104180950.ECA1C820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48758:ac60a3708bc3 Date: 2011-11-04 19:08 +0100 http://bitbucket.org/pypy/pypy/changeset/ac60a3708bc3/ Log: added file to memorize things to do. diff --git a/pypy/doc/discussion/win64_todo.txt b/pypy/doc/discussion/win64_todo.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/discussion/win64_todo.txt @@ -0,0 +1,4 @@ +20011-11-4 +ll_os.py has a problem with the file rwin32.py. +Temporarily disabled for the win64_gborg branch. This needs to be +investigated and re-enabled. \ No newline at end of file From noreply at buildbot.pypy.org Fri Nov 4 19:12:56 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 4 Nov 2011 19:12:56 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: temporarily disabled import of rwin32 Message-ID: <20111104181256.72953820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48759:4acfd6f8e884 Date: 2011-11-04 19:12 +0100 http://bitbucket.org/pypy/pypy/changeset/4acfd6f8e884/ Log: temporarily disabled import of rwin32 diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -1738,7 +1738,8 @@ # ____________________________________________________________ # Support for the WindowsError exception -if sys.platform == 'win32': +# XXX temporarily disabled +if 0 and sys.platform == 'win32': from pypy.rlib import rwin32 class RegisterFormatError(BaseLazyRegistering): From noreply at buildbot.pypy.org Fri Nov 4 19:14:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 19:14:12 +0100 (CET) Subject: [pypy-commit] pypy default: Change VStringPlainValue, refactoring and giving a long Message-ID: <20111104181412.B5AF5820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48760:23f5428f3b52 Date: 2011-11-04 18:07 +0100 http://bitbucket.org/pypy/pypy/changeset/23f5428f3b52/ Log: Change VStringPlainValue, refactoring and giving a long explanation of the meaning of '_chars' and when it contains None values. It simplifies some code I did earlier today, and hopefully it makes vstring.py safe now. At worst it should now crash in an assert that tries to do one of now-forbidden operations. diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -247,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -106,46 +106,33 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): """A string built with newstr(const).""" _lengthbox = None # cache only - # Warning: an issue with VStringPlainValue is that sometimes it is - # initialized unpredictably by some copystrcontent. When this occurs - # we set self._chars to None. Be careful to check for is_valid(). - - def is_valid(self): - return self._chars is not None - - def _invalidate(self): - assert self.is_valid() - if self._lengthbox is None: - self._lengthbox = ConstInt(len(self._chars)) - self._chars = None - - def _really_force(self, optforce): - VAbstractStringValue._really_force(self, optforce) - assert self.box is not None - if self.is_valid(): - for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO: - # the string has uninitialized null bytes in it, so - # assume that it is forced for being further mutated - # (e.g. by copystrcontent). So it becomes invalid - # as a VStringPlainValue: the _chars must not be used - # any longer. - self._invalidate() - break - def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -153,44 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): - if not self.is_valid(): - return None for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_valid(): + if not self.is_virtual() and not self.is_completely_initialized(): return VAbstractStringValue.string_copy_parts( self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - assert self.is_valid() - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -405,8 +414,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) assert not value.is_constant() # strsetitem(ConstPtr) never makes sense - if (value.is_virtual() and isinstance(value, VStringPlainValue) - and value.is_valid()): + if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: value.setitem(indexbox.getint(), self.getvalue(op.getarg(2))) @@ -437,10 +445,11 @@ value = value.vstr vindex = self.getvalue(fullindexbox) # - if (isinstance(value, VStringPlainValue) # even if no longer virtual - and value.is_valid()): # but make sure it is valid + if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - return value.getitem(vindex.box.getint()) + result = value.getitem(vindex.box.getint()) + if result is not None: + return result # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -538,12 +547,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstr.is_valid() - and vstart.is_constant() and vstop.is_constant()): - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): From noreply at buildbot.pypy.org Fri Nov 4 20:57:09 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 Nov 2011 20:57:09 +0100 (CET) Subject: [pypy-commit] pypy default: Another case: strgetitem on a VStringConcatValue can be resolved Message-ID: <20111104195709.1EF08820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48761:98bf21b80fc5 Date: 2011-11-04 20:56 +0100 http://bitbucket.org/pypy/pypy/changeset/98bf21b80fc5/ Log: Another case: strgetitem on a VStringConcatValue can be resolved if we know statically on which half of the two-parts string it is done. Could be improved in theory by using intbounds analysis... diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -451,6 +451,17 @@ if result is not None: return result # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) + # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) From noreply at buildbot.pypy.org Fri Nov 4 21:14:44 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 4 Nov 2011 21:14:44 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge default Message-ID: <20111104201444.871EC820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48762:67b1506999f5 Date: 2011-11-04 07:43 +0100 http://bitbucket.org/pypy/pypy/changeset/67b1506999f5/ Log: hg merge default diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -362,7 +362,7 @@ def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start - 1, -self.step)) + self.start, -self.step, True)) def descr_reduce(self): space = self.space @@ -389,21 +389,26 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, stop, step, inclusive=False): self.space = space self.current = start self.stop = stop self.step = step + self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): - item = self.current - self.current = item + self.step - return self.space.wrap(item) - raise OperationError(self.space.w_StopIteration, self.space.w_None) + if self.inclusive: + if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + else: + if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + item = self.current + self.current = item + self.step + return self.space.wrap(item) #def descr_len(self): # return self.space.wrap(self.remaining) diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,7 +157,8 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - + assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Fri Nov 4 21:14:45 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 4 Nov 2011 21:14:45 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: refactor unrolling to use the new target resop Message-ID: <20111104201445.C9887820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48763:9ef690e84b21 Date: 2011-11-04 21:14 +0100 http://bitbucket.org/pypy/pypy/changeset/9ef690e84b21/ Log: refactor unrolling to use the new target resop diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -766,10 +766,10 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - pass + def __init__(self): + self.exported_state = None class TreeLoop(object): - inputargs = None operations = None token = None call_pure_results = None @@ -778,11 +778,24 @@ def __init__(self, name): self.name = name - # self.inputargs = list of distinct Boxes # self.operations = list of ResOperations # ops of the kind 'guard_xxx' contain a further list of operations, # which may itself contain 'guard_xxx' and so on, making a tree. + _inputargs = None + + def get_inputargs(self): + "NOT_RPYTHON" + if self._inputargs is not None: + return self._inputargs + assert self.operations[0].getopnum() == rop.TARGET + return self.operations[0].getarglist() + + def set_inputargs(self, inputargs): + self._inputargs = inputargs + + inputargs = property(get_inputargs, set_inputargs) + def _all_operations(self, omit_finish=False): "NOT_RPYTHON" result = [] @@ -801,7 +814,7 @@ return self.operations def get_display_text(self): # for graphpage.py - return self.name + '\n' + repr(self.inputargs) + return self.name def show(self, errmsg=None): "NOT_RPYTHON" @@ -810,15 +823,13 @@ def check_consistency(self): # for testing "NOT_RPYTHON" - self.check_consistency_of(self.inputargs, self.operations) + self.check_consistency_of(self.operations) @staticmethod - def check_consistency_of(inputargs, operations): - for box in inputargs: - assert isinstance(box, Box), "Loop.inputargs contains %r" % (box,) + def check_consistency_of(operations): + assert operations[0].getopnum() == rop.TARGET + inputargs = operations[0].getarglist() seen = dict.fromkeys(inputargs) - assert len(seen) == len(inputargs), ( - "duplicate Box in the Loop.inputargs") TreeLoop.check_consistency_of_branch(operations, seen) @staticmethod @@ -845,6 +856,14 @@ assert isinstance(box, Box) assert box not in seen seen[box] = True + if op.getopnum() == rop.TARGET: + inputargs = op.getarglist() + for box in inputargs: + assert isinstance(box, Box), "TARGET contains %r" % (box,) + seen = dict.fromkeys(inputargs) + assert len(seen) == len(inputargs), ( + "duplicate Box in the TARGET arguments") + assert operations[-1].is_final() if operations[-1].getopnum() == rop.JUMP: target = operations[-1].getdescr() @@ -853,7 +872,7 @@ def dump(self): # RPython-friendly - print '%r: inputargs =' % self, self._dump_args(self.inputargs) + print '%r: ' % self for op in self.operations: args = op.getarglist() print '\t', op.getopname(), self._dump_args(args), \ diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -80,3 +80,14 @@ if __name__ == '__main__': print ALL_OPTS_NAMES + +def optimize_trace(metainterp_sd, loop, enable_opts): + """Optimize loop.operations to remove internal overheadish operations. + """ + + optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts, False, False) + if unroll: + optimize_unroll(metainterp_sd, loop, optimizations) + else: + optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer.propagate_all_forward() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -497,9 +497,10 @@ else: return CVAL_ZERO - def propagate_all_forward(self): + def propagate_all_forward(self, clear=True): self.exception_might_have_happened = self.bridge - self.clear_newoperations() + if clear: + self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) self.loop.operations = self.get_newoperations() @@ -556,6 +557,7 @@ def store_final_boxes_in_guard(self, op): descr = op.getdescr() + print 'HHHHHHHHHHHH', descr, id(descr) assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) newboxes = modifier.finish(self.values, self.pendingfields) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7,7 +7,7 @@ from pypy.jit.metainterp.optimizeopt import optimize_loop_1, ALL_OPTS_DICT, build_opt_chain from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt -from pypy.jit.metainterp.history import TreeLoop, LoopToken +from pypy.jit.metainterp.history import TreeLoop, LoopToken, TargetToken from pypy.jit.metainterp.jitprof import EmptyProfiler from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation @@ -15,7 +15,7 @@ from pypy.jit.metainterp.optimizeopt.util import args_dict from pypy.jit.metainterp.optimizeopt.test.test_optimizebasic import FakeMetaInterpStaticData from pypy.config.pypyoption import get_pypy_config - +from pypy.jit.metainterp.optimizeopt.unroll import Inliner def test_build_opt_chain(): def check(chain, expected_names): @@ -79,43 +79,83 @@ expected_preamble = self.parse(expected_preamble) if expected_short: expected_short = self.parse(expected_short) - loop.preamble = TreeLoop('preamble') - loop.preamble.inputargs = loop.inputargs - loop.preamble.token = LoopToken() - loop.preamble.start_resumedescr = FakeDescr() - # + operations = loop.operations + cloned_operations = [op.clone() for op in operations] + + preamble = TreeLoop('preamble') + #loop.preamble.inputargs = loop.inputargs + #loop.preamble.token = LoopToken() + preamble.start_resumedescr = FakeDescr() + assert operations[-1].getopnum() == rop.JUMP + inputargs = loop.inputargs + jump_args = operations[-1].getarglist() + targettoken = TargetToken() + operations[-1].setdescr(targettoken) + cloned_operations[-1].setdescr(targettoken) + preamble.operations = [ResOperation(rop.TARGET, inputargs, None, descr=TargetToken())] + \ + operations[:-1] + \ + [ResOperation(rop.TARGET, jump_args, None, descr=targettoken)] + self._do_optimize_loop(preamble, call_pure_results) + + jump_args = preamble.operations[-1].getdescr().exported_state.jump_args # FIXME!! + inliner = Inliner(inputargs, jump_args) + loop.inputargs = None + loop.start_resumedescr = preamble.start_resumedescr + loop.operations = [preamble.operations[-1]] + \ + [inliner.inline_op(op, clone=False) for op in cloned_operations] self._do_optimize_loop(loop, call_pure_results) + extra_same_as = [] + while loop.operations[0].getopnum() != rop.TARGET: + extra_same_as.append(loop.operations[0]) + del loop.operations[0] + + # Hack to prevent random order of same_as ops + extra_same_as.sort(key=lambda op: str(preamble.operations).find(str(op.getarg(0)))) + + for op in extra_same_as: + preamble.operations.insert(-1, op) + # print print "Preamble:" - print loop.preamble.inputargs - if loop.preamble.operations: - print '\n'.join([str(o) for o in loop.preamble.operations]) + if preamble.operations: + print '\n'.join([str(o) for o in preamble.operations]) else: print 'Failed!' print print "Loop:" - print loop.inputargs print '\n'.join([str(o) for o in loop.operations]) print if expected_short: print "Short Preamble:" - short = loop.preamble.token.short_preamble[0] + short = loop.token.short_preamble[0] print short.inputargs print '\n'.join([str(o) for o in short.operations]) print assert expected != "crash!", "should have raised an exception" - self.assert_equal(loop, expected) + self.assert_equal(loop, convert_old_style_to_targets(expected, jump=True)) + assert loop.operations[0].getdescr() == loop.operations[-1].getdescr() if expected_preamble: - self.assert_equal(loop.preamble, expected_preamble, + self.assert_equal(preamble, convert_old_style_to_targets(expected_preamble, jump=False), text_right='expected preamble') + assert preamble.operations[-1].getdescr() == loop.operations[0].getdescr() if expected_short: - self.assert_equal(short, expected_short, + self.assert_equal(short, convert_old_style_to_targets(expected_short, jump=True), text_right='expected short preamble') + assert short.operations[-1].getdescr() == loop.operations[0].getdescr() return loop +def convert_old_style_to_targets(loop, jump): + newloop = TreeLoop(loop.name) + newloop.operations = [ResOperation(rop.TARGET, loop.inputargs, None, descr=FakeDescr())] + \ + loop.operations + if not jump: + assert newloop.operations[-1].getopnum() == rop.JUMP + newloop.operations[-1] = ResOperation(rop.TARGET, newloop.operations[-1].getarglist(), None, descr=FakeDescr()) + return newloop + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -234,7 +274,7 @@ """ % expected_value self.optimize_loop(ops, expected) - def test_reverse_of_cast(self): + def test_reverse_of_cast_1(self): ops = """ [i0] p0 = cast_int_to_ptr(i0) @@ -246,6 +286,8 @@ jump(i0) """ self.optimize_loop(ops, expected) + + def test_reverse_of_cast_2(self): ops = """ [p0] i1 = cast_ptr_to_int(p0) @@ -1166,6 +1208,7 @@ i1 = getfield_gc(p0, descr=valuedescr) i2 = int_sub(i1, 1) i3 = int_add(i0, i1) + i4 = same_as(i2) # This same_as should be killed by backend jump(i3, i2, i1) """ expected = """ @@ -1233,6 +1276,7 @@ p30 = new_with_vtable(ConstClass(node_vtable)) setfield_gc(p30, i28, descr=nextdescr) setfield_gc(p3, p30, descr=valuedescr) + p46 = same_as(p30) # This same_as should be killed by backend jump(i29, p30, p3) """ expected = """ @@ -1240,8 +1284,8 @@ i28 = int_add(i0, 1) i29 = int_add(i28, 1) p30 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p30, i28, descr=nextdescr) setfield_gc(p3, p30, descr=valuedescr) - setfield_gc(p30, i28, descr=nextdescr) jump(i29, p30, p3) """ self.optimize_loop(ops, expected, preamble) @@ -2034,7 +2078,9 @@ guard_true(i3) [] i4 = int_neg(i2) setfield_gc(p1, i2, descr=valuedescr) - jump(p1, i1, i2, i4, i4) + i7 = same_as(i2) # This same_as should be killed by backend + i6 = same_as(i4) + jump(p1, i1, i2, i4, i6) """ expected = """ [p1, i1, i2, i4, i5] @@ -2064,7 +2110,8 @@ i4 = int_neg(i2) setfield_gc(p1, NULL, descr=nextdescr) escape() - jump(p1, i2, i4, i4) + i5 = same_as(i4) + jump(p1, i2, i4, i5) """ expected = """ [p1, i2, i4, i5] @@ -2093,7 +2140,8 @@ i4 = int_neg(i2) setfield_gc(p1, NULL, descr=nextdescr) escape() - jump(p1, i2, i4, i4) + i5 = same_as(i4) + jump(p1, i2, i4, i5) """ expected = """ [p1, i2, i4, i5] @@ -2123,7 +2171,9 @@ guard_true(i5) [] i4 = int_neg(i2) setfield_gc(p1, i2, descr=valuedescr) - jump(p1, i1, i2, i4, i4) + i8 = same_as(i2) # This same_as should be killed by backend + i7 = same_as(i4) + jump(p1, i1, i2, i4, i7) """ expected = """ [p1, i1, i2, i4, i7] @@ -2349,7 +2399,8 @@ p2 = new_with_vtable(ConstClass(node_vtable)) setfield_gc(p2, p4, descr=nextdescr) setfield_gc(p1, p2, descr=nextdescr) - jump(p1, i2, i4, p4, i4) + i101 = same_as(i4) + jump(p1, i2, i4, p4, i101) """ expected = """ [p1, i2, i4, p4, i5] @@ -3192,7 +3243,15 @@ setfield_gc(p1, i3, descr=valuedescr) jump(p1, i4, i3) ''' - self.optimize_loop(ops, ops, ops) + preamble = ''' + [p1, i1, i4] + setfield_gc(p1, i1, descr=valuedescr) + i3 = call_assembler(i1, descr=asmdescr) + setfield_gc(p1, i3, descr=valuedescr) + i143 = same_as(i3) # Should be killed by backend + jump(p1, i4, i3) + ''' + self.optimize_loop(ops, ops, preamble) def test_call_assembler_invalidates_heap_knowledge(self): ops = ''' @@ -3223,7 +3282,9 @@ setfield_gc(p1, i1, descr=valuedescr) i3 = call(p1, descr=plaincalldescr) setfield_gc(p1, i3, descr=valuedescr) - jump(p1, i4, i3, i3) + i148 = same_as(i3) + i147 = same_as(i3) + jump(p1, i4, i3, i148) ''' self.optimize_loop(ops, expected, preamble) @@ -3246,7 +3307,8 @@ setfield_gc(p1, i1, descr=valuedescr) i3 = call(p1, descr=plaincalldescr) setfield_gc(p1, i1, descr=valuedescr) - jump(p1, i4, i3, i3) + i151 = same_as(i3) + jump(p1, i4, i3, i151) ''' self.optimize_loop(ops, expected, preamble) @@ -3266,7 +3328,8 @@ escape(i1) escape(i2) i4 = call(123456, 4, i0, 6, descr=plaincalldescr) - jump(i0, i4, i4) + i153 = same_as(i4) + jump(i0, i4, i153) ''' expected = ''' [i0, i4, i5] @@ -3296,7 +3359,8 @@ escape(i2) i4 = call(123456, 4, i0, 6, descr=plaincalldescr) guard_no_exception() [] - jump(i0, i4, i4) + i155 = same_as(i4) + jump(i0, i4, i155) ''' expected = ''' [i0, i2, i3] @@ -4114,6 +4178,7 @@ preamble = """ [p0] i0 = strlen(p0) + i3 = same_as(i0) # Should be killed by backend jump(p0) """ expected = """ @@ -5334,6 +5399,7 @@ [p0] p1 = getfield_gc(p0, descr=valuedescr) setfield_gc(p0, p0, descr=valuedescr) + p4450 = same_as(p0) # Should be killed by backend jump(p0) """ expected = """ @@ -5479,7 +5545,8 @@ p3 = newstr(i3) copystrcontent(p1, p3, 0, 0, i1) copystrcontent(p2, p3, 0, i1, i2) - jump(p2, p3, i2) + i7 = same_as(i2) + jump(p2, p3, i7) """ expected = """ [p1, p2, i1] @@ -5554,7 +5621,9 @@ copystrcontent(p1, p5, 0, 0, i1) copystrcontent(p2, p5, 0, i1, i2) copystrcontent(p3, p5, 0, i12, i3) - jump(p2, p3, p5, i2, i3) + i129 = same_as(i2) + i130 = same_as(i3) + jump(p2, p3, p5, i129, i130) """ expected = """ [p1, p2, p3, i1, i2] @@ -5614,7 +5683,8 @@ [p1, i1, i2, i3] escape(i3) i4 = int_sub(i2, i1) - jump(p1, i1, i2, i4, i4) + i5 = same_as(i4) + jump(p1, i1, i2, i4, i5) """ expected = """ [p1, i1, i2, i3, i4] @@ -5639,7 +5709,8 @@ escape(i5) i4 = int_sub(i2, i1) setfield_gc(p2, i4, descr=valuedescr) - jump(p1, i1, i2, p2, i4, i4) + i8 = same_as(i4) + jump(p1, i1, i2, p2, i8, i4) """ expected = """ [p1, i1, i2, p2, i5, i6] @@ -5765,7 +5836,8 @@ p4 = newstr(i5) copystrcontent(p1, p4, i1, 0, i3) copystrcontent(p2, p4, 0, i3, i4) - jump(p4, i1, i2, p2, i5, i3, i4) + i9 = same_as(i4) + jump(p4, i1, i2, p2, i5, i3, i9) """ expected = """ [p1, i1, i2, p2, i5, i3, i4] @@ -5887,7 +5959,9 @@ copystrcontent(p2, p4, 0, i1, i2) i0 = call(0, p3, p4, descr=strequaldescr) escape(i0) - jump(p1, p2, p3, i3, i1, i2) + i11 = same_as(i1) + i12 = same_as(i2) + jump(p1, p2, p3, i3, i11, i12) """ expected = """ [p1, p2, p3, i3, i1, i2] @@ -6107,6 +6181,7 @@ i1 = strlen(p1) i0 = int_eq(i1, 0) escape(i0) + i3 = same_as(i1) jump(p1, i0) """ self.optimize_strunicode_loop_extradescrs(ops, expected, preamble) @@ -6152,7 +6227,9 @@ copystrcontent(p2, p4, 0, i1, i2) i0 = call(0, s"hello world", p4, descr=streq_nonnull_descr) escape(i0) - jump(p1, p2, i3, i1, i2) + i11 = same_as(i1) + i12 = same_as(i2) + jump(p1, p2, i3, i11, i12) """ expected = """ [p1, p2, i3, i1, i2] @@ -6436,7 +6513,8 @@ p188 = getarrayitem_gc(p187, 42, descr=) guard_value(p188, ConstPtr(myptr)) [] p25 = getfield_gc(ConstPtr(myptr), descr=otherdescr) - jump(p25, p187, i184, p25) + p26 = same_as(p25) + jump(p25, p187, i184, p26) """ short = """ [p1, p187, i184] @@ -6705,7 +6783,8 @@ [p9] i843 = strlen(p9) call(i843, descr=nonwritedescr) - jump(p9, i843) + i0 = same_as(i843) + jump(p9, i0) """ short = """ [p9] @@ -7397,7 +7476,8 @@ call(i2, descr=nonwritedescr) setfield_gc(p22, i1, descr=valuedescr) guard_nonnull_class(p18, ConstClass(node_vtable)) [] - jump(p22, p18, i1, i1) + i10 = same_as(i1) + jump(p22, p18, i1, i10) """ short = """ [p22, p18, i1] diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -384,7 +384,7 @@ expected.operations, False, remap, text_right) def _do_optimize_loop(self, loop, call_pure_results): - from pypy.jit.metainterp.optimizeopt import optimize_loop_1 + from pypy.jit.metainterp.optimizeopt import optimize_trace from pypy.jit.metainterp.optimizeopt.util import args_dict self.loop = loop @@ -398,7 +398,7 @@ if hasattr(self, 'callinfocollection'): metainterp_sd.callinfocollection = self.callinfocollection # - optimize_loop_1(metainterp_sd, loop, self.enable_opts) + optimize_trace(metainterp_sd, loop, self.enable_opts) # ____________________________________________________________ diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -1,7 +1,7 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.optimizeopt.virtualstate import VirtualStateAdder, ShortBoxes from pypy.jit.metainterp.compile import ResumeGuardDescr -from pypy.jit.metainterp.history import TreeLoop, LoopToken +from pypy.jit.metainterp.history import TreeLoop, LoopToken, TargetToken from pypy.jit.metainterp.jitexc import JitException from pypy.jit.metainterp.optimize import InvalidLoop, RetraceLoop from pypy.jit.metainterp.optimizeopt.optimizer import * @@ -103,12 +103,8 @@ def __init__(self, metainterp_sd, loop, optimizations): self.optimizer = UnrollableOptimizer(metainterp_sd, loop, optimizations) - self.cloned_operations = [] - for op in self.optimizer.loop.operations: - newop = op.clone() - self.cloned_operations.append(newop) - def fix_snapshot(self, loop, jump_args, snapshot): + def fix_snapshot(self, jump_args, snapshot): if snapshot is None: return None snapshot_args = snapshot.boxes @@ -116,233 +112,170 @@ for a in snapshot_args: a = self.getvalue(a).get_key_box() new_snapshot_args.append(a) - prev = self.fix_snapshot(loop, jump_args, snapshot.prev) + prev = self.fix_snapshot(jump_args, snapshot.prev) return Snapshot(prev, new_snapshot_args) def propagate_all_forward(self): loop = self.optimizer.loop - jumpop = loop.operations[-1] - if jumpop.getopnum() == rop.JUMP: + start_targetop = loop.operations[0] + assert start_targetop.getopnum() == rop.TARGET + loop.operations = loop.operations[1:] + self.optimizer.clear_newoperations() + self.optimizer.send_extra_operation(start_targetop) + + self.import_state(start_targetop) + + lastop = loop.operations[-1] + if lastop.getopnum() == rop.TARGET or lastop.getopnum() == rop.JUMP: loop.operations = loop.operations[:-1] - else: - loopop = None - - self.optimizer.propagate_all_forward() - - - if jumpop: - assert jumpop.getdescr() is loop.token - jump_args = jumpop.getarglist() - jumpop.initarglist([]) + #FIXME: FINISH + + self.optimizer.propagate_all_forward(clear=False) + + if lastop.getopnum() == rop.TARGET: self.optimizer.flush() - KillHugeIntBounds(self.optimizer).apply() - - loop.preamble.operations = self.optimizer.get_newoperations() - jump_args = [self.getvalue(a).get_key_box() for a in jump_args] - - start_resumedescr = loop.preamble.start_resumedescr.clone_if_mutable() - self.start_resumedescr = start_resumedescr - assert isinstance(start_resumedescr, ResumeGuardDescr) - start_resumedescr.rd_snapshot = self.fix_snapshot(loop, jump_args, - start_resumedescr.rd_snapshot) - - modifier = VirtualStateAdder(self.optimizer) - virtual_state = modifier.get_virtual_state(jump_args) - - values = [self.getvalue(arg) for arg in jump_args] - inputargs = virtual_state.make_inputargs(values, self.optimizer) - short_inputargs = virtual_state.make_inputargs(values, self.optimizer, - keyboxes=True) - - self.constant_inputargs = {} - for box in jump_args: - const = self.get_constant_box(box) - if const: - self.constant_inputargs[box] = const - - sb = ShortBoxes(self.optimizer, inputargs + self.constant_inputargs.keys()) - self.short_boxes = sb - preamble_optimizer = self.optimizer - loop.preamble.quasi_immutable_deps = ( - self.optimizer.quasi_immutable_deps) - self.optimizer = self.optimizer.new() - loop.quasi_immutable_deps = self.optimizer.quasi_immutable_deps - - logops = self.optimizer.loop.logops - if logops: - args = ", ".join([logops.repr_of_arg(arg) for arg in inputargs]) - debug_print('inputargs: ' + args) - args = ", ".join([logops.repr_of_arg(arg) for arg in short_inputargs]) - debug_print('short inputargs: ' + args) - self.short_boxes.debug_print(logops) - - - # Force virtuals amoung the jump_args of the preamble to get the - # operations needed to setup the proper state of those virtuals - # in the peeled loop - inputarg_setup_ops = [] - preamble_optimizer.clear_newoperations() - seen = {} - for box in inputargs: - if box in seen: - continue - seen[box] = True - preamble_value = preamble_optimizer.getvalue(box) - value = self.optimizer.getvalue(box) - value.import_from(preamble_value, self.optimizer) - for box in short_inputargs: - if box in seen: - continue - seen[box] = True - value = preamble_optimizer.getvalue(box) - value.force_box(preamble_optimizer) - inputarg_setup_ops += preamble_optimizer.get_newoperations() - - # Setup the state of the new optimizer by emiting the - # short preamble operations and discarding the result - self.optimizer.emitting_dissabled = True - for op in inputarg_setup_ops: - self.optimizer.send_extra_operation(op) - seen = {} - for op in self.short_boxes.operations(): - self.ensure_short_op_emitted(op, self.optimizer, seen) - if op and op.result: - preamble_value = preamble_optimizer.getvalue(op.result) - value = self.optimizer.getvalue(op.result) - if not value.is_virtual(): - imp = ValueImporter(self, preamble_value, op) - self.optimizer.importable_values[value] = imp - newresult = self.optimizer.getvalue(op.result).get_key_box() - if newresult is not op.result: - self.short_boxes.alias(newresult, op.result) - self.optimizer.flush() - self.optimizer.emitting_dissabled = False - - initial_inputargs_len = len(inputargs) - self.inliner = Inliner(loop.inputargs, jump_args) - - - short = self.inline(inputargs, self.cloned_operations, - loop.inputargs, short_inputargs, - virtual_state) - - loop.inputargs = inputargs - args = [preamble_optimizer.getvalue(self.short_boxes.original(a)).force_box(preamble_optimizer)\ - for a in inputargs] - jmp = ResOperation(rop.JUMP, args, None) - jmp.setdescr(loop.token) - loop.preamble.operations.append(jmp) loop.operations = self.optimizer.get_newoperations() - maxguards = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.max_retrace_guards + self.export_state(lastop) + loop.operations.append(lastop) + elif lastop.getopnum() == rop.JUMP: + assert lastop.getdescr() is start_targetop.getdescr() + self.close_loop(lastop) + short_preamble_loop = self.produce_short_preamble(lastop) + assert isinstance(loop.token, LoopToken) + if loop.token.short_preamble: + loop.token.short_preamble.append(short_preamble_loop) # FIXME: ?? + else: + loop.token.short_preamble = [short_preamble_loop] + else: + loop.operations = self.optimizer.get_newoperations() + + def export_state(self, targetop): + jump_args = targetop.getarglist() + jump_args = [self.getvalue(a).get_key_box() for a in jump_args] + + start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() + assert isinstance(start_resumedescr, ResumeGuardDescr) + start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) + + modifier = VirtualStateAdder(self.optimizer) + virtual_state = modifier.get_virtual_state(jump_args) - if self.optimizer.emitted_guards > maxguards: - loop.preamble.token.retraced_count = sys.maxint + values = [self.getvalue(arg) for arg in jump_args] + inputargs = virtual_state.make_inputargs(values, self.optimizer) + short_inputargs = virtual_state.make_inputargs(values, self.optimizer, keyboxes=True) - if short: - assert short[-1].getopnum() == rop.JUMP - short[-1].setdescr(loop.token) + constant_inputargs = {} + for box in jump_args: + const = self.get_constant_box(box) + if const: + constant_inputargs[box] = const - # Turn guards into conditional jumps to the preamble - for i in range(len(short)): - op = short[i] - if op.is_guard(): - op = op.clone() - op.setfailargs(None) - descr = self.start_resumedescr.clone_if_mutable() - op.setdescr(descr) - short[i] = op + short_boxes = ShortBoxes(self.optimizer, inputargs + constant_inputargs.keys()) - short_loop = TreeLoop('short preamble') - short_loop.inputargs = short_inputargs - short_loop.operations = short + self.optimizer.clear_newoperations() + for box in short_inputargs: + value = self.getvalue(box) + if value.is_virtual(): + value.force_box(self.optimizer) + inputarg_setup_ops = self.optimizer.get_newoperations() - # Clone ops and boxes to get private versions and - boxmap = {} - newargs = [None] * len(short_loop.inputargs) - for i in range(len(short_loop.inputargs)): - a = short_loop.inputargs[i] - if a in boxmap: - newargs[i] = boxmap[a] - else: - newargs[i] = a.clonebox() - boxmap[a] = newargs[i] - inliner = Inliner(short_loop.inputargs, newargs) - for box, const in self.constant_inputargs.items(): - inliner.argmap[box] = const - short_loop.inputargs = newargs - ops = [inliner.inline_op(op) for op in short_loop.operations] - short_loop.operations = ops - descr = self.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(descr) - short_loop.start_resumedescr = descr + target_token = targetop.getdescr() + assert isinstance(target_token, TargetToken) + targetop.initarglist(inputargs) + target_token.exported_state = ExportedState(values, short_inputargs, + constant_inputargs, short_boxes, + inputarg_setup_ops, self.optimizer, + jump_args, virtual_state, + start_resumedescr) - assert isinstance(loop.preamble.token, LoopToken) - if loop.preamble.token.short_preamble: - loop.preamble.token.short_preamble.append(short_loop) - else: - loop.preamble.token.short_preamble = [short_loop] - short_loop.virtual_state = virtual_state + def import_state(self, targetop): + target_token = targetop.getdescr() + assert isinstance(target_token, TargetToken) + exported_state = target_token.exported_state + if not exported_state: + # FIXME: Set up some sort of empty state with no virtuals + return - # Forget the values to allow them to be freed - for box in short_loop.inputargs: - box.forget_value() - for op in short_loop.operations: - if op.result: - op.result.forget_value() + self.short = [] + self.short_seen = {} + self.short_boxes = exported_state.short_boxes + for box, const in exported_state.constant_inputargs.items(): + self.short_seen[box] = True + self.imported_state = exported_state + self.inputargs = targetop.getarglist() + self.start_resumedescr = exported_state.start_resumedescr - def inline(self, inputargs, loop_operations, loop_args, short_inputargs, virtual_state): - inliner = self.inliner + seen = {} + for box in self.inputargs: + if box in seen: + continue + seen[box] = True + preamble_value = exported_state.optimizer.getvalue(box) + value = self.optimizer.getvalue(box) + value.import_from(preamble_value, self.optimizer) + + # Setup the state of the new optimizer by emiting the + # short operations and discarding the result + self.optimizer.emitting_dissabled = True + for op in exported_state.inputarg_setup_ops: + self.optimizer.send_extra_operation(op) + seen = {} + for op in self.short_boxes.operations(): + self.ensure_short_op_emitted(op, self.optimizer, seen) + if op and op.result: + preamble_value = exported_state.optimizer.getvalue(op.result) + value = self.optimizer.getvalue(op.result) + if not value.is_virtual(): + imp = ValueImporter(self, preamble_value, op) + self.optimizer.importable_values[value] = imp + newvalue = self.optimizer.getvalue(op.result) + newresult = newvalue.get_key_box() + if newresult is not op.result and not newvalue.is_constant(): + self.short_boxes.alias(newresult, op.result) + op = ResOperation(rop.SAME_AS, [op.result], newresult) + self.optimizer._newoperations = [op] + self.optimizer._newoperations # XXX + #self.optimizer.getvalue(op.result).box = op.result # FIXME: HACK!!! + self.optimizer.flush() + self.optimizer.emitting_dissabled = False + def close_loop(self, jumpop): + assert jumpop + virtual_state = self.imported_state.virtual_state + short_inputargs = self.imported_state.short_inputargs + constant_inputargs = self.imported_state.constant_inputargs + inputargs = self.inputargs short_jumpargs = inputargs[:] - short = self.short = [] - short_seen = self.short_seen = {} - for box, const in self.constant_inputargs.items(): - short_seen[box] = True - - # This loop is equivalent to the main optimization loop in - # Optimizer.propagate_all_forward - jumpop = None - for newop in loop_operations: - newop = inliner.inline_op(newop, clone=False) - if newop.getopnum() == rop.JUMP: - jumpop = newop - break - - #self.optimizer.first_optimization.propagate_forward(newop) - self.optimizer.send_extra_operation(newop) - - self.boxes_created_this_iteration = {} - - assert jumpop + # Construct jumpargs from the virtual state original_jumpargs = jumpop.getarglist()[:] values = [self.getvalue(arg) for arg in jumpop.getarglist()] jumpargs = virtual_state.make_inputargs(values, self.optimizer) jumpop.initarglist(jumpargs) - jmp_to_short_args = virtual_state.make_inputargs(values, self.optimizer, - keyboxes=True) + + # Inline the short preamble at the end of the loop + jmp_to_short_args = virtual_state.make_inputargs(values, self.optimizer, keyboxes=True) self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) - - for box, const in self.constant_inputargs.items(): + for box, const in constant_inputargs.items(): self.short_inliner.argmap[box] = const - - for op in short: + for op in self.short: newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) - + + # Import boxes produced in the preamble but used in the loop newoperations = self.optimizer.get_newoperations() - + self.boxes_created_this_iteration = {} i = j = 0 + while newoperations[i].getopnum() != rop.TARGET: + i += 1 while i < len(newoperations) or j < len(jumpargs): if i == len(newoperations): while j < len(jumpargs): a = jumpargs[j] if self.optimizer.loop.logops: debug_print('J: ' + self.optimizer.loop.logops.repr_of_arg(a)) - self.import_box(a, inputargs, short, short_jumpargs, - jumpargs, short_seen) + self.import_box(a, inputargs, short_jumpargs, jumpargs) j += 1 else: op = newoperations[i] @@ -357,15 +290,16 @@ for a in args: if self.optimizer.loop.logops: debug_print('A: ' + self.optimizer.loop.logops.repr_of_arg(a)) - self.import_box(a, inputargs, short, short_jumpargs, - jumpargs, short_seen) + self.import_box(a, inputargs, short_jumpargs, jumpargs) i += 1 newoperations = self.optimizer.get_newoperations() jumpop.initarglist(jumpargs) self.optimizer.send_extra_operation(jumpop) - short.append(ResOperation(rop.JUMP, short_jumpargs, None)) + self.short.append(ResOperation(rop.JUMP, short_jumpargs, None, descr=jumpop.getdescr())) + # Verify that the virtual state at the end of the loop is one + # that is compatible with the virtual state at the start of the loop modifier = VirtualStateAdder(self.optimizer) final_virtual_state = modifier.get_virtual_state(original_jumpargs) debug_start('jit-log-virtualstate') @@ -382,8 +316,79 @@ debug_stop('jit-log-virtualstate') raise InvalidLoop debug_stop('jit-log-virtualstate') + + def produce_short_preamble(self, lastop): + short = self.short + assert short[-1].getopnum() == rop.JUMP + + # Turn guards into conditional jumps to the preamble + for i in range(len(short)): + op = short[i] + if op.is_guard(): + op = op.clone() + op.setfailargs(None) + descr = self.start_resumedescr.clone_if_mutable() + op.setdescr(descr) + short[i] = op + + short_loop = TreeLoop('short preamble') + short_inputargs = self.imported_state.short_inputargs + short_loop.operations = [ResOperation(rop.TARGET, short_inputargs, None)] + \ + short + + # Clone ops and boxes to get private versions and + boxmap = {} + newargs = [None] * len(short_inputargs) + for i in range(len(short_inputargs)): + a = short_inputargs[i] + if a in boxmap: + newargs[i] = boxmap[a] + else: + newargs[i] = a.clonebox() + boxmap[a] = newargs[i] + inliner = Inliner(short_inputargs, newargs) + for box, const in self.imported_state.constant_inputargs.items(): + inliner.argmap[box] = const + ops = [inliner.inline_op(op) for op in short_loop.operations] + short_loop.operations = ops + descr = self.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(descr) + short_loop.start_resumedescr = descr + + short_loop.virtual_state = self.imported_state.virtual_state + + # Forget the values to allow them to be freed + for box in short_loop.inputargs: + box.forget_value() + for op in short_loop.operations: + if op.result: + op.result.forget_value() + + return short_loop - return short + def FIXME_old_stuff(): + preamble_optimizer = self.optimizer + loop.preamble.quasi_immutable_deps = ( + self.optimizer.quasi_immutable_deps) + self.optimizer = self.optimizer.new() + loop.quasi_immutable_deps = self.optimizer.quasi_immutable_deps + + + loop.inputargs = inputargs + args = [preamble_optimizer.getvalue(self.short_boxes.original(a)).force_box(preamble_optimizer)\ + for a in inputargs] + jmp = ResOperation(rop.JUMP, args, None) + jmp.setdescr(loop.token) + loop.preamble.operations.append(jmp) + + loop.operations = self.optimizer.get_newoperations() + maxguards = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.max_retrace_guards + + if self.optimizer.emitted_guards > maxguards: + loop.preamble.token.retraced_count = sys.maxint + + if short: + pass def ensure_short_op_emitted(self, op, optimizer, seen): if op is None: @@ -399,19 +404,18 @@ guard = ResOperation(rop.GUARD_NO_OVERFLOW, [], None) optimizer.send_extra_operation(guard) - def add_op_to_short(self, op, short, short_seen, emit=True, guards_needed=False): + def add_op_to_short(self, op, emit=True, guards_needed=False): if op is None: return None - if op.result is not None and op.result in short_seen: + if op.result is not None and op.result in self.short_seen: if emit: return self.short_inliner.inline_arg(op.result) else: return None for a in op.getarglist(): - if not isinstance(a, Const) and a not in short_seen: - self.add_op_to_short(self.short_boxes.producer(a), short, short_seen, - emit, guards_needed) + if not isinstance(a, Const) and a not in self.short_seen: + self.add_op_to_short(self.short_boxes.producer(a), emit, guards_needed) if op.is_guard(): descr = self.start_resumedescr.clone_if_mutable() op.setdescr(descr) @@ -421,8 +425,8 @@ else: value_guards = [] - short.append(op) - short_seen[op.result] = True + self.short.append(op) + self.short_seen[op.result] = True if emit: newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) @@ -432,23 +436,22 @@ if op.is_ovf(): # FIXME: ensure that GUARD_OVERFLOW:ed ops not end up here guard = ResOperation(rop.GUARD_NO_OVERFLOW, [], None) - self.add_op_to_short(guard, short, short_seen, emit, guards_needed) + self.add_op_to_short(guard, emit, guards_needed) for guard in value_guards: - self.add_op_to_short(guard, short, short_seen, emit, guards_needed) + self.add_op_to_short(guard, emit, guards_needed) if newop: return newop.result return None - def import_box(self, box, inputargs, short, short_jumpargs, - jumpargs, short_seen): + def import_box(self, box, inputargs, short_jumpargs, jumpargs): if isinstance(box, Const) or box in inputargs: return if box in self.boxes_created_this_iteration: return short_op = self.short_boxes.producer(box) - newresult = self.add_op_to_short(short_op, short, short_seen) + newresult = self.add_op_to_short(short_op) short_jumpargs.append(short_op.result) inputargs.append(box) @@ -468,7 +471,7 @@ def propagate_forward(self, op): if op.getopnum() == rop.JUMP: loop_token = op.getdescr() - assert isinstance(loop_token, LoopToken) + assert isinstance(loop_token, TargetToken) short = loop_token.short_preamble if short: args = op.getarglist() @@ -557,5 +560,19 @@ def import_value(self, value): value.import_from(self.preamble_value, self.unroll.optimizer) - self.unroll.add_op_to_short(self.op, self.unroll.short, self.unroll.short_seen, False, True) + self.unroll.add_op_to_short(self.op, False, True) + +class ExportedState(object): + def __init__(self, values, short_inputargs, constant_inputargs, + short_boxes, inputarg_setup_ops, optimizer, jump_args, virtual_state, + start_resumedescr): + self.values = values + self.short_inputargs = short_inputargs + self.constant_inputargs = constant_inputargs + self.short_boxes = short_boxes + self.inputarg_setup_ops = inputarg_setup_ops + self.optimizer = optimizer + self.jump_args = jump_args + self.virtual_state = virtual_state + self.start_resumedescr = start_resumedescr diff --git a/pypy/jit/metainterp/optimizeopt/util.py b/pypy/jit/metainterp/optimizeopt/util.py --- a/pypy/jit/metainterp/optimizeopt/util.py +++ b/pypy/jit/metainterp/optimizeopt/util.py @@ -148,7 +148,7 @@ assert op1.result.same_box(remap[op2.result]) else: remap[op2.result] = op1.result - if op1.getopnum() != rop.JUMP: # xxx obscure + if op1.getopnum() not in (rop.JUMP, rop.TARGET): # xxx obscure assert op1.getdescr() == op2.getdescr() if op1.getfailargs() or op2.getfailargs(): assert len(op1.getfailargs()) == len(op2.getfailargs()) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -598,6 +598,7 @@ newbox = newop.result = op.result.clonebox() self.short_boxes[newop.result] = newop value = self.optimizer.getvalue(box) + self.optimizer.emit_operation(ResOperation(rop.SAME_AS, [box], newbox)) self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op From noreply at buildbot.pypy.org Fri Nov 4 21:41:07 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 4 Nov 2011 21:41:07 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: merged default in Message-ID: <20111104204107.A3CB7820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48764:6d3c19abaea1 Date: 2011-11-04 16:17 -0400 http://bitbucket.org/pypy/pypy/changeset/6d3c19abaea1/ Log: merged default in diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,6 +1,6 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -236,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) @@ -326,6 +336,7 @@ self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} @@ -398,6 +409,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7355,6 +7355,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -551,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -606,6 +607,10 @@ return if isinstance(box, Const): return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -106,7 +106,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +119,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,42 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -277,6 +315,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -287,6 +326,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -373,6 +413,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -406,11 +447,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -432,6 +482,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -503,19 +558,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -314,10 +314,11 @@ class W_XRange(Wrappable): - def __init__(self, space, start, len, step): + def __init__(self, space, start, stop, step): self.space = space self.start = start - self.len = len + self.stop = stop + self.len = get_len_of_range(space, start, stop, step) self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -327,9 +328,8 @@ start, stop = 0, start else: stop = _toint(space, w_stop) - howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, howmany, step) + W_XRange.__init__(obj, space, start, stop, step) return space.wrap(obj) def descr_repr(self): @@ -359,12 +359,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.stop, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.len, -self.step)) + self.start, -self.step, True)) def descr_reduce(self): space = self.space @@ -391,25 +391,29 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, current, remaining, step): + def __init__(self, space, start, stop, step, inclusive=False): self.space = space - self.current = current - self.remaining = remaining + self.current = start + self.stop = stop self.step = step + self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.remaining > 0: - item = self.current - self.current = item + self.step - self.remaining -= 1 - return self.space.wrap(item) - raise OperationError(self.space.w_StopIteration, self.space.w_None) + if self.inclusive: + if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + else: + if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): + raise OperationError(self.space.w_StopIteration, self.space.w_None) + item = self.current + self.current = item + self.step + return self.space.wrap(item) - def descr_len(self): - return self.space.wrap(self.remaining) + #def descr_len(self): + # return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -420,7 +424,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.remaining), w(self.step)] + tup = [w(self.current), w(self.stop), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,7 +157,8 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - + assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, remaining=int, step=int) -def xrangeiter_new(space, current, remaining, step): + at unwrap_spec(current=int, stop=int, step=int) +def xrangeiter_new(space, current, stop, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, remaining, step) + new_iter = W_XRangeIterator(space, current, stop, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -280,17 +280,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -1248,8 +1248,8 @@ def list_index__List_ANY_ANY_ANY(space, w_list, w_any, w_start, w_stop): # needs to be safe against eq_w() mutating the w_list behind our back size = w_list.length() - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < w_list.length(): if space.eq_w(w_list.getitem(i), w_any): return space.wrap(i) diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -67,19 +67,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -138,7 +130,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -164,6 +161,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -186,6 +184,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -217,7 +216,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -227,6 +228,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -234,6 +236,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -241,6 +244,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -248,6 +252,7 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -418,22 +418,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -441,13 +433,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -481,8 +473,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -491,8 +483,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -634,20 +626,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -659,14 +648,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -3,11 +3,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject(self.space, []) @@ -353,10 +352,14 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + cls.w_on_cpython = cls.space.wrap(on_cpython) def test_getstrategyfromlist_w(self): l0 = ["a", "2", "a", True] - # this raised TypeError on ListStrategies l1 = ["a", "2", True, "a"] l2 = [1, "2", "a", "a"] @@ -410,6 +413,8 @@ assert not l.__contains__(-20) assert not l.__contains__(-21) +======= +>>>>>>> other def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -728,7 +733,15 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) - def test_assign_slice(self): + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' assert l == [0, 'a', 'b', 'c', 3, 4, 5] diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -108,15 +108,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -127,7 +122,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -172,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,42 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -921,7 +921,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -975,26 +975,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1085,7 +1080,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -259,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -468,7 +468,7 @@ # # If the object needs a finalizer, ask for a rawmalloc. # The following check should be constant-folded. - if needs_finalizer: ## and not is_finalizer_light: + if needs_finalizer and not is_finalizer_light: ll_assert(not contains_weakptr, "'needs_finalizer' and 'contains_weakptr' both specified") obj = self.external_malloc(typeid, 0, can_make_young=False) @@ -1850,6 +1850,9 @@ finalizer = self.getlightfinalizer(self.get_type_id(obj)) ll_assert(bool(finalizer), "no light finalizer found") finalizer(obj, llmemory.NULL) + else: + obj = self.get_forwarding_address(obj) + self.old_objects_with_light_finalizers.append(obj) def deal_with_old_objects_with_finalizers(self): """ This is a much simpler version of dealing with finalizers diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/tool/sourcetools.py b/pypy/tool/sourcetools.py --- a/pypy/tool/sourcetools.py +++ b/pypy/tool/sourcetools.py @@ -107,10 +107,8 @@ else: try: src = inspect.getsource(object) - except IOError: - return None - except IndentationError: - return None + except Exception: # catch IOError, IndentationError, and also rarely + return None # some other exceptions like IndexError if hasattr(name, "__sourceargs__"): return src % name.__sourceargs__ return src diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -37,7 +37,7 @@ else: print res return 0 - + t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, @@ -128,10 +128,10 @@ if not args: args = (-1, ) res = self.allfuncs(name, *args) - num = self.name_to_func[name] + num = self.name_to_func[name] if self.funcsstr[num]: return res - return int(res) + return int(res) def define_empty_collect(cls): def f(): @@ -228,7 +228,7 @@ T = lltype.GcStruct("T", ('y', lltype.Signed), ('s', lltype.Ptr(S))) ARRAY_Ts = lltype.GcArray(lltype.Ptr(T)) - + def f(): r = 0 for i in range(30): @@ -250,7 +250,7 @@ def test_framework_varsized(self): res = self.run('framework_varsized') assert res == self.run_orig('framework_varsized') - + def define_framework_using_lists(cls): class A(object): pass @@ -271,7 +271,7 @@ N = 1000 res = self.run('framework_using_lists') assert res == N*(N - 1)/2 - + def define_framework_static_roots(cls): class A(object): def __init__(self, y): @@ -318,8 +318,8 @@ def test_framework_void_array(self): res = self.run('framework_void_array') assert res == 44 - - + + def define_framework_malloc_failure(cls): def f(): a = [1] * (sys.maxint//2) @@ -342,7 +342,7 @@ def test_framework_array_of_void(self): res = self.run('framework_array_of_void') assert res == 43 + 1000000 - + def define_framework_opaque(cls): A = lltype.GcStruct('A', ('value', lltype.Signed)) O = lltype.GcOpaqueType('test.framework') @@ -437,7 +437,7 @@ b = B() return 0 return func - + def test_del_raises(self): self.run('del_raises') # does not raise @@ -712,7 +712,7 @@ def test_callback_with_collect(self): assert self.run('callback_with_collect') - + def define_can_move(cls): class A: pass @@ -1255,7 +1255,7 @@ l1 = [] l2 = [] l3 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1298,7 +1298,7 @@ def test_string_builder(self): res = self.run('string_builder') assert res == "aabcbdddd" - + def definestr_string_builder_over_allocation(cls): import gc def fn(_): @@ -1458,6 +1458,37 @@ res = self.run("nongc_attached_to_gc") assert res == -99997 + def define_nongc_opaque_attached_to_gc(cls): + from pypy.module._hashlib.interp_hashlib import HASH_MALLOC_SIZE + from pypy.rlib import rgc, ropenssl + from pypy.rpython.lltypesystem import rffi + + class A: + def __init__(self): + self.ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, + flavor='raw') + digest = ropenssl.EVP_get_digestbyname('sha1') + ropenssl.EVP_DigestInit(self.ctx, digest) + rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + + def __del__(self): + ropenssl.EVP_MD_CTX_cleanup(self.ctx) + lltype.free(self.ctx, flavor='raw') + A() + def f(): + am1 = am2 = am3 = None + for i in range(100000): + am3 = am2 + am2 = am1 + am1 = A() + # what can we use for the res? + return 0 + return f + + def test_nongc_opaque_attached_to_gc(self): + res = self.run("nongc_opaque_attached_to_gc") + assert res == 0 + # ____________________________________________________________________ class TaggedPointersTest(object): From noreply at buildbot.pypy.org Fri Nov 4 21:41:08 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 4 Nov 2011 21:41:08 +0100 (CET) Subject: [pypy-commit] pypy list-strategies: fix from merge Message-ID: <20111104204108.D447F820B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: list-strategies Changeset: r48765:a7d33918047a Date: 2011-11-04 16:40 -0400 http://bitbucket.org/pypy/pypy/changeset/a7d33918047a/ Log: fix from merge diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -413,8 +413,6 @@ assert not l.__contains__(-20) assert not l.__contains__(-21) -======= ->>>>>>> other def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] From noreply at buildbot.pypy.org Sat Nov 5 10:02:24 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:24 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: allow orignial jump_args to be used in the peeled loop Message-ID: <20111105090224.76E00820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48766:90d65a7d35c3 Date: 2011-11-04 22:19 +0100 http://bitbucket.org/pypy/pypy/changeset/90d65a7d35c3/ Log: allow orignial jump_args to be used in the peeled loop diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -97,12 +97,12 @@ [ResOperation(rop.TARGET, jump_args, None, descr=targettoken)] self._do_optimize_loop(preamble, call_pure_results) - jump_args = preamble.operations[-1].getdescr().exported_state.jump_args # FIXME!! inliner = Inliner(inputargs, jump_args) loop.inputargs = None loop.start_resumedescr = preamble.start_resumedescr loop.operations = [preamble.operations[-1]] + \ [inliner.inline_op(op, clone=False) for op in cloned_operations] + self._do_optimize_loop(loop, call_pure_results) extra_same_as = [] while loop.operations[0].getopnum() != rop.TARGET: diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -152,8 +152,8 @@ loop.operations = self.optimizer.get_newoperations() def export_state(self, targetop): - jump_args = targetop.getarglist() - jump_args = [self.getvalue(a).get_key_box() for a in jump_args] + original_jump_args = targetop.getarglist() + jump_args = [self.getvalue(a).get_key_box() for a in original_jump_args] start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() assert isinstance(start_resumedescr, ResumeGuardDescr) @@ -173,6 +173,9 @@ constant_inputargs[box] = const short_boxes = ShortBoxes(self.optimizer, inputargs + constant_inputargs.keys()) + for i in range(len(original_jump_args)): + if original_jump_args[i] is not jump_args[i]: + short_boxes.alias(original_jump_args[i], jump_args[i]) self.optimizer.clear_newoperations() for box in short_inputargs: @@ -215,6 +218,9 @@ preamble_value = exported_state.optimizer.getvalue(box) value = self.optimizer.getvalue(box) value.import_from(preamble_value, self.optimizer) + + for newbox, oldbox in self.short_boxes.aliases.items(): + self.optimizer.make_equal_to(newbox, self.optimizer.getvalue(oldbox)) # Setup the state of the new optimizer by emiting the # short operations and discarding the result From noreply at buildbot.pypy.org Sat Nov 5 10:02:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:26 +0100 (CET) Subject: [pypy-commit] pypy default: hg backout a27a481ec877 Message-ID: <20111105090226.DAEA582A87@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48768:21cb735ed98a Date: 2011-11-04 22:38 +0100 http://bitbucket.org/pypy/pypy/changeset/21cb735ed98a/ Log: hg backout a27a481ec877 diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,11 +312,10 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, len, step): self.space = space self.start = start - self.stop = stop - self.len = get_len_of_range(space, start, stop, step) + self.len = len self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -326,8 +325,9 @@ start, stop = 0, start else: stop = _toint(space, w_stop) + howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, howmany, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start - 1, -self.step)) + self.len, -self.step)) def descr_reduce(self): space = self.space @@ -389,24 +389,25 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, current, remaining, step): self.space = space - self.current = start - self.stop = stop + self.current = current + self.remaining = remaining self.step = step def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): + if self.remaining > 0: item = self.current self.current = item + self.step + self.remaining -= 1 return self.space.wrap(item) raise OperationError(self.space.w_StopIteration, self.space.w_None) - #def descr_len(self): - # return self.space.wrap(self.remaining) + def descr_len(self): + return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -417,7 +418,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.stop), w(self.step)] + tup = [w(self.current), w(self.remaining), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, stop=int, step=int) -def xrangeiter_new(space, current, stop, step): + at unwrap_spec(current=int, remaining=int, step=int) +def xrangeiter_new(space, current, remaining, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, stop, step) + new_iter = W_XRangeIterator(space, current, remaining, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) From noreply at buildbot.pypy.org Sat Nov 5 10:02:28 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:28 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: no need to export jump_args Message-ID: <20111105090228.1517882A87@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48769:9e83d7c21fba Date: 2011-11-05 08:05 +0100 http://bitbucket.org/pypy/pypy/changeset/9e83d7c21fba/ Log: no need to export jump_args diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -190,8 +190,7 @@ target_token.exported_state = ExportedState(values, short_inputargs, constant_inputargs, short_boxes, inputarg_setup_ops, self.optimizer, - jump_args, virtual_state, - start_resumedescr) + virtual_state, start_resumedescr) def import_state(self, targetop): target_token = targetop.getdescr() @@ -570,7 +569,7 @@ class ExportedState(object): def __init__(self, values, short_inputargs, constant_inputargs, - short_boxes, inputarg_setup_ops, optimizer, jump_args, virtual_state, + short_boxes, inputarg_setup_ops, optimizer, virtual_state, start_resumedescr): self.values = values self.short_inputargs = short_inputargs @@ -578,7 +577,6 @@ self.short_boxes = short_boxes self.inputarg_setup_ops = inputarg_setup_ops self.optimizer = optimizer - self.jump_args = jump_args self.virtual_state = virtual_state self.start_resumedescr = start_resumedescr From noreply at buildbot.pypy.org Sat Nov 5 10:02:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:25 +0100 (CET) Subject: [pypy-commit] pypy default: hg backout 7202b0d9cb70 Message-ID: <20111105090225.AC152820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48767:aa07c2a53f95 Date: 2011-11-04 22:37 +0100 http://bitbucket.org/pypy/pypy/changeset/aa07c2a53f95/ Log: hg backout 7202b0d9cb70 diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -362,7 +362,7 @@ def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start, -self.step, True)) + self.start - 1, -self.step)) def descr_reduce(self): space = self.space @@ -389,26 +389,21 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step, inclusive=False): + def __init__(self, space, start, stop, step): self.space = space self.current = start self.stop = stop self.step = step - self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.inclusive: - if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - else: - if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - item = self.current - self.current = item + self.step - return self.space.wrap(item) + if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): + item = self.current + self.current = item + self.step + return self.space.wrap(item) + raise OperationError(self.space.w_StopIteration, self.space.w_None) #def descr_len(self): # return self.space.wrap(self.remaining) diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,8 +157,7 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] - + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() From noreply at buildbot.pypy.org Sat Nov 5 10:02:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:29 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: move the decition wheter to unroll or not back into optimizeopt Message-ID: <20111105090229.4E3CA820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48770:0aa80cc1b315 Date: 2011-11-05 08:44 +0100 http://bitbucket.org/pypy/pypy/changeset/0aa80cc1b315/ Log: move the decition wheter to unroll or not back into optimizeopt diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -766,7 +766,8 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self): + def __init__(self, merge_point): + self.merge_point = merge_point self.exported_state = None class TreeLoop(object): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -557,7 +557,6 @@ def store_final_boxes_in_guard(self, op): descr = op.getdescr() - print 'HHHHHHHHHHHH', descr, id(descr) assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) newboxes = modifier.finish(self.values, self.pendingfields) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -80,28 +80,33 @@ if expected_short: expected_short = self.parse(expected_short) operations = loop.operations + jumpop = operations[-1] + assert jumpop.getopnum() == rop.JUMP + inputargs = loop.inputargs + loop.inputargs = None + + jump_args = jumpop.getarglist()[:] + operations = operations[:-1] cloned_operations = [op.clone() for op in operations] preamble = TreeLoop('preamble') #loop.preamble.inputargs = loop.inputargs #loop.preamble.token = LoopToken() preamble.start_resumedescr = FakeDescr() - assert operations[-1].getopnum() == rop.JUMP - inputargs = loop.inputargs - jump_args = operations[-1].getarglist() - targettoken = TargetToken() - operations[-1].setdescr(targettoken) - cloned_operations[-1].setdescr(targettoken) - preamble.operations = [ResOperation(rop.TARGET, inputargs, None, descr=TargetToken())] + \ - operations[:-1] + \ - [ResOperation(rop.TARGET, jump_args, None, descr=targettoken)] + + token = LoopToken() # FIXME: Make this a MergePointToken? + preamble.operations = [ResOperation(rop.TARGET, inputargs, None, descr=TargetToken(token))] + \ + operations + \ + [ResOperation(rop.TARGET, jump_args, None, descr=TargetToken(token))] self._do_optimize_loop(preamble, call_pure_results) inliner = Inliner(inputargs, jump_args) - loop.inputargs = None loop.start_resumedescr = preamble.start_resumedescr loop.operations = [preamble.operations[-1]] + \ - [inliner.inline_op(op, clone=False) for op in cloned_operations] + [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ + [ResOperation(rop.TARGET, [inliner.inline_arg(a) for a in jump_args], + None, descr=TargetToken(token))] + #[inliner.inline_op(jumpop)] self._do_optimize_loop(loop, call_pure_results) extra_same_as = [] diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -126,30 +126,34 @@ self.import_state(start_targetop) lastop = loop.operations[-1] - if lastop.getopnum() == rop.TARGET or lastop.getopnum() == rop.JUMP: - loop.operations = loop.operations[:-1] + assert lastop.getopnum() == rop.TARGET + loop.operations = loop.operations[:-1] + #if lastop.getopnum() == rop.TARGET or lastop.getopnum() == rop.JUMP: + # loop.operations = loop.operations[:-1] #FIXME: FINISH self.optimizer.propagate_all_forward(clear=False) - if lastop.getopnum() == rop.TARGET: + #if lastop.getopnum() == rop.TARGET: + if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() loop.operations = self.optimizer.get_newoperations() self.export_state(lastop) loop.operations.append(lastop) - elif lastop.getopnum() == rop.JUMP: - assert lastop.getdescr() is start_targetop.getdescr() - self.close_loop(lastop) + else: + assert lastop.getdescr().merge_point is start_targetop.getdescr().merge_point + jumpop = ResOperation(rop.JUMP, lastop.getarglist(), None, descr=start_targetop.getdescr()) + self.close_loop(jumpop) short_preamble_loop = self.produce_short_preamble(lastop) assert isinstance(loop.token, LoopToken) if loop.token.short_preamble: loop.token.short_preamble.append(short_preamble_loop) # FIXME: ?? else: loop.token.short_preamble = [short_preamble_loop] - else: - loop.operations = self.optimizer.get_newoperations() + #else: + # loop.operations = self.optimizer.get_newoperations() def export_state(self, targetop): original_jump_args = targetop.getarglist() @@ -197,9 +201,11 @@ assert isinstance(target_token, TargetToken) exported_state = target_token.exported_state if not exported_state: + self.did_peel_one = False # FIXME: Set up some sort of empty state with no virtuals return - + self.did_peel_one = True + self.short = [] self.short_seen = {} self.short_boxes = exported_state.short_boxes @@ -245,8 +251,7 @@ self.optimizer.flush() self.optimizer.emitting_dissabled = False - def close_loop(self, jumpop): - assert jumpop + def close_loop(self, jumpop): virtual_state = self.imported_state.virtual_state short_inputargs = self.imported_state.short_inputargs constant_inputargs = self.imported_state.constant_inputargs From noreply at buildbot.pypy.org Sat Nov 5 10:02:30 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:30 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: place the virtual_state on the target_token Message-ID: <20111105090230.7E0E3820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48771:beacbd0267fc Date: 2011-11-05 09:04 +0100 http://bitbucket.org/pypy/pypy/changeset/beacbd0267fc/ Log: place the virtual_state on the target_token diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -768,6 +768,7 @@ class TargetToken(AbstractDescr): def __init__(self, merge_point): self.merge_point = merge_point + self.virtual_state = None self.exported_state = None class TreeLoop(object): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -191,10 +191,11 @@ target_token = targetop.getdescr() assert isinstance(target_token, TargetToken) targetop.initarglist(inputargs) + target_token.virtual_state = virtual_state target_token.exported_state = ExportedState(values, short_inputargs, constant_inputargs, short_boxes, inputarg_setup_ops, self.optimizer, - virtual_state, start_resumedescr) + start_resumedescr) def import_state(self, targetop): target_token = targetop.getdescr() @@ -214,6 +215,7 @@ self.imported_state = exported_state self.inputargs = targetop.getarglist() self.start_resumedescr = exported_state.start_resumedescr + self.initial_virtual_state = target_token.virtual_state seen = {} for box in self.inputargs: @@ -252,7 +254,7 @@ self.optimizer.emitting_dissabled = False def close_loop(self, jumpop): - virtual_state = self.imported_state.virtual_state + virtual_state = self.initial_virtual_state short_inputargs = self.imported_state.short_inputargs constant_inputargs = self.imported_state.constant_inputargs inputargs = self.inputargs @@ -365,8 +367,6 @@ inliner.inline_descr_inplace(descr) short_loop.start_resumedescr = descr - short_loop.virtual_state = self.imported_state.virtual_state - # Forget the values to allow them to be freed for box in short_loop.inputargs: box.forget_value() @@ -574,7 +574,7 @@ class ExportedState(object): def __init__(self, values, short_inputargs, constant_inputargs, - short_boxes, inputarg_setup_ops, optimizer, virtual_state, + short_boxes, inputarg_setup_ops, optimizer, start_resumedescr): self.values = values self.short_inputargs = short_inputargs @@ -582,6 +582,5 @@ self.short_boxes = short_boxes self.inputarg_setup_ops = inputarg_setup_ops self.optimizer = optimizer - self.virtual_state = virtual_state self.start_resumedescr = start_resumedescr From noreply at buildbot.pypy.org Sat Nov 5 10:02:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:31 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: no need to export values Message-ID: <20111105090231.AAEF6820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48772:5b76dc7b47e9 Date: 2011-11-05 09:06 +0100 http://bitbucket.org/pypy/pypy/changeset/5b76dc7b47e9/ Log: no need to export values diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -192,7 +192,7 @@ assert isinstance(target_token, TargetToken) targetop.initarglist(inputargs) target_token.virtual_state = virtual_state - target_token.exported_state = ExportedState(values, short_inputargs, + target_token.exported_state = ExportedState(short_inputargs, constant_inputargs, short_boxes, inputarg_setup_ops, self.optimizer, start_resumedescr) @@ -573,10 +573,9 @@ self.unroll.add_op_to_short(self.op, False, True) class ExportedState(object): - def __init__(self, values, short_inputargs, constant_inputargs, + def __init__(self, short_inputargs, constant_inputargs, short_boxes, inputarg_setup_ops, optimizer, start_resumedescr): - self.values = values self.short_inputargs = short_inputargs self.constant_inputargs = constant_inputargs self.short_boxes = short_boxes From noreply at buildbot.pypy.org Sat Nov 5 10:02:32 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:32 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: store the short preamble on the TargetToken instead Message-ID: <20111105090232.E0064820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48773:096da5690e5d Date: 2011-11-05 09:30 +0100 http://bitbucket.org/pypy/pypy/changeset/096da5690e5d/ Log: store the short preamble on the TargetToken instead diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -133,9 +133,8 @@ print if expected_short: print "Short Preamble:" - short = loop.token.short_preamble[0] - print short.inputargs - print '\n'.join([str(o) for o in short.operations]) + short = loop.operations[0].getdescr().short_preamble + print '\n'.join([str(o) for o in short]) print assert expected != "crash!", "should have raised an exception" @@ -146,9 +145,11 @@ text_right='expected preamble') assert preamble.operations[-1].getdescr() == loop.operations[0].getdescr() if expected_short: - self.assert_equal(short, convert_old_style_to_targets(expected_short, jump=True), + short_preamble = TreeLoop('short preamble') + short_preamble.operations = short + self.assert_equal(short_preamble, convert_old_style_to_targets(expected_short, jump=True), text_right='expected short preamble') - assert short.operations[-1].getdescr() == loop.operations[0].getdescr() + assert short[-1].getdescr() == loop.operations[0].getdescr() return loop diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -145,13 +145,10 @@ else: assert lastop.getdescr().merge_point is start_targetop.getdescr().merge_point jumpop = ResOperation(rop.JUMP, lastop.getarglist(), None, descr=start_targetop.getdescr()) + self.close_loop(jumpop) - short_preamble_loop = self.produce_short_preamble(lastop) - assert isinstance(loop.token, LoopToken) - if loop.token.short_preamble: - loop.token.short_preamble.append(short_preamble_loop) # FIXME: ?? - else: - loop.token.short_preamble = [short_preamble_loop] + self.finilize_short_preamble(lastop) + start_targetop.getdescr().short_preamble = self.short #else: # loop.operations = self.optimizer.get_newoperations() @@ -192,8 +189,8 @@ assert isinstance(target_token, TargetToken) targetop.initarglist(inputargs) target_token.virtual_state = virtual_state - target_token.exported_state = ExportedState(short_inputargs, - constant_inputargs, short_boxes, + target_token.short_preamble = [ResOperation(rop.TARGET, short_inputargs, None)] + target_token.exported_state = ExportedState(constant_inputargs, short_boxes, inputarg_setup_ops, self.optimizer, start_resumedescr) @@ -207,7 +204,7 @@ return self.did_peel_one = True - self.short = [] + self.short = target_token.short_preamble self.short_seen = {} self.short_boxes = exported_state.short_boxes for box, const in exported_state.constant_inputargs.items(): @@ -255,7 +252,7 @@ def close_loop(self, jumpop): virtual_state = self.initial_virtual_state - short_inputargs = self.imported_state.short_inputargs + short_inputargs = self.short[0].getarglist() constant_inputargs = self.imported_state.constant_inputargs inputargs = self.inputargs short_jumpargs = inputargs[:] @@ -271,7 +268,7 @@ self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for box, const in constant_inputargs.items(): self.short_inliner.argmap[box] = const - for op in self.short: + for op in self.short[1:]: newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) @@ -329,7 +326,7 @@ raise InvalidLoop debug_stop('jit-log-virtualstate') - def produce_short_preamble(self, lastop): + def finilize_short_preamble(self, lastop): short = self.short assert short[-1].getopnum() == rop.JUMP @@ -343,12 +340,8 @@ op.setdescr(descr) short[i] = op - short_loop = TreeLoop('short preamble') - short_inputargs = self.imported_state.short_inputargs - short_loop.operations = [ResOperation(rop.TARGET, short_inputargs, None)] + \ - short - # Clone ops and boxes to get private versions and + short_inputargs = short[0].getarglist() boxmap = {} newargs = [None] * len(short_inputargs) for i in range(len(short_inputargs)): @@ -361,20 +354,20 @@ inliner = Inliner(short_inputargs, newargs) for box, const in self.imported_state.constant_inputargs.items(): inliner.argmap[box] = const - ops = [inliner.inline_op(op) for op in short_loop.operations] - short_loop.operations = ops - descr = self.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(descr) - short_loop.start_resumedescr = descr + for i in range(len(short)): + short[i] = inliner.inline_op(short[i]) + + self.start_resumedescr = self.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(self.start_resumedescr) + #short_loop.start_resumedescr = descr + # FIXME: move this to targettoken # Forget the values to allow them to be freed - for box in short_loop.inputargs: + for box in short[0].getarglist(): box.forget_value() - for op in short_loop.operations: + for op in short: if op.result: op.result.forget_value() - - return short_loop def FIXME_old_stuff(): preamble_optimizer = self.optimizer @@ -573,10 +566,9 @@ self.unroll.add_op_to_short(self.op, False, True) class ExportedState(object): - def __init__(self, short_inputargs, constant_inputargs, + def __init__(self, constant_inputargs, short_boxes, inputarg_setup_ops, optimizer, start_resumedescr): - self.short_inputargs = short_inputargs self.constant_inputargs = constant_inputargs self.short_boxes = short_boxes self.inputarg_setup_ops = inputarg_setup_ops From noreply at buildbot.pypy.org Sat Nov 5 10:02:34 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:02:34 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20111105090234.6D9AD820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r48774:7fdbb6ee5b80 Date: 2011-11-05 09:56 +0100 http://bitbucket.org/pypy/pypy/changeset/7fdbb6ee5b80/ Log: hg merge diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -247,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7407,7 +7407,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7434,7 +7434,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -106,7 +106,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +119,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,42 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -277,6 +315,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -287,6 +326,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -373,6 +413,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -406,11 +447,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -432,6 +482,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -503,19 +558,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +420,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,42 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) From noreply at buildbot.pypy.org Sat Nov 5 10:22:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 10:22:17 +0100 (CET) Subject: [pypy-commit] pypy default: Translation fix for x86/test/test_ztranslation. Message-ID: <20111105092217.A0DB2820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48775:2c051f701629 Date: 2011-11-05 10:21 +0100 http://bitbucket.org/pypy/pypy/changeset/2c051f701629/ Log: Translation fix for x86/test/test_ztranslation. diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -207,6 +207,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length From noreply at buildbot.pypy.org Sat Nov 5 10:23:58 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:23:58 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge default Message-ID: <20111105092358.E1130820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48776:88580873fdfd Date: 2011-11-05 10:10 +0100 http://bitbucket.org/pypy/pypy/changeset/88580873fdfd/ Log: hg merge default diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -247,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2224,13 +2224,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7493,7 +7493,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7520,7 +7520,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -106,7 +106,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +119,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,42 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -277,6 +315,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -287,6 +326,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -373,6 +413,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -406,11 +447,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -432,6 +482,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -503,19 +558,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,11 +312,10 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, len, step): self.space = space self.start = start - self.stop = stop - self.len = get_len_of_range(space, start, stop, step) + self.len = len self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -326,8 +325,9 @@ start, stop = 0, start else: stop = _toint(space, w_stop) + howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, howmany, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start, -self.step, True)) + self.len, -self.step)) def descr_reduce(self): space = self.space @@ -389,29 +389,25 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step, inclusive=False): + def __init__(self, space, current, remaining, step): self.space = space - self.current = start - self.stop = stop + self.current = current + self.remaining = remaining self.step = step - self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.inclusive: - if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - else: - if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - item = self.current - self.current = item + self.step - return self.space.wrap(item) + if self.remaining > 0: + item = self.current + self.current = item + self.step + self.remaining -= 1 + return self.space.wrap(item) + raise OperationError(self.space.w_StopIteration, self.space.w_None) - #def descr_len(self): - # return self.space.wrap(self.remaining) + def descr_len(self): + return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -422,7 +418,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.stop), w(self.step)] + tup = [w(self.current), w(self.remaining), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,8 +157,7 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] - + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, stop=int, step=int) -def xrangeiter_new(space, current, stop, step): + at unwrap_spec(current=int, remaining=int, step=int) +def xrangeiter_new(space, current, remaining, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, stop, step) + new_iter = W_XRangeIterator(space, current, remaining, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +420,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,42 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) From noreply at buildbot.pypy.org Sat Nov 5 10:24:00 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 10:24:00 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: rename TARGET to LABEL Message-ID: <20111105092400.27ABA820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48777:743a06937826 Date: 2011-11-05 10:23 +0100 http://bitbucket.org/pypy/pypy/changeset/743a06937826/ Log: rename TARGET to LABEL diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -342,7 +342,7 @@ rop.SETARRAYITEM_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, - rop.TARGET, + rop.LABEL, ): # list of opcodes never executed by pyjitpl continue raise AssertionError("missing %r" % (key,)) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -790,7 +790,7 @@ "NOT_RPYTHON" if self._inputargs is not None: return self._inputargs - assert self.operations[0].getopnum() == rop.TARGET + assert self.operations[0].getopnum() == rop.LABEL return self.operations[0].getarglist() def set_inputargs(self, inputargs): @@ -829,7 +829,7 @@ @staticmethod def check_consistency_of(operations): - assert operations[0].getopnum() == rop.TARGET + assert operations[0].getopnum() == rop.LABEL inputargs = operations[0].getarglist() seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) @@ -858,13 +858,13 @@ assert isinstance(box, Box) assert box not in seen seen[box] = True - if op.getopnum() == rop.TARGET: + if op.getopnum() == rop.LABEL: inputargs = op.getarglist() for box in inputargs: - assert isinstance(box, Box), "TARGET contains %r" % (box,) + assert isinstance(box, Box), "LABEL contains %r" % (box,) seen = dict.fromkeys(inputargs) assert len(seen) == len(inputargs), ( - "duplicate Box in the TARGET arguments") + "duplicate Box in the LABEL arguments") assert operations[-1].is_final() if operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -95,22 +95,22 @@ preamble.start_resumedescr = FakeDescr() token = LoopToken() # FIXME: Make this a MergePointToken? - preamble.operations = [ResOperation(rop.TARGET, inputargs, None, descr=TargetToken(token))] + \ + preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ operations + \ - [ResOperation(rop.TARGET, jump_args, None, descr=TargetToken(token))] + [ResOperation(rop.LABEL, jump_args, None, descr=TargetToken(token))] self._do_optimize_loop(preamble, call_pure_results) inliner = Inliner(inputargs, jump_args) loop.start_resumedescr = preamble.start_resumedescr loop.operations = [preamble.operations[-1]] + \ [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ - [ResOperation(rop.TARGET, [inliner.inline_arg(a) for a in jump_args], + [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jump_args], None, descr=TargetToken(token))] #[inliner.inline_op(jumpop)] self._do_optimize_loop(loop, call_pure_results) extra_same_as = [] - while loop.operations[0].getopnum() != rop.TARGET: + while loop.operations[0].getopnum() != rop.LABEL: extra_same_as.append(loop.operations[0]) del loop.operations[0] @@ -155,11 +155,11 @@ def convert_old_style_to_targets(loop, jump): newloop = TreeLoop(loop.name) - newloop.operations = [ResOperation(rop.TARGET, loop.inputargs, None, descr=FakeDescr())] + \ + newloop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=FakeDescr())] + \ loop.operations if not jump: assert newloop.operations[-1].getopnum() == rop.JUMP - newloop.operations[-1] = ResOperation(rop.TARGET, newloop.operations[-1].getarglist(), None, descr=FakeDescr()) + newloop.operations[-1] = ResOperation(rop.LABEL, newloop.operations[-1].getarglist(), None, descr=FakeDescr()) return newloop class OptimizeOptTest(BaseTestWithUnroll): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -118,7 +118,7 @@ def propagate_all_forward(self): loop = self.optimizer.loop start_targetop = loop.operations[0] - assert start_targetop.getopnum() == rop.TARGET + assert start_targetop.getopnum() == rop.LABEL loop.operations = loop.operations[1:] self.optimizer.clear_newoperations() self.optimizer.send_extra_operation(start_targetop) @@ -126,15 +126,15 @@ self.import_state(start_targetop) lastop = loop.operations[-1] - assert lastop.getopnum() == rop.TARGET + assert lastop.getopnum() == rop.LABEL loop.operations = loop.operations[:-1] - #if lastop.getopnum() == rop.TARGET or lastop.getopnum() == rop.JUMP: + #if lastop.getopnum() == rop.LABEL or lastop.getopnum() == rop.JUMP: # loop.operations = loop.operations[:-1] #FIXME: FINISH self.optimizer.propagate_all_forward(clear=False) - #if lastop.getopnum() == rop.TARGET: + #if lastop.getopnum() == rop.LABEL: if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() @@ -189,7 +189,7 @@ assert isinstance(target_token, TargetToken) targetop.initarglist(inputargs) target_token.virtual_state = virtual_state - target_token.short_preamble = [ResOperation(rop.TARGET, short_inputargs, None)] + target_token.short_preamble = [ResOperation(rop.LABEL, short_inputargs, None)] target_token.exported_state = ExportedState(constant_inputargs, short_boxes, inputarg_setup_ops, self.optimizer, start_resumedescr) @@ -276,7 +276,7 @@ newoperations = self.optimizer.get_newoperations() self.boxes_created_this_iteration = {} i = j = 0 - while newoperations[i].getopnum() != rop.TARGET: + while newoperations[i].getopnum() != rop.LABEL: i += 1 while i < len(newoperations) or j < len(jumpargs): if i == len(newoperations): diff --git a/pypy/jit/metainterp/optimizeopt/util.py b/pypy/jit/metainterp/optimizeopt/util.py --- a/pypy/jit/metainterp/optimizeopt/util.py +++ b/pypy/jit/metainterp/optimizeopt/util.py @@ -148,7 +148,7 @@ assert op1.result.same_box(remap[op2.result]) else: remap[op2.result] = op1.result - if op1.getopnum() not in (rop.JUMP, rop.TARGET): # xxx obscure + if op1.getopnum() not in (rop.JUMP, rop.LABEL): # xxx obscure assert op1.getdescr() == op2.getdescr() if op1.getfailargs() or op2.getfailargs(): assert len(op1.getfailargs()) == len(op2.getfailargs()) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -366,7 +366,7 @@ 'FINISH/*d', '_FINAL_LAST', - 'TARGET/*d', + 'LABEL/*d', '_GUARD_FIRST', '_GUARD_FOLDABLE_FIRST', From noreply at buildbot.pypy.org Sat Nov 5 10:32:28 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 10:32:28 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111105093228.96C98820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48778:c4dce4f412b1 Date: 2011-11-05 10:32 +0100 http://bitbucket.org/pypy/pypy/changeset/c4dce4f412b1/ Log: Fix. diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) From noreply at buildbot.pypy.org Sat Nov 5 13:34:06 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 13:34:06 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Support LABEL in the x86 backend. Probably not RPython yet. Message-ID: <20111105123406.C55FD820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48779:8e7affa7409d Date: 2011-11-05 13:33 +0100 http://bitbucket.org/pypy/pypy/changeset/8e7affa7409d/ Log: Support LABEL in the x86 backend. Probably not RPython yet. diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -638,7 +638,7 @@ # return _op_default_implementation - def op_target(self, _, *args): + def op_label(self, _, *args): pass def op_debug_merge_point(self, _, *args): diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -183,7 +183,7 @@ if isinstance(descr, history.LoopToken): if op.getopnum() != rop.JUMP: llimpl.compile_add_loop_token(c, descr) - if isinstance(descr, history.TargetToken) and op.getopnum() == rop.TARGET: + if isinstance(descr, history.TargetToken) and op.getopnum() == rop.LABEL: llimpl.compile_add_target_token(c, descr) if self.is_oo and isinstance(descr, (OODescr, MethDescr)): # hack hack, not rpython diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2971,13 +2971,13 @@ i2 = BoxInt() i3 = BoxInt() looptoken = LoopToken() - targettoken = TargetToken() + targettoken = TargetToken(None) faildescr = BasicFailDescr(2) operations = [ ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr), - ResOperation(rop.TARGET, [i1], None, descr=targettoken), + ResOperation(rop.LABEL, [i1], None, descr=targettoken), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=BasicFailDescr(3)), ResOperation(rop.JUMP, [i1], None, descr=looptoken), diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -152,14 +152,13 @@ allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) + self.target_tokens_currently_compiling = {} def teardown(self): self.pending_guard_tokens = None if WORD == 8: self.pending_memoryerror_trampoline_from = None self.mc = None - self.looppos = -1 - self.currently_compiling_loop = None self.current_clt = None def finish_once(self): @@ -443,7 +442,6 @@ assert len(set(inputargs)) == len(inputargs) self.setup(looptoken) - self.currently_compiling_loop = looptoken if log: self._register_counter(False, looptoken.number) operations = self._inject_debugging_code(looptoken, operations) @@ -455,7 +453,9 @@ bootstrappos = self.mc.get_relative_pos() stackadjustpos = self._assemble_bootstrap_code(inputargs, arglocs) - self.looppos = self.mc.get_relative_pos() + looppos = self.mc.get_relative_pos() + looptoken._x86_loop_code = looppos + self.target_tokens_currently_compiling[looptoken] = None looptoken._x86_frame_depth = -1 # temporarily looptoken._x86_param_depth = -1 # temporarily frame_depth, param_depth = self._assemble(regalloc, operations) @@ -463,7 +463,7 @@ looptoken._x86_param_depth = param_depth directbootstrappos = self.mc.get_relative_pos() - self._assemble_bootstrap_direct_call(arglocs, self.looppos, + self._assemble_bootstrap_direct_call(arglocs, looppos, frame_depth+param_depth) self.write_pending_failure_recoveries() fullsize = self.mc.get_relative_pos() @@ -472,7 +472,7 @@ debug_start("jit-backend-addr") debug_print("Loop %d (%s) has address %x to %x (bootstrap %x)" % ( looptoken.number, loopname, - rawstart + self.looppos, + rawstart + looppos, rawstart + directbootstrappos, rawstart)) debug_stop("jit-backend-addr") @@ -488,8 +488,8 @@ looptoken._x86_ops_offset = ops_offset looptoken._x86_bootstrap_code = rawstart + bootstrappos - looptoken._x86_loop_code = rawstart + self.looppos looptoken._x86_direct_bootstrap_code = rawstart + directbootstrappos + self.fixup_target_tokens(rawstart) self.teardown() # oprofile support if self.cpu.profile_agent is not None: @@ -548,6 +548,7 @@ # patch the jump from original guard self.patch_jump_for_descr(faildescr, rawstart) ops_offset = self.mc.ops_offset + self.fixup_target_tokens(rawstart) self.teardown() # oprofile support if self.cpu.profile_agent is not None: @@ -668,6 +669,11 @@ mc.copy_to_raw_memory(adr_target) faildescr._x86_adr_jump_offset = 0 # means "patched" + def fixup_target_tokens(self, rawstart): + for looptoken in self.target_tokens_currently_compiling: + looptoken._x86_loop_code += rawstart + self.target_tokens_currently_compiling = None + @specialize.argtype(1) def _inject_debugging_code(self, looptoken, operations): if self._debug: @@ -2576,11 +2582,12 @@ return loop_token._x86_arglocs def closing_jump(self, loop_token): - if loop_token is self.currently_compiling_loop: + target = loop_token._x86_loop_code + if loop_token in self.target_tokens_currently_compiling: curpos = self.mc.get_relative_pos() + 5 - self.mc.JMP_l(self.looppos - curpos) + self.mc.JMP_l(target - curpos) else: - self.mc.JMP(imm(loop_token._x86_loop_code)) + self.mc.JMP(imm(target)) def malloc_cond(self, nursery_free_adr, nursery_top_adr, size, tid): size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,7 +5,8 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, LoopToken, INT, REF, FLOAT) + BoxFloat, LoopToken, INT, REF, FLOAT, + TargetToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated @@ -1313,9 +1314,9 @@ assembler = self.assembler assert self.jump_target_descr is None descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, (LoopToken, TargetToken)) # XXX refactor! + nonfloatlocs, floatlocs = assembler.target_arglocs(descr) self.jump_target_descr = descr - nonfloatlocs, floatlocs = assembler.target_arglocs(self.jump_target_descr) # compute 'tmploc' to be all_regs[0] by spilling what is there box = TempBox() box1 = TempBox() @@ -1388,6 +1389,27 @@ # the FORCE_TOKEN operation returns directly 'ebp' self.rm.force_allocate_frame_reg(op.result) + def consider_label(self, op): + # XXX big refactoring needed? + descr = op.getdescr() + assert isinstance(descr, TargetToken) + inputargs = op.getarglist() + floatlocs = [None] * len(inputargs) + nonfloatlocs = [None] * len(inputargs) + for i in range(len(inputargs)): + arg = inputargs[i] + assert not isinstance(arg, Const) + loc = self.loc(arg) + if arg.type == FLOAT: + floatlocs[i] = loc + else: + nonfloatlocs[i] = loc + descr._x86_arglocs = nonfloatlocs, floatlocs + descr._x86_loop_code = self.assembler.mc.get_relative_pos() + descr._x86_frame_depth = self.fm.frame_depth + descr._x86_param_depth = self.param_depth + self.assembler.target_tokens_currently_compiling[descr] = None + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) From noreply at buildbot.pypy.org Sat Nov 5 15:03:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 15:03:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: refactoring in progress Message-ID: <20111105140325.80895820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48780:0e0764aac5be Date: 2011-11-05 14:36 +0100 http://bitbucket.org/pypy/pypy/changeset/0e0764aac5be/ Log: refactoring in progress diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -391,7 +391,8 @@ def compile_add_jump_target(loop, targettoken): loop = _from_opaque(loop) - if isinstance(targettoken, history.LoopToken): + if isinstance(targettoken, history.ProcedureToken): + assert False loop_target = _from_opaque(targettoken.compiled_loop_token.compiled_version) target_opindex = 0 target_inputargs = loop_target.inputargs diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -180,10 +180,11 @@ if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, descr.arg_types) - if isinstance(descr, history.LoopToken): + if isinstance(descr, history.ProcedureToken): + assert False if op.getopnum() != rop.JUMP: llimpl.compile_add_loop_token(c, descr) - if isinstance(descr, history.TargetToken) and op.getopnum() == rop.TARGET: + if isinstance(descr, history.TargetToken) and op.getopnum() == rop.LABEL: llimpl.compile_add_target_token(c, descr) if self.is_oo and isinstance(descr, (OODescr, MethDescr)): # hack hack, not rpython diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -9,12 +9,13 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.jit.metainterp.resoperation import ResOperation, rop, get_deep_immutable_oplist -from pypy.jit.metainterp.history import TreeLoop, Box, History, LoopToken +from pypy.jit.metainterp.history import TreeLoop, Box, History, ProcedureToken, TargetToken from pypy.jit.metainterp.history import AbstractFailDescr, BoxInt from pypy.jit.metainterp.history import BoxPtr, BoxObj, BoxFloat, Const from pypy.jit.metainterp import history from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.jit.metainterp.inliner import Inliner from pypy.jit.metainterp.resume import NUMBERING, PENDINGFIELDSP from pypy.jit.codewriter import heaptracker, longlong @@ -45,10 +46,10 @@ return loop -def make_loop_token(nb_args, jitdriver_sd): - loop_token = LoopToken() - loop_token.outermost_jitdriver_sd = jitdriver_sd - return loop_token +def make_procedure_token(jitdriver_sd): + procedure_token = ProcedureToken() + procedure_token.outermost_jitdriver_sd = jitdriver_sd + return procedure_token def record_loop_or_bridge(metainterp_sd, loop): """Do post-backend recordings and cleanups on 'loop'. @@ -67,16 +68,18 @@ n = descr.index if n >= 0: # we also record the resumedescr number looptoken.compiled_loop_token.record_faildescr_index(n) - elif isinstance(descr, LoopToken): + elif isinstance(descr, ProcedureToken): + assert False, "FIXME" + elif isinstance(descr, TargetToken): # for a JUMP or a CALL_ASSEMBLER: record it as a potential jump. # (the following test is not enough to prevent more complicated # cases of cycles, but at least it helps in simple tests of # test_memgr.py) - if descr is not looptoken: - looptoken.record_jump_to(descr) + if descr.procedure_token is not looptoken: + looptoken.record_jump_to(descr.procedure_token) op._descr = None # clear reference, mostly for tests if not we_are_translated(): - op._jumptarget_number = descr.number + op._jumptarget_number = descr.procedure_token.number # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -89,47 +92,60 @@ # ____________________________________________________________ -def compile_new_loop(metainterp, old_loop_tokens, greenkey, start, - start_resumedescr, full_preamble_needed=True): - """Try to compile a new loop by closing the current history back +def compile_procedure(metainterp, greenkey, start, + inputargs, jumpargs, + start_resumedescr, full_preamble_needed=True): + """Try to compile a new procedure by closing the current history back to the first operation. """ - from pypy.jit.metainterp.optimize import optimize_loop + from pypy.jit.metainterp.optimizeopt import optimize_trace history = metainterp.history + metainterp_sd = metainterp.staticdata + jitdriver_sd = metainterp.jitdriver_sd + loop = create_empty_loop(metainterp) - loop.inputargs = history.inputargs[:] + loop.inputargs = inputargs[:] + + procedure_token = make_procedure_token(jitdriver_sd) + part = create_empty_loop(metainterp) + h_ops = history.operations + part.start_resumedescr = start_resumedescr + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ + [h_ops[i].clone() for i in range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, jumpargs, None, descr=TargetToken(procedure_token))] + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + loop.operations = part.operations + while part.operations[-1].getopnum() == rop.LABEL: + inliner = Inliner(inputargs, jumpargs) + part.operations = [part.operations[-1]] + \ + [inliner.inline_op(h_ops[i]) for i in range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jumpargs], + None, descr=TargetToken(procedure_token))] + inputargs = jumpargs + jumpargs = part.operations[-1].getarglist() + + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + + loop.operations = loop.operations[:-1] + part.operations + for box in loop.inputargs: assert isinstance(box, Box) - # make a copy, because optimize_loop can mutate the ops and descrs - h_ops = history.operations - loop.operations = [h_ops[i].clone() for i in range(start, len(h_ops))] - metainterp_sd = metainterp.staticdata - jitdriver_sd = metainterp.jitdriver_sd - loop_token = make_loop_token(len(loop.inputargs), jitdriver_sd) - loop.token = loop_token - loop.operations[-1].setdescr(loop_token) # patch the target of the JUMP - loop.preamble = create_empty_loop(metainterp, 'Preamble ') - loop.preamble.inputargs = loop.inputargs - loop.preamble.token = make_loop_token(len(loop.inputargs), jitdriver_sd) - loop.preamble.start_resumedescr = start_resumedescr + loop.token = procedure_token - try: - old_loop_token = optimize_loop(metainterp_sd, old_loop_tokens, loop, - jitdriver_sd.warmstate.enable_opts) - except InvalidLoop: - debug_print("compile_new_loop: got an InvalidLoop") - return None - if old_loop_token is not None: - metainterp.staticdata.log("reusing old loop") - return old_loop_token + send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") + record_loop_or_bridge(metainterp_sd, loop) + return loop.token - if loop.preamble.operations is not None: - send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, - "loop") - record_loop_or_bridge(metainterp_sd, loop) - token = loop.preamble.token + + if False: # FIXME: full_preamble_needed?? if full_preamble_needed: send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop.preamble, "entry bridge") @@ -263,7 +279,7 @@ raise metainterp_sd.ExitFrameWithExceptionRef(cpu, value) -class TerminatingLoopToken(LoopToken): +class TerminatingLoopToken(ProcedureToken): # FIXME:!! terminating = True def __init__(self, nargs, finishdescr): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -727,7 +727,7 @@ # of operations. Each branch ends in a jump which can go either to # the top of the same loop, or to another TreeLoop; or it ends in a FINISH. -class LoopToken(AbstractDescr): +class ProcedureToken(AbstractDescr): """Used for rop.JUMP, giving the target of the jump. This is different from TreeLoop: the TreeLoop class contains the whole loop, including 'operations', and goes away after the loop @@ -766,8 +766,8 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self, merge_point): - self.merge_point = merge_point + def __init__(self, procedure_token): + self.procedure_token = procedure_token self.virtual_state = None self.exported_state = None @@ -870,7 +870,7 @@ if operations[-1].getopnum() == rop.JUMP: target = operations[-1].getdescr() if target is not None: - assert isinstance(target, LoopToken) + assert isinstance(target, TargetToken) def dump(self): # RPython-friendly diff --git a/pypy/jit/metainterp/inliner.py b/pypy/jit/metainterp/inliner.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/inliner.py @@ -0,0 +1,57 @@ +from pypy.jit.metainterp.history import Const +from pypy.jit.metainterp.resume import Snapshot + +class Inliner(object): + def __init__(self, inputargs, jump_args): + assert len(inputargs) == len(jump_args) + self.argmap = {} + for i in range(len(inputargs)): + if inputargs[i] in self.argmap: + assert self.argmap[inputargs[i]] == jump_args[i] + else: + self.argmap[inputargs[i]] = jump_args[i] + self.snapshot_map = {None: None} + + def inline_op(self, newop, ignore_result=False, clone=True, + ignore_failargs=False): + if clone: + newop = newop.clone() + args = newop.getarglist() + newop.initarglist([self.inline_arg(a) for a in args]) + + if newop.is_guard(): + args = newop.getfailargs() + if args and not ignore_failargs: + newop.setfailargs([self.inline_arg(a) for a in args]) + else: + newop.setfailargs([]) + + if newop.result and not ignore_result: + old_result = newop.result + newop.result = newop.result.clonebox() + self.argmap[old_result] = newop.result + + self.inline_descr_inplace(newop.getdescr()) + + return newop + + def inline_descr_inplace(self, descr): + from pypy.jit.metainterp.compile import ResumeGuardDescr + if isinstance(descr, ResumeGuardDescr): + descr.rd_snapshot = self.inline_snapshot(descr.rd_snapshot) + + def inline_arg(self, arg): + if arg is None: + return None + if isinstance(arg, Const): + return arg + return self.argmap[arg] + + def inline_snapshot(self, snapshot): + if snapshot in self.snapshot_map: + return self.snapshot_map[snapshot] + boxes = [self.inline_arg(a) for a in snapshot.boxes] + new_snapshot = Snapshot(self.inline_snapshot(snapshot.prev), boxes) + self.snapshot_map[snapshot] = new_snapshot + return new_snapshot + diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -1,11 +1,12 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.optimizeopt.virtualstate import VirtualStateAdder, ShortBoxes from pypy.jit.metainterp.compile import ResumeGuardDescr -from pypy.jit.metainterp.history import TreeLoop, LoopToken, TargetToken +from pypy.jit.metainterp.history import TreeLoop, TargetToken from pypy.jit.metainterp.jitexc import JitException from pypy.jit.metainterp.optimize import InvalidLoop, RetraceLoop from pypy.jit.metainterp.optimizeopt.optimizer import * from pypy.jit.metainterp.optimizeopt.generalize import KillHugeIntBounds +from pypy.jit.metainterp.inliner import Inliner from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.resume import Snapshot from pypy.rlib.debug import debug_print @@ -17,59 +18,6 @@ opt = UnrollOptimizer(metainterp_sd, loop, optimizations) opt.propagate_all_forward() -class Inliner(object): - def __init__(self, inputargs, jump_args): - assert len(inputargs) == len(jump_args) - self.argmap = {} - for i in range(len(inputargs)): - if inputargs[i] in self.argmap: - assert self.argmap[inputargs[i]] == jump_args[i] - else: - self.argmap[inputargs[i]] = jump_args[i] - self.snapshot_map = {None: None} - - def inline_op(self, newop, ignore_result=False, clone=True, - ignore_failargs=False): - if clone: - newop = newop.clone() - args = newop.getarglist() - newop.initarglist([self.inline_arg(a) for a in args]) - - if newop.is_guard(): - args = newop.getfailargs() - if args and not ignore_failargs: - newop.setfailargs([self.inline_arg(a) for a in args]) - else: - newop.setfailargs([]) - - if newop.result and not ignore_result: - old_result = newop.result - newop.result = newop.result.clonebox() - self.argmap[old_result] = newop.result - - self.inline_descr_inplace(newop.getdescr()) - - return newop - - def inline_descr_inplace(self, descr): - if isinstance(descr, ResumeGuardDescr): - descr.rd_snapshot = self.inline_snapshot(descr.rd_snapshot) - - def inline_arg(self, arg): - if arg is None: - return None - if isinstance(arg, Const): - return arg - return self.argmap[arg] - - def inline_snapshot(self, snapshot): - if snapshot in self.snapshot_map: - return self.snapshot_map[snapshot] - boxes = [self.inline_arg(a) for a in snapshot.boxes] - new_snapshot = Snapshot(self.inline_snapshot(snapshot.prev), boxes) - self.snapshot_map[snapshot] = new_snapshot - return new_snapshot - class UnrollableOptimizer(Optimizer): def setup(self): self.importable_values = {} @@ -143,7 +91,7 @@ self.export_state(lastop) loop.operations.append(lastop) else: - assert lastop.getdescr().merge_point is start_targetop.getdescr().merge_point + assert lastop.getdescr().procedure_token is start_targetop.getdescr().procedure_token jumpop = ResOperation(rop.JUMP, lastop.getarglist(), None, descr=start_targetop.getdescr()) self.close_loop(jumpop) @@ -474,7 +422,9 @@ def propagate_forward(self, op): if op.getopnum() == rop.JUMP: loop_token = op.getdescr() - assert isinstance(loop_token, TargetToken) + if not isinstance(loop_token, TargetToken): + self.emit_operation(op) + return short = loop_token.short_preamble if short: args = op.getarglist() diff --git a/pypy/jit/metainterp/optimizeopt/util.py b/pypy/jit/metainterp/optimizeopt/util.py --- a/pypy/jit/metainterp/optimizeopt/util.py +++ b/pypy/jit/metainterp/optimizeopt/util.py @@ -171,3 +171,4 @@ assert len(oplist1) == len(oplist2) print '-'*totwidth return True + diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1928,7 +1928,8 @@ # that failed; # - if self.resumekey is a ResumeFromInterpDescr, it starts directly # from the interpreter. - if not self.retracing_loop_from: + if False: # FIXME + if not self.retracing_loop_from: try: self.compile_bridge(live_arg_boxes) except RetraceLoop: @@ -1964,7 +1965,7 @@ live_arg_boxes, start, bridge_arg_boxes, resumedescr) else: - self.compile(original_boxes, live_arg_boxes, start, resumedescr) + self.compile_procedure(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! self.staticdata.log('cancelled, tracing more...') #self.staticdata.log('cancelled, stopping tracing') @@ -2020,36 +2021,25 @@ from pypy.jit.metainterp.resoperation import opname raise NotImplementedError(opname[opnum]) - def get_compiled_merge_points(self, greenkey): - """Get the list of looptokens corresponding to the greenkey. - Turns the (internal) list of weakrefs into regular refs. - """ + def get_procedure_token(self, greenkey): cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey) - return cell.get_compiled_merge_points() - - def set_compiled_merge_points(self, greenkey, looptokens): - cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey) - cell.set_compiled_merge_points(looptokens) - - def compile(self, original_boxes, live_arg_boxes, start, start_resumedescr): + return cell.get_procedure_token() + + def compile_procedure(self, original_boxes, live_arg_boxes, start, start_resumedescr): num_green_args = self.jitdriver_sd.num_green_args - original_inputargs = self.history.inputargs - self.history.inputargs = original_boxes[num_green_args:] greenkey = original_boxes[:num_green_args] - old_loop_tokens = self.get_compiled_merge_points(greenkey) - self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None) - loop_token = compile.compile_new_loop(self, old_loop_tokens, - greenkey, start, start_resumedescr) - if loop_token is not None: # raise if it *worked* correctly - self.set_compiled_merge_points(greenkey, old_loop_tokens) + assert self.get_procedure_token(greenkey) == None # FIXME: recursion? + procedure_token = compile.compile_procedure(self, greenkey, start, + original_boxes[num_green_args:], + live_arg_boxes[num_green_args:], + start_resumedescr) + if procedure_token is not None: # raise if it *worked* correctly + self.jitdriver_sd.attach_procedure_to_interp(greenkey, procedure_token) self.history.inputargs = None self.history.operations = None - raise GenerateMergePoint(live_arg_boxes, loop_token) + raise GenerateMergePoint(live_arg_boxes, procedure_token) - self.history.inputargs = original_inputargs - self.history.operations.pop() # remove the JUMP - - def compile_bridge(self, live_arg_boxes): + def compile_trace(self, live_arg_boxes): num_green_args = self.jitdriver_sd.num_green_args greenkey = live_arg_boxes[:num_green_args] old_loop_tokens = self.get_compiled_merge_points(greenkey) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -169,34 +169,20 @@ # counter == -1: there is an entry bridge for this cell # counter == -2: tracing is currently going on for this cell counter = 0 - compiled_merge_points_wref = None # list of weakrefs to LoopToken dont_trace_here = False - wref_entry_loop_token = None # (possibly) one weakref to LoopToken + wref_procedure_token = None - def get_compiled_merge_points(self): - result = [] - if self.compiled_merge_points_wref is not None: - for wref in self.compiled_merge_points_wref: - looptoken = wref() - if looptoken is not None and not looptoken.invalidated: - result.append(looptoken) - return result - - def set_compiled_merge_points(self, looptokens): - self.compiled_merge_points_wref = [self._makeref(token) - for token in looptokens] - - def get_entry_loop_token(self): - if self.wref_entry_loop_token is not None: - return self.wref_entry_loop_token() + def get_procedure_token(self): + if self.wref_procedure_token is not None: + return self.wref_procedure_token() return None - def set_entry_loop_token(self, looptoken): - self.wref_entry_loop_token = self._makeref(looptoken) + def set_procedure_token(self, token): + self.wref_procedure_token = self._makeref(token) - def _makeref(self, looptoken): - assert looptoken is not None - return weakref.ref(looptoken) + def _makeref(self, token): + assert token is not None + return weakref.ref(token) # ____________________________________________________________ @@ -283,18 +269,17 @@ debug_print("disabled inlining", loc) debug_stop("jit-disableinlining") - def attach_unoptimized_bridge_from_interp(self, greenkey, - entry_loop_token): + def attach_procedure_to_interp(self, greenkey, procedure_token): cell = self.jit_cell_at_key(greenkey) - old_token = cell.get_entry_loop_token() - cell.set_entry_loop_token(entry_loop_token) - cell.counter = -1 # valid entry bridge attached + old_token = cell.get_procedure_token() + cell.set_procedure_token(procedure_token) + cell.counter = -1 # valid procedure bridge attached if old_token is not None: - self.cpu.redirect_call_assembler(old_token, entry_loop_token) - # entry_loop_token is also kept alive by any loop that used + self.cpu.redirect_call_assembler(old_token, procedure_token) + # procedure_token is also kept alive by any loop that used # to point to old_token. Actually freeing old_token early # is a pointless optimization (it is tiny). - old_token.record_jump_to(entry_loop_token) + old_token.record_jump_to(procedure_token) # ---------- @@ -617,16 +602,16 @@ def get_assembler_token(greenkey, redboxes): # 'redboxes' is only used to know the types of red arguments cell = self.jit_cell_at_key(greenkey) - entry_loop_token = cell.get_entry_loop_token() - if entry_loop_token is None: + procedure_token = cell.get_procedure_token() + if procedure_token is None: from pypy.jit.metainterp.compile import compile_tmp_callback if cell.counter == -1: # used to be a valid entry bridge, cell.counter = 0 # but was freed in the meantime. memmgr = warmrunnerdesc.memory_manager - entry_loop_token = compile_tmp_callback(cpu, jd, greenkey, - redboxes, memmgr) - cell.set_entry_loop_token(entry_loop_token) - return entry_loop_token + procedure_token = compile_tmp_callback(cpu, jd, greenkey, + redboxes, memmgr) + cell.set_procedure_token(procedure_token) + return procedure_token self.get_assembler_token = get_assembler_token # From noreply at buildbot.pypy.org Sat Nov 5 15:03:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 15:03:26 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge Message-ID: <20111105140326.C3E12820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48781:592bf0aa2470 Date: 2011-11-05 14:36 +0100 http://bitbucket.org/pypy/pypy/changeset/592bf0aa2470/ Log: hg merge diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -639,7 +639,7 @@ # return _op_default_implementation - def op_target(self, _, *args): + def op_label(self, _, *args): pass def op_debug_merge_point(self, _, *args): diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2971,13 +2971,13 @@ i2 = BoxInt() i3 = BoxInt() looptoken = LoopToken() - targettoken = TargetToken() + targettoken = TargetToken(None) faildescr = BasicFailDescr(2) operations = [ ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr), - ResOperation(rop.TARGET, [i1], None, descr=targettoken), + ResOperation(rop.LABEL, [i1], None, descr=targettoken), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=BasicFailDescr(3)), ResOperation(rop.JUMP, [i1], None, descr=looptoken), diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -152,14 +152,13 @@ allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) + self.target_tokens_currently_compiling = {} def teardown(self): self.pending_guard_tokens = None if WORD == 8: self.pending_memoryerror_trampoline_from = None self.mc = None - self.looppos = -1 - self.currently_compiling_loop = None self.current_clt = None def finish_once(self): @@ -443,7 +442,6 @@ assert len(set(inputargs)) == len(inputargs) self.setup(looptoken) - self.currently_compiling_loop = looptoken if log: self._register_counter(False, looptoken.number) operations = self._inject_debugging_code(looptoken, operations) @@ -455,7 +453,9 @@ bootstrappos = self.mc.get_relative_pos() stackadjustpos = self._assemble_bootstrap_code(inputargs, arglocs) - self.looppos = self.mc.get_relative_pos() + looppos = self.mc.get_relative_pos() + looptoken._x86_loop_code = looppos + self.target_tokens_currently_compiling[looptoken] = None looptoken._x86_frame_depth = -1 # temporarily looptoken._x86_param_depth = -1 # temporarily frame_depth, param_depth = self._assemble(regalloc, operations) @@ -463,7 +463,7 @@ looptoken._x86_param_depth = param_depth directbootstrappos = self.mc.get_relative_pos() - self._assemble_bootstrap_direct_call(arglocs, self.looppos, + self._assemble_bootstrap_direct_call(arglocs, looppos, frame_depth+param_depth) self.write_pending_failure_recoveries() fullsize = self.mc.get_relative_pos() @@ -472,7 +472,7 @@ debug_start("jit-backend-addr") debug_print("Loop %d (%s) has address %x to %x (bootstrap %x)" % ( looptoken.number, loopname, - rawstart + self.looppos, + rawstart + looppos, rawstart + directbootstrappos, rawstart)) debug_stop("jit-backend-addr") @@ -488,8 +488,8 @@ looptoken._x86_ops_offset = ops_offset looptoken._x86_bootstrap_code = rawstart + bootstrappos - looptoken._x86_loop_code = rawstart + self.looppos looptoken._x86_direct_bootstrap_code = rawstart + directbootstrappos + self.fixup_target_tokens(rawstart) self.teardown() # oprofile support if self.cpu.profile_agent is not None: @@ -548,6 +548,7 @@ # patch the jump from original guard self.patch_jump_for_descr(faildescr, rawstart) ops_offset = self.mc.ops_offset + self.fixup_target_tokens(rawstart) self.teardown() # oprofile support if self.cpu.profile_agent is not None: @@ -668,6 +669,11 @@ mc.copy_to_raw_memory(adr_target) faildescr._x86_adr_jump_offset = 0 # means "patched" + def fixup_target_tokens(self, rawstart): + for looptoken in self.target_tokens_currently_compiling: + looptoken._x86_loop_code += rawstart + self.target_tokens_currently_compiling = None + @specialize.argtype(1) def _inject_debugging_code(self, looptoken, operations): if self._debug: @@ -2576,11 +2582,12 @@ return loop_token._x86_arglocs def closing_jump(self, loop_token): - if loop_token is self.currently_compiling_loop: + target = loop_token._x86_loop_code + if loop_token in self.target_tokens_currently_compiling: curpos = self.mc.get_relative_pos() + 5 - self.mc.JMP_l(self.looppos - curpos) + self.mc.JMP_l(target - curpos) else: - self.mc.JMP(imm(loop_token._x86_loop_code)) + self.mc.JMP(imm(target)) def malloc_cond(self, nursery_free_adr, nursery_top_adr, size, tid): size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,7 +5,8 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, LoopToken, INT, REF, FLOAT) + BoxFloat, LoopToken, INT, REF, FLOAT, + TargetToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated @@ -1313,9 +1314,9 @@ assembler = self.assembler assert self.jump_target_descr is None descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, (LoopToken, TargetToken)) # XXX refactor! + nonfloatlocs, floatlocs = assembler.target_arglocs(descr) self.jump_target_descr = descr - nonfloatlocs, floatlocs = assembler.target_arglocs(self.jump_target_descr) # compute 'tmploc' to be all_regs[0] by spilling what is there box = TempBox() box1 = TempBox() @@ -1388,6 +1389,27 @@ # the FORCE_TOKEN operation returns directly 'ebp' self.rm.force_allocate_frame_reg(op.result) + def consider_label(self, op): + # XXX big refactoring needed? + descr = op.getdescr() + assert isinstance(descr, TargetToken) + inputargs = op.getarglist() + floatlocs = [None] * len(inputargs) + nonfloatlocs = [None] * len(inputargs) + for i in range(len(inputargs)): + arg = inputargs[i] + assert not isinstance(arg, Const) + loc = self.loc(arg) + if arg.type == FLOAT: + floatlocs[i] = loc + else: + nonfloatlocs[i] = loc + descr._x86_arglocs = nonfloatlocs, floatlocs + descr._x86_loop_code = self.assembler.mc.get_relative_pos() + descr._x86_frame_depth = self.fm.frame_depth + descr._x86_param_depth = self.param_depth + self.assembler.target_tokens_currently_compiling[descr] = None + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) From noreply at buildbot.pypy.org Sat Nov 5 15:03:27 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 15:03:27 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge default Message-ID: <20111105140327.F3944820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48782:d1f3913a23d1 Date: 2011-11-05 14:37 +0100 http://bitbucket.org/pypy/pypy/changeset/d1f3913a23d1/ Log: hg merge default diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -207,6 +207,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) From noreply at buildbot.pypy.org Sat Nov 5 15:03:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 15:03:29 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: first simple loop now passed along all the way in the new format using labels Message-ID: <20111105140329.338CD820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48783:05b67bb3c2ac Date: 2011-11-05 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/05b67bb3c2ac/ Log: first simple loop now passed along all the way in the new format using labels diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -134,7 +134,7 @@ return None loop.operations = loop.operations[:-1] + part.operations - + for box in loop.inputargs: assert isinstance(box, Box) @@ -142,7 +142,7 @@ send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) - return loop.token + return procedure_token if False: # FIXME: full_preamble_needed?? diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2034,7 +2034,7 @@ live_arg_boxes[num_green_args:], start_resumedescr) if procedure_token is not None: # raise if it *worked* correctly - self.jitdriver_sd.attach_procedure_to_interp(greenkey, procedure_token) + self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, procedure_token) self.history.inputargs = None self.history.operations = None raise GenerateMergePoint(live_arg_boxes, procedure_token) From noreply at buildbot.pypy.org Sat Nov 5 16:21:33 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 16:21:33 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: count all operations in each test Message-ID: <20111105152133.91F6D820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48784:c6d704e0080f Date: 2011-11-05 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/c6d704e0080f/ Log: count all operations in each test diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1023,12 +1023,9 @@ "found %d %r, expected %d" % (found, insn, expected_count)) return insns - def check_loops(self, expected=None, everywhere=False, **check): + def check_resops(self, expected=None, **check): insns = {} for loop in self.loops: - if not everywhere: - if getattr(loop, '_ignore_during_counting', False): - continue insns = loop.summary(adding_insns=insns) if expected is not None: insns.pop('debug_merge_point', None) @@ -1039,6 +1036,36 @@ assert found == expected_count, ( "found %d %r, expected %d" % (found, insn, expected_count)) return insns + + def check_loops(self, expected=None, everywhere=False, **check): + insns = {} + for loop in self.loops: + #if not everywhere: + # if getattr(loop, '_ignore_during_counting', False): + # continue + insns = loop.summary(adding_insns=insns) + if expected is not None: + insns.pop('debug_merge_point', None) + print + print + print " self.check_resops(%s)" % str(insns) + print + import pdb; pdb.set_trace() + else: + chk = ['%s=%d' % (i, insns.get(i, 0)) for i in check] + print + print + print " self.check_resops(%s)" % ', '.join(chk) + print + import pdb; pdb.set_trace() + return + + for insn, expected_count in check.items(): + getattr(rop, insn.upper()) # fails if 'rop.INSN' does not exist + found = insns.get(insn, 0) + assert found == expected_count, ( + "found %d %r, expected %d" % (found, insn, expected_count)) + return insns def check_consistency(self): "NOT_RPYTHON" diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -155,9 +155,13 @@ class JitMixin: basic = True + def check_resops(self, expected=None, **check): + get_stats().check_resops(expected=expected, **check) + + def check_loops(self, expected=None, everywhere=False, **check): get_stats().check_loops(expected=expected, everywhere=everywhere, - **check) + **check) def check_loop_count(self, count): """NB. This is a hack; use check_tree_loop_count() or check_enter_count() for the real thing. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -79,9 +79,8 @@ res = self.meta_interp(f, [6, 7]) assert res == 42 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, 'guard_true': 2, 'int_sub': 2}) + if self.basic: found = 0 for op in get_stats().loops[0]._all_operations(): @@ -108,7 +107,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 1323 self.check_loop_count(1) - self.check_loops(int_mul=1) + self.check_resops(int_mul=3) def test_loop_variant_mul_ovf(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -125,7 +124,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 1323 self.check_loop_count(1) - self.check_loops(int_mul_ovf=1) + self.check_resops(int_mul_ovf=3) def test_loop_invariant_mul1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -140,9 +139,9 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) + def test_loop_invariant_mul_ovf(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -158,10 +157,10 @@ res = self.meta_interp(f, [6, 7]) assert res == 308 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 2, 'int_sub': 1, 'int_gt': 1, - 'int_lshift': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_lshift': 2, 'int_gt': 2, + 'int_mul_ovf': 1, 'int_add': 4, + 'guard_true': 2, 'guard_no_overflow': 1, + 'int_sub': 2}) def test_loop_invariant_mul_bridge1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -194,11 +193,9 @@ res = self.meta_interp(f, [6, 32]) assert res == 1167 self.check_loop_count(3) - self.check_loops({'int_add': 3, 'int_lt': 2, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, - 'int_gt': 1, 'guard_true': 2}) - + self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, + 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining2(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -216,10 +213,9 @@ res = self.meta_interp(f, [6, 32]) assert res == 1692 self.check_loop_count(3) - self.check_loops({'int_add': 3, 'int_lt': 2, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, - 'int_gt': 1, 'guard_true': 2}) + self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, + 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining3(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x', 'm']) @@ -237,10 +233,9 @@ res = self.meta_interp(f, [6, 32, 16]) assert res == 1692 self.check_loop_count(3) - self.check_loops({'int_add': 2, 'int_lt': 1, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, 'int_mul': 1, - 'int_gt': 2, 'guard_true': 2}) + self.check_resops({'int_lt': 2, 'int_gt': 4, 'guard_false': 2, + 'guard_true': 4, 'int_sub': 4, 'jump': 4, + 'int_mul': 3, 'int_add': 4}) def test_loop_invariant_intbox(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -261,9 +256,9 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'getfield_gc_pure': 1, 'int_mul': 1, + 'guard_true': 2, 'int_sub': 2}) def test_loops_are_transient(self): import gc, weakref @@ -381,7 +376,7 @@ assert res == 0 # CALL_PURE is recorded in the history, but turned into a CALL # by optimizeopt.py - self.check_loops(int_sub=0, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=0) def test_constfold_call_elidable(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -397,7 +392,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) def test_constfold_call_elidable_2(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -417,7 +412,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) def test_elidable_function_returning_object(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -442,7 +437,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0, getfield_gc=0) + self.check_resops(call_pure=0, call=0, getfield_gc=1, int_sub=2) def test_elidable_raising(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -463,12 +458,12 @@ res = self.meta_interp(f, [22, 6]) assert res == -3 # the CALL_PURE is constant-folded away during tracing - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) # res = self.meta_interp(f, [22, -5]) assert res == 0 # raises: becomes CALL and is not constant-folded away - self.check_loops(int_sub=1, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=2) def test_elidable_raising_2(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -489,12 +484,12 @@ res = self.meta_interp(f, [22, 6]) assert res == -3 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) # res = self.meta_interp(f, [22, -5]) assert res == 0 # raises: becomes CALL and is not constant-folded away - self.check_loops(int_sub=1, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=2) def test_constant_across_mp(self): myjitdriver = JitDriver(greens = [], reds = ['n']) @@ -533,7 +528,7 @@ policy = StopAtXPolicy(externfn) res = self.meta_interp(f, [31], policy=policy) assert res == 42 - self.check_loops(int_mul=1, int_mod=0) + self.check_resops(int_mul=2, int_mod=0) def test_we_are_jitted(self): myjitdriver = JitDriver(greens = [], reds = ['y']) @@ -835,7 +830,7 @@ return n res = self.meta_interp(f, [20, 1, 2]) assert res == 0 - self.check_loops(call=0) + self.check_resops(call=0) def test_abs(self): myjitdriver = JitDriver(greens = [], reds = ['i', 't']) @@ -865,9 +860,8 @@ res = self.meta_interp(f, [6, 7]) assert res == 42.0 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'float_add': 1, 'float_sub': 1, 'float_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'float_gt': 2, 'float_add': 2, + 'float_sub': 2, 'guard_true': 2}) def test_print(self): myjitdriver = JitDriver(greens = [], reds = ['n']) @@ -1038,7 +1032,7 @@ return x res = self.meta_interp(f, [20], enable_opts='') assert res == f(20) - self.check_loops(call=0) + self.check_resops(call=0) def test_zerodivisionerror(self): # test the case of exception-raising operation that is not delegated @@ -1348,7 +1342,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 42 self.check_loop_count(1) - self.check_loops(call=1) + self.check_resops(call=2) def test_merge_guardclass_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1375,8 +1369,7 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_value=3) - self.check_loops(guard_class=0, guard_value=6, everywhere=True) + self.check_resops(guard_class=0, guard_value=6) def test_merge_guardnonnull_guardclass(self): from pypy.rlib.objectmodel import instantiate @@ -1404,11 +1397,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, - guard_nonnull_class=2, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, - guard_nonnull_class=4, guard_isnull=2, - everywhere=True) + self.check_resops(guard_class=0, guard_nonnull=4, + guard_nonnull_class=4, guard_isnull=2) + def test_merge_guardnonnull_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1435,11 +1426,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, guard_value=2, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, guard_value=4, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_value=4, guard_class=0, guard_nonnull=4, + guard_nonnull_class=0, guard_isnull=2) + def test_merge_guardnonnull_guardvalue_2(self): from pypy.rlib.objectmodel import instantiate @@ -1466,11 +1455,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, guard_value=2, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, guard_value=4, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_value=4, guard_class=0, guard_nonnull=4, + guard_nonnull_class=0, guard_isnull=2) + def test_merge_guardnonnull_guardclass_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1500,11 +1487,9 @@ return x res = self.meta_interp(f, [399], listops=True) assert res == f(399) - self.check_loops(guard_class=0, guard_nonnull=3, guard_value=3, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=6, guard_value=6, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_class=0, guard_nonnull=6, guard_value=6, + guard_nonnull_class=0, guard_isnull=2) + def test_residual_call_doesnt_lose_info(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'l']) @@ -1530,8 +1515,7 @@ y.v = g(y.v) - y.v/y.v + lc/l[0] - 1 return y.v res = self.meta_interp(f, [20], listops=True) - self.check_loops(getfield_gc=0, getarrayitem_gc=0) - self.check_loops(getfield_gc=1, getarrayitem_gc=0, everywhere=True) + self.check_resops(getarrayitem_gc=0, getfield_gc=1) def test_guard_isnull_nonnull(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res']) @@ -1559,7 +1543,7 @@ return res res = self.meta_interp(f, [21]) assert res == 42 - self.check_loops(guard_nonnull=1, guard_isnull=1) + self.check_resops(guard_nonnull=2, guard_isnull=2) def test_loop_invariant1(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res']) @@ -1586,8 +1570,7 @@ return res res = self.meta_interp(g, [21]) assert res == 3 * 21 - self.check_loops(call=0) - self.check_loops(call=1, everywhere=True) + self.check_resops(call=1) def test_bug_optimizeopt_mutates_ops(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res', 'const', 'a']) @@ -1707,7 +1690,7 @@ return x res = self.meta_interp(f, [8]) assert res == 0 - self.check_loops(jit_debug=2) + self.check_resops(jit_debug=4) def test_assert_green(self): def f(x, promote_flag): @@ -1749,9 +1732,10 @@ res = self.meta_interp(g, [6, 7]) assert res == 6*8 + 6**8 self.check_loop_count(5) - self.check_loops({'guard_true': 2, - 'int_add': 1, 'int_mul': 1, 'int_sub': 2, - 'int_gt': 2, 'jump': 2}) + self.check_resops({'guard_class': 2, 'int_gt': 4, + 'getfield_gc': 4, 'guard_true': 4, + 'int_sub': 4, 'jump': 4, 'int_mul': 2, + 'int_add': 2}) def test_multiple_specialied_versions_array(self): myjitdriver = JitDriver(greens = [], reds = ['idx', 'y', 'x', 'res', @@ -1792,7 +1776,7 @@ res = self.meta_interp(g, [6, 14]) assert res == g(6, 14) self.check_loop_count(9) - self.check_loops(getarrayitem_gc=8, everywhere=True) + self.check_resops(getarrayitem_gc=8) def test_multiple_specialied_versions_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x', 'z', 'res']) @@ -1980,8 +1964,8 @@ res = self.meta_interp(g, [3, 23]) assert res == 7068153 self.check_loop_count(7) - self.check_loops(guard_true=4, guard_class=0, int_add=2, int_mul=2, - guard_false=2) + self.check_resops(guard_true=6, guard_class=2, int_mul=3, + int_add=3, guard_false=3) def test_dont_trace_every_iteration(self): myjitdriver = JitDriver(greens = [], reds = ['a', 'b', 'i', 'sa']) @@ -2225,27 +2209,27 @@ return sa assert self.meta_interp(f1, [5, 5]) == 50 - self.check_loops(int_rshift=0, everywhere=True) + self.check_resops(int_rshift=0) for f in (f1, f2): assert self.meta_interp(f, [5, 6]) == 50 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [10, 5]) == 100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [10, 6]) == 100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [5, 31]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) bigval = 1 while (bigval << 3).__class__ is int: bigval = bigval << 1 assert self.meta_interp(f, [bigval, 5]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) def test_overflowing_shift_neg(self): myjitdriver = JitDriver(greens = [], reds = ['a', 'b', 'n', 'sa']) @@ -2270,27 +2254,27 @@ return sa assert self.meta_interp(f1, [-5, 5]) == -50 - self.check_loops(int_rshift=0, everywhere=True) + self.check_resops(int_rshift=0) for f in (f1, f2): assert self.meta_interp(f, [-5, 6]) == -50 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-10, 5]) == -100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-10, 6]) == -100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-5, 31]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) bigval = 1 while (bigval << 3).__class__ is int: bigval = bigval << 1 assert self.meta_interp(f, [bigval, 5]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) def test_pure_op_not_to_be_propagated(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa']) @@ -2430,8 +2414,7 @@ if counter > 10: return 7 assert self.meta_interp(build, []) == 7 - self.check_loops(getfield_gc_pure=0) - self.check_loops(getfield_gc_pure=2, everywhere=True) + self.check_resops(getfield_gc_pure=2) def test_args_becomming_equal(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'b']) @@ -2564,7 +2547,7 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=2, int_ge=0, int_le=0) + self.check_resops(int_lt=4, int_le=0, int_ge=0, int_gt=2) def test_intbounds_not_generalized1(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa']) @@ -2581,7 +2564,8 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=3, int_ge=2, int_le=1) + self.check_resops(int_lt=6, int_le=2, int_ge=4, int_gt=3) + def test_intbounds_not_generalized2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'node']) @@ -2601,7 +2585,7 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=2, int_ge=1, int_le=1) + self.check_resops(int_lt=4, int_le=3, int_ge=3, int_gt=2) def test_retrace_limit1(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) @@ -2855,7 +2839,7 @@ return a[0].intvalue res = self.meta_interp(f, [100]) assert res == -2 - #self.check_loops(getarrayitem_gc=0, setarrayitem_gc=0) -- xxx? + self.check_resops(setarrayitem_gc=2, getarrayitem_gc=1) def test_retrace_ending_up_retracing_another_loop(self): @@ -2955,7 +2939,7 @@ i += 1 res = self.meta_interp(f, [32]) assert res == f(32) - self.check_loops(arraylen_gc=2) + self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) @@ -3142,9 +3126,9 @@ a = A(a.i + 1) self.meta_interp(f, []) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) self.meta_interp(f, [], enable_opts='') - self.check_loops(new_with_vtable=1) + self.check_resops(new_with_vtable=1) def test_two_loopinvariant_arrays1(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3236,7 +3220,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_loops(arraylen_gc=2, everywhere=True) + self.check_resops(arraylen_gc=2) def test_release_gil_flush_heap_cache(self): if sys.platform == "win32": @@ -3273,7 +3257,7 @@ lock.release() return n res = self.meta_interp(f, [10, 1]) - self.check_loops(getfield_gc=2) + self.check_resops(getfield_gc=4) assert res == f(10, 1) def test_jit_merge_point_with_raw_pointer(self): @@ -3337,10 +3321,10 @@ res = self.meta_interp(main, [0, 10, 2], enable_opts='') assert res == main(0, 10, 2) - self.check_loops(call=1) + self.check_resops(call=1) res = self.meta_interp(main, [1, 10, 2], enable_opts='') assert res == main(1, 10, 2) - self.check_loops(call=0) + self.check_resops(call=0) def test_look_inside_iff_virtual(self): # There's no good reason for this to be look_inside_iff, but it's a test! @@ -3365,10 +3349,10 @@ i += f(A(2), n) res = self.meta_interp(main, [0], enable_opts='') assert res == main(0) - self.check_loops(call=1, getfield_gc=0) + self.check_resops(call=1, getfield_gc=0) res = self.meta_interp(main, [1], enable_opts='') assert res == main(1) - self.check_loops(call=0, getfield_gc=0) + self.check_resops(call=0, getfield_gc=0) def test_reuse_elidable_result(self): driver = JitDriver(reds=['n', 's'], greens = []) @@ -3381,10 +3365,9 @@ return s res = self.meta_interp(main, [10]) assert res == main(10) - self.check_loops({ - 'call': 1, 'guard_no_exception': 1, 'guard_true': 1, 'int_add': 2, - 'int_gt': 1, 'int_sub': 1, 'strlen': 1, 'jump': 1, - }) + self.check_resops({'int_gt': 2, 'strlen': 2, 'guard_true': 2, + 'int_sub': 2, 'jump': 2, 'call': 2, + 'guard_no_exception': 2, 'int_add': 4}) def test_look_inside_iff_const_getarrayitem_gc_pure(self): driver = JitDriver(greens=['unroll'], reds=['s', 'n']) @@ -3416,10 +3399,10 @@ res = self.meta_interp(main, [0, 10]) assert res == main(0, 10) # 2 calls, one for f() and one for char_mul - self.check_loops(call=2) + self.check_resops(call=4) res = self.meta_interp(main, [1, 10]) assert res == main(1, 10) - self.check_loops(call=0) + self.check_resops(call=0) def test_setarrayitem_followed_by_arraycopy(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'x', 'y']) @@ -3520,7 +3503,8 @@ res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) def test_virtual_opaque_ptr(self): myjitdriver = JitDriver(greens = [], reds = ["n"]) @@ -3539,7 +3523,9 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) + def test_virtual_opaque_dict(self): myjitdriver = JitDriver(greens = [], reds = ["n"]) @@ -3559,7 +3545,10 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'int_gt': 2, 'getfield_gc': 1, 'int_eq': 1, + 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'guard_false': 1}) + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): f1 = lambda n: n+1 From noreply at buildbot.pypy.org Sat Nov 5 16:40:48 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 16:40:48 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: count all operations in each test Message-ID: <20111105154049.01204820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48785:b4eba0d6c859 Date: 2011-11-05 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/b4eba0d6c859/ Log: count all operations in each test diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -999,12 +999,9 @@ "found %d %r, expected %d" % (found, insn, expected_count)) return insns - def check_loops(self, expected=None, everywhere=False, **check): + def check_resops(self, expected=None, **check): insns = {} for loop in self.loops: - if not everywhere: - if getattr(loop, '_ignore_during_counting', False): - continue insns = loop.summary(adding_insns=insns) if expected is not None: insns.pop('debug_merge_point', None) @@ -1015,6 +1012,36 @@ assert found == expected_count, ( "found %d %r, expected %d" % (found, insn, expected_count)) return insns + + def check_loops(self, expected=None, everywhere=False, **check): + insns = {} + for loop in self.loops: + #if not everywhere: + # if getattr(loop, '_ignore_during_counting', False): + # continue + insns = loop.summary(adding_insns=insns) + if expected is not None: + insns.pop('debug_merge_point', None) + print + print + print " self.check_resops(%s)" % str(insns) + print + import pdb; pdb.set_trace() + else: + chk = ['%s=%d' % (i, insns.get(i, 0)) for i in check] + print + print + print " self.check_resops(%s)" % ', '.join(chk) + print + import pdb; pdb.set_trace() + return + + for insn, expected_count in check.items(): + getattr(rop, insn.upper()) # fails if 'rop.INSN' does not exist + found = insns.get(insn, 0) + assert found == expected_count, ( + "found %d %r, expected %d" % (found, insn, expected_count)) + return insns def check_consistency(self): "NOT_RPYTHON" diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -155,9 +155,13 @@ class JitMixin: basic = True + def check_resops(self, expected=None, **check): + get_stats().check_resops(expected=expected, **check) + + def check_loops(self, expected=None, everywhere=False, **check): get_stats().check_loops(expected=expected, everywhere=everywhere, - **check) + **check) def check_loop_count(self, count): """NB. This is a hack; use check_tree_loop_count() or check_enter_count() for the real thing. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -79,9 +79,8 @@ res = self.meta_interp(f, [6, 7]) assert res == 42 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, 'guard_true': 2, 'int_sub': 2}) + if self.basic: found = 0 for op in get_stats().loops[0]._all_operations(): @@ -108,7 +107,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 1323 self.check_loop_count(1) - self.check_loops(int_mul=1) + self.check_resops(int_mul=3) def test_loop_variant_mul_ovf(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -125,7 +124,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 1323 self.check_loop_count(1) - self.check_loops(int_mul_ovf=1) + self.check_resops(int_mul_ovf=3) def test_loop_invariant_mul1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -140,9 +139,9 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) + def test_loop_invariant_mul_ovf(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -158,10 +157,10 @@ res = self.meta_interp(f, [6, 7]) assert res == 308 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 2, 'int_sub': 1, 'int_gt': 1, - 'int_lshift': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_lshift': 2, 'int_gt': 2, + 'int_mul_ovf': 1, 'int_add': 4, + 'guard_true': 2, 'guard_no_overflow': 1, + 'int_sub': 2}) def test_loop_invariant_mul_bridge1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -194,11 +193,9 @@ res = self.meta_interp(f, [6, 32]) assert res == 1167 self.check_loop_count(3) - self.check_loops({'int_add': 3, 'int_lt': 2, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, - 'int_gt': 1, 'guard_true': 2}) - + self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, + 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining2(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -216,10 +213,9 @@ res = self.meta_interp(f, [6, 32]) assert res == 1692 self.check_loop_count(3) - self.check_loops({'int_add': 3, 'int_lt': 2, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, - 'int_gt': 1, 'guard_true': 2}) + self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, + 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining3(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x', 'm']) @@ -237,10 +233,9 @@ res = self.meta_interp(f, [6, 32, 16]) assert res == 1692 self.check_loop_count(3) - self.check_loops({'int_add': 2, 'int_lt': 1, - 'int_sub': 2, 'guard_false': 1, - 'jump': 2, 'int_mul': 1, - 'int_gt': 2, 'guard_true': 2}) + self.check_resops({'int_lt': 2, 'int_gt': 4, 'guard_false': 2, + 'guard_true': 4, 'int_sub': 4, 'jump': 4, + 'int_mul': 3, 'int_add': 4}) def test_loop_invariant_intbox(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -261,9 +256,9 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'int_add': 1, 'int_sub': 1, 'int_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'getfield_gc_pure': 1, 'int_mul': 1, + 'guard_true': 2, 'int_sub': 2}) def test_loops_are_transient(self): import gc, weakref @@ -381,7 +376,7 @@ assert res == 0 # CALL_PURE is recorded in the history, but turned into a CALL # by optimizeopt.py - self.check_loops(int_sub=0, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=0) def test_constfold_call_elidable(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -397,7 +392,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) def test_constfold_call_elidable_2(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -417,7 +412,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) def test_elidable_function_returning_object(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -442,7 +437,7 @@ res = self.meta_interp(f, [21, 5]) assert res == -1 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0, getfield_gc=0) + self.check_resops(call_pure=0, call=0, getfield_gc=1, int_sub=2) def test_elidable_raising(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -463,12 +458,12 @@ res = self.meta_interp(f, [22, 6]) assert res == -3 # the CALL_PURE is constant-folded away during tracing - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) # res = self.meta_interp(f, [22, -5]) assert res == 0 # raises: becomes CALL and is not constant-folded away - self.check_loops(int_sub=1, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=2) def test_elidable_raising_2(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n']) @@ -489,12 +484,12 @@ res = self.meta_interp(f, [22, 6]) assert res == -3 # the CALL_PURE is constant-folded away by optimizeopt.py - self.check_loops(int_sub=1, call=0, call_pure=0) + self.check_resops(call_pure=0, call=0, int_sub=2) # res = self.meta_interp(f, [22, -5]) assert res == 0 # raises: becomes CALL and is not constant-folded away - self.check_loops(int_sub=1, call=1, call_pure=0) + self.check_resops(call_pure=0, call=2, int_sub=2) def test_constant_across_mp(self): myjitdriver = JitDriver(greens = [], reds = ['n']) @@ -533,7 +528,7 @@ policy = StopAtXPolicy(externfn) res = self.meta_interp(f, [31], policy=policy) assert res == 42 - self.check_loops(int_mul=1, int_mod=0) + self.check_resops(int_mul=2, int_mod=0) def test_we_are_jitted(self): myjitdriver = JitDriver(greens = [], reds = ['y']) @@ -835,7 +830,7 @@ return n res = self.meta_interp(f, [20, 1, 2]) assert res == 0 - self.check_loops(call=0) + self.check_resops(call=0) def test_abs(self): myjitdriver = JitDriver(greens = [], reds = ['i', 't']) @@ -865,9 +860,8 @@ res = self.meta_interp(f, [6, 7]) assert res == 42.0 self.check_loop_count(1) - self.check_loops({'guard_true': 1, - 'float_add': 1, 'float_sub': 1, 'float_gt': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'float_gt': 2, 'float_add': 2, + 'float_sub': 2, 'guard_true': 2}) def test_print(self): myjitdriver = JitDriver(greens = [], reds = ['n']) @@ -1038,7 +1032,7 @@ return x res = self.meta_interp(f, [20], enable_opts='') assert res == f(20) - self.check_loops(call=0) + self.check_resops(call=0) def test_zerodivisionerror(self): # test the case of exception-raising operation that is not delegated @@ -1348,7 +1342,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 42 self.check_loop_count(1) - self.check_loops(call=1) + self.check_resops(call=2) def test_merge_guardclass_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1375,8 +1369,7 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_value=3) - self.check_loops(guard_class=0, guard_value=6, everywhere=True) + self.check_resops(guard_class=0, guard_value=6) def test_merge_guardnonnull_guardclass(self): from pypy.rlib.objectmodel import instantiate @@ -1404,11 +1397,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, - guard_nonnull_class=2, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, - guard_nonnull_class=4, guard_isnull=2, - everywhere=True) + self.check_resops(guard_class=0, guard_nonnull=4, + guard_nonnull_class=4, guard_isnull=2) + def test_merge_guardnonnull_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1435,11 +1426,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, guard_value=2, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, guard_value=4, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_value=4, guard_class=0, guard_nonnull=4, + guard_nonnull_class=0, guard_isnull=2) + def test_merge_guardnonnull_guardvalue_2(self): from pypy.rlib.objectmodel import instantiate @@ -1466,11 +1455,9 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_nonnull=2, guard_value=2, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=4, guard_value=4, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_value=4, guard_class=0, guard_nonnull=4, + guard_nonnull_class=0, guard_isnull=2) + def test_merge_guardnonnull_guardclass_guardvalue(self): from pypy.rlib.objectmodel import instantiate @@ -1500,11 +1487,9 @@ return x res = self.meta_interp(f, [399], listops=True) assert res == f(399) - self.check_loops(guard_class=0, guard_nonnull=3, guard_value=3, - guard_nonnull_class=0, guard_isnull=1) - self.check_loops(guard_class=0, guard_nonnull=6, guard_value=6, - guard_nonnull_class=0, guard_isnull=2, - everywhere=True) + self.check_resops(guard_class=0, guard_nonnull=6, guard_value=6, + guard_nonnull_class=0, guard_isnull=2) + def test_residual_call_doesnt_lose_info(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'l']) @@ -1530,8 +1515,7 @@ y.v = g(y.v) - y.v/y.v + lc/l[0] - 1 return y.v res = self.meta_interp(f, [20], listops=True) - self.check_loops(getfield_gc=0, getarrayitem_gc=0) - self.check_loops(getfield_gc=1, getarrayitem_gc=0, everywhere=True) + self.check_resops(getarrayitem_gc=0, getfield_gc=1) def test_guard_isnull_nonnull(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res']) @@ -1559,7 +1543,7 @@ return res res = self.meta_interp(f, [21]) assert res == 42 - self.check_loops(guard_nonnull=1, guard_isnull=1) + self.check_resops(guard_nonnull=2, guard_isnull=2) def test_loop_invariant1(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res']) @@ -1586,8 +1570,7 @@ return res res = self.meta_interp(g, [21]) assert res == 3 * 21 - self.check_loops(call=0) - self.check_loops(call=1, everywhere=True) + self.check_resops(call=1) def test_bug_optimizeopt_mutates_ops(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'res', 'const', 'a']) @@ -1707,7 +1690,7 @@ return x res = self.meta_interp(f, [8]) assert res == 0 - self.check_loops(jit_debug=2) + self.check_resops(jit_debug=4) def test_assert_green(self): def f(x, promote_flag): @@ -1749,9 +1732,10 @@ res = self.meta_interp(g, [6, 7]) assert res == 6*8 + 6**8 self.check_loop_count(5) - self.check_loops({'guard_true': 2, - 'int_add': 1, 'int_mul': 1, 'int_sub': 2, - 'int_gt': 2, 'jump': 2}) + self.check_resops({'guard_class': 2, 'int_gt': 4, + 'getfield_gc': 4, 'guard_true': 4, + 'int_sub': 4, 'jump': 4, 'int_mul': 2, + 'int_add': 2}) def test_multiple_specialied_versions_array(self): myjitdriver = JitDriver(greens = [], reds = ['idx', 'y', 'x', 'res', @@ -1792,7 +1776,7 @@ res = self.meta_interp(g, [6, 14]) assert res == g(6, 14) self.check_loop_count(9) - self.check_loops(getarrayitem_gc=8, everywhere=True) + self.check_resops(getarrayitem_gc=8) def test_multiple_specialied_versions_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x', 'z', 'res']) @@ -1980,8 +1964,8 @@ res = self.meta_interp(g, [3, 23]) assert res == 7068153 self.check_loop_count(7) - self.check_loops(guard_true=4, guard_class=0, int_add=2, int_mul=2, - guard_false=2) + self.check_resops(guard_true=6, guard_class=2, int_mul=3, + int_add=3, guard_false=3) def test_dont_trace_every_iteration(self): myjitdriver = JitDriver(greens = [], reds = ['a', 'b', 'i', 'sa']) @@ -2225,27 +2209,27 @@ return sa assert self.meta_interp(f1, [5, 5]) == 50 - self.check_loops(int_rshift=0, everywhere=True) + self.check_resops(int_rshift=0) for f in (f1, f2): assert self.meta_interp(f, [5, 6]) == 50 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [10, 5]) == 100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [10, 6]) == 100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [5, 31]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) bigval = 1 while (bigval << 3).__class__ is int: bigval = bigval << 1 assert self.meta_interp(f, [bigval, 5]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) def test_overflowing_shift_neg(self): myjitdriver = JitDriver(greens = [], reds = ['a', 'b', 'n', 'sa']) @@ -2270,27 +2254,27 @@ return sa assert self.meta_interp(f1, [-5, 5]) == -50 - self.check_loops(int_rshift=0, everywhere=True) + self.check_resops(int_rshift=0) for f in (f1, f2): assert self.meta_interp(f, [-5, 6]) == -50 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-10, 5]) == -100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-10, 6]) == -100 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) assert self.meta_interp(f, [-5, 31]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) bigval = 1 while (bigval << 3).__class__ is int: bigval = bigval << 1 assert self.meta_interp(f, [bigval, 5]) == 0 - self.check_loops(int_rshift=3, everywhere=True) + self.check_resops(int_rshift=3) def test_pure_op_not_to_be_propagated(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa']) @@ -2430,8 +2414,7 @@ if counter > 10: return 7 assert self.meta_interp(build, []) == 7 - self.check_loops(getfield_gc_pure=0) - self.check_loops(getfield_gc_pure=2, everywhere=True) + self.check_resops(getfield_gc_pure=2) def test_args_becomming_equal(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'b']) @@ -2564,7 +2547,7 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=2, int_ge=0, int_le=0) + self.check_resops(int_lt=4, int_le=0, int_ge=0, int_gt=2) def test_intbounds_not_generalized1(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa']) @@ -2581,7 +2564,8 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=3, int_ge=2, int_le=1) + self.check_resops(int_lt=6, int_le=2, int_ge=4, int_gt=3) + def test_intbounds_not_generalized2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'node']) @@ -2601,7 +2585,7 @@ i += 1 return sa assert self.meta_interp(f, [20]) == f(20) - self.check_loops(int_gt=1, int_lt=2, int_ge=1, int_le=1) + self.check_resops(int_lt=4, int_le=3, int_ge=3, int_gt=2) def test_retrace_limit1(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) @@ -2855,7 +2839,7 @@ return a[0].intvalue res = self.meta_interp(f, [100]) assert res == -2 - #self.check_loops(getarrayitem_gc=0, setarrayitem_gc=0) -- xxx? + self.check_resops(setarrayitem_gc=2, getarrayitem_gc=1) def test_retrace_ending_up_retracing_another_loop(self): @@ -2955,7 +2939,7 @@ i += 1 res = self.meta_interp(f, [32]) assert res == f(32) - self.check_loops(arraylen_gc=2) + self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) @@ -3142,9 +3126,9 @@ a = A(a.i + 1) self.meta_interp(f, []) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) self.meta_interp(f, [], enable_opts='') - self.check_loops(new_with_vtable=1) + self.check_resops(new_with_vtable=1) def test_two_loopinvariant_arrays1(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3236,7 +3220,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_loops(arraylen_gc=2, everywhere=True) + self.check_resops(arraylen_gc=2) def test_release_gil_flush_heap_cache(self): if sys.platform == "win32": @@ -3273,7 +3257,7 @@ lock.release() return n res = self.meta_interp(f, [10, 1]) - self.check_loops(getfield_gc=2) + self.check_resops(getfield_gc=4) assert res == f(10, 1) def test_jit_merge_point_with_raw_pointer(self): @@ -3337,10 +3321,10 @@ res = self.meta_interp(main, [0, 10, 2], enable_opts='') assert res == main(0, 10, 2) - self.check_loops(call=1) + self.check_resops(call=1) res = self.meta_interp(main, [1, 10, 2], enable_opts='') assert res == main(1, 10, 2) - self.check_loops(call=0) + self.check_resops(call=0) def test_look_inside_iff_virtual(self): # There's no good reason for this to be look_inside_iff, but it's a test! @@ -3365,10 +3349,10 @@ i += f(A(2), n) res = self.meta_interp(main, [0], enable_opts='') assert res == main(0) - self.check_loops(call=1, getfield_gc=0) + self.check_resops(call=1, getfield_gc=0) res = self.meta_interp(main, [1], enable_opts='') assert res == main(1) - self.check_loops(call=0, getfield_gc=0) + self.check_resops(call=0, getfield_gc=0) def test_reuse_elidable_result(self): driver = JitDriver(reds=['n', 's'], greens = []) @@ -3381,10 +3365,9 @@ return s res = self.meta_interp(main, [10]) assert res == main(10) - self.check_loops({ - 'call': 1, 'guard_no_exception': 1, 'guard_true': 1, 'int_add': 2, - 'int_gt': 1, 'int_sub': 1, 'strlen': 1, 'jump': 1, - }) + self.check_resops({'int_gt': 2, 'strlen': 2, 'guard_true': 2, + 'int_sub': 2, 'jump': 2, 'call': 2, + 'guard_no_exception': 2, 'int_add': 4}) def test_look_inside_iff_const_getarrayitem_gc_pure(self): driver = JitDriver(greens=['unroll'], reds=['s', 'n']) @@ -3416,10 +3399,10 @@ res = self.meta_interp(main, [0, 10]) assert res == main(0, 10) # 2 calls, one for f() and one for char_mul - self.check_loops(call=2) + self.check_resops(call=4) res = self.meta_interp(main, [1, 10]) assert res == main(1, 10) - self.check_loops(call=0) + self.check_resops(call=0) def test_setarrayitem_followed_by_arraycopy(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'x', 'y']) @@ -3520,7 +3503,8 @@ res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) def test_virtual_opaque_ptr(self): myjitdriver = JitDriver(greens = [], reds = ["n"]) @@ -3539,7 +3523,9 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) + def test_virtual_opaque_dict(self): myjitdriver = JitDriver(greens = [], reds = ["n"]) @@ -3559,7 +3545,10 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'int_gt': 2, 'getfield_gc': 1, 'int_eq': 1, + 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'guard_false': 1}) + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): f1 = lambda n: n+1 From noreply at buildbot.pypy.org Sat Nov 5 16:53:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 16:53:29 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge jit-refactor-tests Message-ID: <20111105155329.E9466820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48786:80ea8d142cf8 Date: 2011-11-05 16:43 +0100 http://bitbucket.org/pypy/pypy/changeset/80ea8d142cf8/ Log: hg merge jit-refactor-tests From noreply at buildbot.pypy.org Sat Nov 5 16:53:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 16:53:31 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111105155331.2A256820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48787:b04a65021a14 Date: 2011-11-05 16:53 +0100 http://bitbucket.org/pypy/pypy/changeset/b04a65021a14/ Log: fix test diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1029,6 +1029,7 @@ insns = loop.summary(adding_insns=insns) if expected is not None: insns.pop('debug_merge_point', None) + insns.pop('label', None) assert insns == expected for insn, expected_count in check.items(): getattr(rop, insn.upper()) # fails if 'rop.INSN' does not exist diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -66,7 +66,7 @@ res = self.interp_operations(f, [8, 98]) assert res == 110 - def test_loop(self): + def test_loop_1(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res']) def f(x, y): res = 0 @@ -79,7 +79,8 @@ res = self.meta_interp(f, [6, 7]) assert res == 42 self.check_loop_count(1) - self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, 'guard_true': 2, 'int_sub': 2}) + self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, + 'guard_true': 2, 'int_sub': 2}) if self.basic: found = 0 @@ -90,7 +91,7 @@ for box in liveboxes: assert isinstance(box, history.BoxInt) found += 1 - assert found == 1 + assert found == 2 def test_loop_variant_mul1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) From noreply at buildbot.pypy.org Sat Nov 5 17:25:18 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:18 +0100 (CET) Subject: [pypy-commit] pypy stm: getarrayitem in funcgen.py. Refactor test_funcgen to actually Message-ID: <20111105162518.0D44E820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48788:dc24a8407e9e Date: 2011-11-05 15:40 +0100 http://bitbucket.org/pypy/pypy/changeset/dc24a8407e9e/ Log: getarrayitem in funcgen.py. Refactor test_funcgen to actually test something :-( diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -598,6 +598,8 @@ return self.op_stm(op) OP_STM_GETFIELD = _OP_STM OP_STM_SETFIELD = _OP_STM + OP_STM_GETARRAYITEM = _OP_STM + OP_STM_SETARRAYITEM = _OP_STM OP_STM_BEGIN_TRANSACTION = _OP_STM OP_STM_COMMIT_TRANSACTION = _OP_STM OP_STM_BEGIN_INEVITABLE_TRANSACTION = _OP_STM diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -4,12 +4,7 @@ from pypy.translator.stm.rstm import size_of_voidp -def stm_getfield(funcgen, op): - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(op.args[0], Constant) - basename = funcgen.expr(op.args[0]) - fieldname = op.args[1].value +def _stm_generic_get(funcgen, op, expr): T = funcgen.lltypemap(op.result) fieldtypename = funcgen.db.gettype(T) cfieldtypename = cdecl(fieldtypename, '') @@ -29,19 +24,34 @@ funcname = 'stm_read_doubleword' else: raise NotImplementedError(fieldsize) - expr = structdef.ptr_access_expr(basename, - fieldname, - baseexpr_is_const) return '%s = (%s)%s((long*)&%s);' % ( newvalue, cfieldtypename, funcname, expr) else: - # assume that the object is aligned, and any possible misalignment - # comes from the field offset, so that it can be resolved at - # compile-time (by using C macros) - return '%s = STM_read_partial_word(%s, %s, offsetof(%s, %s));' % ( - newvalue, cfieldtypename, basename, - cdecl(funcgen.db.gettype(STRUCT), ''), - structdef.c_struct_field_name(fieldname)) + STRUCT = funcgen.lltypemap(op.args[0]).TO + if isinstance(STRUCT, lltype.Struct): + # assume that the object is aligned, and any possible misalignment + # comes from the field offset, so that it can be resolved at + # compile-time (by using C macros) + structdef = funcgen.db.gettypedefnode(STRUCT) + basename = funcgen.expr(op.args[0]) + fieldname = op.args[1].value + return '%s = STM_read_partial_word(%s, %s, offsetof(%s, %s));' % ( + newvalue, cfieldtypename, basename, + cdecl(funcgen.db.gettype(STRUCT), ''), + structdef.c_struct_field_name(fieldname)) + # + else: + return '%s = stm_read_partial_word(sizeof(%s), &%s);' % ( + newvalue, cfieldtypename, expr) + +def stm_getfield(funcgen, op): + STRUCT = funcgen.lltypemap(op.args[0]).TO + structdef = funcgen.db.gettypedefnode(STRUCT) + baseexpr_is_const = isinstance(op.args[0], Constant) + expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), + op.args[1].value, + baseexpr_is_const) + return _stm_generic_get(funcgen, op, expr) def stm_setfield(funcgen, op): STRUCT = funcgen.lltypemap(op.args[0]).TO @@ -53,6 +63,10 @@ fieldtypename = funcgen.db.gettype(T) newvalue = funcgen.expr(op.args[2], special_case_void=False) # + expr = structdef.ptr_access_expr(basename, + fieldname, + baseexpr_is_const) + # assert T is not lltype.Void # XXX fieldsize = rffi.sizeof(T) if fieldsize >= size_of_voidp or T == lltype.SingleFloat: @@ -71,18 +85,21 @@ newtype = 'long long' else: raise NotImplementedError(fieldsize) - expr = structdef.ptr_access_expr(basename, - fieldname, - baseexpr_is_const) return '%s((long*)&%s, (%s)%s);' % ( funcname, expr, newtype, newvalue) else: cfieldtypename = cdecl(fieldtypename, '') - return ('stm_write_partial_word(sizeof(%s), (char*)%s, ' - 'offsetof(%s, %s), (long)%s);' % ( - cfieldtypename, basename, - cdecl(funcgen.db.gettype(STRUCT), ''), - structdef.c_struct_field_name(fieldname), newvalue)) + return ('stm_write_partial_word(sizeof(%s), &%s, %s);' % ( + cfieldtypename, expr, newvalue)) + +def stm_getarrayitem(funcgen, op): + ARRAY = funcgen.lltypemap(op.args[0]).TO + ptr = funcgen.expr(op.args[0]) + index = funcgen.expr(op.args[1]) + arraydef = funcgen.db.gettypedefnode(ARRAY) + expr = arraydef.itemindex_access_expr(ptr, index) + return _stm_generic_get(funcgen, op, expr) + def stm_begin_transaction(funcgen, op): return 'STM_begin_transaction();' diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -869,11 +869,19 @@ } // XXX little-endian only! -void stm_write_partial_word(int fieldsize, char *base, long offset, - unsigned long nval) +unsigned long stm_read_partial_word(int fieldsize, char *addr) { - long *p = (long*)(base + (offset & ~(sizeof(void*)-1))); - int misalignment = offset & (sizeof(void*)-1); + int misalignment = ((long)addr) & (sizeof(void*)-1); + long *p = (long*)(addr - misalignment); + unsigned long word = stm_read_word(p); + return word >> (misalignment * 8); +} + +// XXX little-endian only! +void stm_write_partial_word(int fieldsize, char *addr, unsigned long nval) +{ + int misalignment = ((long)addr) & (sizeof(void*)-1); + long *p = (long*)(addr - misalignment); long val = nval << (misalignment * 8); long word = stm_read_word(p); long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8); diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -54,8 +54,8 @@ (long*)(((char*)(base)) + ((offset) & ~(sizeof(void*)-1)))) \ >> (8 * ((offset) & (sizeof(void*)-1)))) -void stm_write_partial_word(int fieldsize, char *base, long offset, - unsigned long nval); +unsigned long stm_read_partial_word(int fieldsize, char *addr); +void stm_write_partial_word(int fieldsize, char *addr, unsigned long nval); double stm_read_double(long *addr); void stm_write_double(long *addr, double val); diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -1,14 +1,14 @@ -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import r_longlong, r_singlefloat from pypy.translator.stm.test.test_transform import CompiledSTMTests from pypy.translator.stm import rstm -A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed), - ('c1', lltype.Char), ('c2', lltype.Char), - ('c3', lltype.Char), ('l', lltype.SignedLongLong), - ('f', lltype.Float), ('sa', lltype.SingleFloat), - ('sb', lltype.SingleFloat)) +A = lltype.GcStruct('A', ('x', lltype.Signed), ('y', lltype.Signed), + ('c1', lltype.Char), ('c2', lltype.Char), + ('c3', lltype.Char), ('l', lltype.SignedLongLong), + ('f', lltype.Float), ('sa', lltype.SingleFloat), + ('sb', lltype.SingleFloat)) rll1 = r_longlong(-10000000000003) rll2 = r_longlong(-300400500600700) rf1 = -12.38976129 @@ -19,7 +19,7 @@ rs2b = r_singlefloat(-9e9) def make_a_1(): - a = lltype.malloc(A, flavor='raw') + a = lltype.malloc(A, immortal=True) a.x = -611 a.c1 = '/' a.c2 = '\\' @@ -30,11 +30,10 @@ a.sa = rs1a a.sb = rs1b return a -make_a_1._dont_inline_ = True +a_prebuilt = make_a_1() def do_stm_getfield(argv): - a = make_a_1() - # + a = a_prebuilt assert a.x == -611 assert a.c1 == '/' assert a.c2 == '\\' @@ -44,12 +43,10 @@ assert a.f == rf1 assert float(a.sa) == float(rs1a) assert float(a.sb) == float(rs1b) - # - lltype.free(a, flavor='raw') return 0 def do_stm_setfield(argv): - a = make_a_1() + a = a_prebuilt # a.x = 12871981 a.c1 = '(' @@ -86,7 +83,28 @@ assert float(a.sa) == float(rs2a) assert float(a.sb) == float(rs2b) # - lltype.free(a, flavor='raw') + return 0 + + +def make_array(OF): + a = lltype.malloc(lltype.GcArray(OF), 5, immortal=True) + for i, value in enumerate([1, 10, -1, -10, 42]): + a[i] = rffi.cast(OF, value) + return a + +prebuilt_array_signed = make_array(lltype.Signed) +prebuilt_array_char = make_array(lltype.Char) + +def check(array, expected): + assert len(array) == len(expected) + for i in range(len(expected)): + assert array[i] == expected[i] +check._annspecialcase_ = 'specialize:ll' + +def do_stm_getarrayitem(argv): + check(prebuilt_array_signed, [1, 10, -1, -10, 42]) + check(prebuilt_array_char, [chr(1), chr(10), chr(255), + chr(246), chr(42)]) return 0 @@ -99,3 +117,7 @@ def test_setfield_all_sizes(self): t, cbuilder = self.compile(do_stm_setfield) cbuilder.cmdexec('') + + def test_getarrayitem_all_sizes(self): + t, cbuilder = self.compile(do_stm_getarrayitem) + cbuilder.cmdexec('') diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -56,6 +56,18 @@ res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") assert res == 42 +def test_getarraysize(): + A = lltype.GcArray(lltype.Signed) + p = lltype.malloc(A, 100, immortal=True) + p[42] = 666 + def func(p): + return len(p) + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'getarraysize': 1} + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + assert res == 100 + def test_getarrayitem(): A = lltype.GcArray(lltype.Signed) p = lltype.malloc(A, 100, immortal=True) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -129,7 +129,7 @@ op1 = SpaceOperation('stm_setfield', op.args, op.result) newoperations.append(op1) - def FINISHME_stt_getarrayitem(self, newoperations, op): + def stt_getarrayitem(self, newoperations, op): ARRAY = op.args[0].concretetype.TO if ARRAY._immutable_field(): op1 = op From noreply at buildbot.pypy.org Sat Nov 5 17:25:19 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:19 +0100 (CET) Subject: [pypy-commit] pypy stm: setarrayitem Message-ID: <20111105162519.3E156820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48789:fb549313e992 Date: 2011-11-05 16:09 +0100 http://bitbucket.org/pypy/pypy/changeset/fb549313e992/ Log: setarrayitem diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -6,8 +6,8 @@ def _stm_generic_get(funcgen, op, expr): T = funcgen.lltypemap(op.result) - fieldtypename = funcgen.db.gettype(T) - cfieldtypename = cdecl(fieldtypename, '') + resulttypename = funcgen.db.gettype(T) + cresulttypename = cdecl(resulttypename, '') newvalue = funcgen.expr(op.result, special_case_void=False) # assert T is not lltype.Void # XXX @@ -25,7 +25,7 @@ else: raise NotImplementedError(fieldsize) return '%s = (%s)%s((long*)&%s);' % ( - newvalue, cfieldtypename, funcname, expr) + newvalue, cresulttypename, funcname, expr) else: STRUCT = funcgen.lltypemap(op.args[0]).TO if isinstance(STRUCT, lltype.Struct): @@ -36,37 +36,18 @@ basename = funcgen.expr(op.args[0]) fieldname = op.args[1].value return '%s = STM_read_partial_word(%s, %s, offsetof(%s, %s));' % ( - newvalue, cfieldtypename, basename, + newvalue, cresulttypename, basename, cdecl(funcgen.db.gettype(STRUCT), ''), structdef.c_struct_field_name(fieldname)) # else: return '%s = stm_read_partial_word(sizeof(%s), &%s);' % ( - newvalue, cfieldtypename, expr) + newvalue, cresulttypename, expr) -def stm_getfield(funcgen, op): - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(op.args[0], Constant) - expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), - op.args[1].value, - baseexpr_is_const) - return _stm_generic_get(funcgen, op, expr) - -def stm_setfield(funcgen, op): - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(op.args[0], Constant) +def _stm_generic_set(funcgen, op, targetexpr, T): basename = funcgen.expr(op.args[0]) - fieldname = op.args[1].value - T = funcgen.lltypemap(op.args[2]) - fieldtypename = funcgen.db.gettype(T) newvalue = funcgen.expr(op.args[2], special_case_void=False) # - expr = structdef.ptr_access_expr(basename, - fieldname, - baseexpr_is_const) - # assert T is not lltype.Void # XXX fieldsize = rffi.sizeof(T) if fieldsize >= size_of_voidp or T == lltype.SingleFloat: @@ -86,11 +67,32 @@ else: raise NotImplementedError(fieldsize) return '%s((long*)&%s, (%s)%s);' % ( - funcname, expr, newtype, newvalue) + funcname, targetexpr, newtype, newvalue) else: - cfieldtypename = cdecl(fieldtypename, '') + itemtypename = funcgen.db.gettype(T) + citemtypename = cdecl(itemtypename, '') return ('stm_write_partial_word(sizeof(%s), &%s, %s);' % ( - cfieldtypename, expr, newvalue)) + citemtypename, targetexpr, newvalue)) + + +def stm_getfield(funcgen, op): + STRUCT = funcgen.lltypemap(op.args[0]).TO + structdef = funcgen.db.gettypedefnode(STRUCT) + baseexpr_is_const = isinstance(op.args[0], Constant) + expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), + op.args[1].value, + baseexpr_is_const) + return _stm_generic_get(funcgen, op, expr) + +def stm_setfield(funcgen, op): + STRUCT = funcgen.lltypemap(op.args[0]).TO + structdef = funcgen.db.gettypedefnode(STRUCT) + baseexpr_is_const = isinstance(op.args[0], Constant) + expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), + op.args[1].value, + baseexpr_is_const) + T = op.args[2].concretetype + return _stm_generic_set(funcgen, op, expr, T) def stm_getarrayitem(funcgen, op): ARRAY = funcgen.lltypemap(op.args[0]).TO @@ -100,6 +102,15 @@ expr = arraydef.itemindex_access_expr(ptr, index) return _stm_generic_get(funcgen, op, expr) +def stm_setarrayitem(funcgen, op): + ARRAY = funcgen.lltypemap(op.args[0]).TO + ptr = funcgen.expr(op.args[0]) + index = funcgen.expr(op.args[1]) + arraydef = funcgen.db.gettypedefnode(ARRAY) + expr = arraydef.itemindex_access_expr(ptr, index) + T = op.args[2].concretetype + return _stm_generic_set(funcgen, op, expr, T) + def stm_begin_transaction(funcgen, op): return 'STM_begin_transaction();' diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -101,12 +101,25 @@ assert array[i] == expected[i] check._annspecialcase_ = 'specialize:ll' +def change(array, newvalues): + assert len(newvalues) <= len(array) + for i in range(len(newvalues)): + array[i] = rffi.cast(lltype.typeOf(array).TO.OF, newvalues[i]) +change._annspecialcase_ = 'specialize:ll' + def do_stm_getarrayitem(argv): check(prebuilt_array_signed, [1, 10, -1, -10, 42]) check(prebuilt_array_char, [chr(1), chr(10), chr(255), chr(246), chr(42)]) return 0 +def do_stm_setarrayitem(argv): + change(prebuilt_array_signed, [500000, -10000000, 3]) + check(prebuilt_array_signed, [500000, -10000000, 3, -10, 42]) + change(prebuilt_array_char, ['A', 'B', 'C']) + check(prebuilt_array_char, ['A', 'B', 'C', chr(246), chr(42)]) + return 0 + class TestFuncGen(CompiledSTMTests): @@ -121,3 +134,7 @@ def test_getarrayitem_all_sizes(self): t, cbuilder = self.compile(do_stm_getarrayitem) cbuilder.cmdexec('') + + def test_setarrayitem_all_sizes(self): + t, cbuilder = self.compile(do_stm_setarrayitem) + cbuilder.cmdexec('') diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -140,7 +140,7 @@ op1 = SpaceOperation('stm_getarrayitem', op.args, op.result) newoperations.append(op1) - def FINISHME_stt_setarrayitem(self, newoperations, op): + def stt_setarrayitem(self, newoperations, op): ARRAY = op.args[0].concretetype.TO if ARRAY._immutable_field(): op1 = op From noreply at buildbot.pypy.org Sat Nov 5 17:25:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:20 +0100 (CET) Subject: [pypy-commit] pypy stm: More tests, fixes. Message-ID: <20111105162520.705AC820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48790:cf31b07133b5 Date: 2011-11-05 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/cf31b07133b5/ Log: More tests, fixes. diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -95,7 +95,7 @@ assert 0 def opstm_getarrayitem(self, array, index): - ARRAY = lltype.typeOf(struct).TO + ARRAY = lltype.typeOf(array).TO if ARRAY._immutable_field(): # immutable item reads are always allowed return LLFrame.op_getarrayitem(self, array, index) diff --git a/pypy/translator/stm/rstm.py b/pypy/translator/stm/rstm.py --- a/pypy/translator/stm/rstm.py +++ b/pypy/translator/stm/rstm.py @@ -79,6 +79,10 @@ #print 'getting %x, mask=%x, replacing with %x' % (word, mask, val) _rffi_stm.stm_write_word(p, val) +def stm_getarrayitem(arrayptr, index): + "NOT_RPYTHON" + raise NotImplementedError("sorry") + def begin_transaction(): "NOT_RPYTHON. For tests only" raise NotImplementedError("hard to really emulate") @@ -131,6 +135,21 @@ class ExtEntry(ExtRegistryEntry): + _about_ = stm_getarrayitem + + def compute_result_annotation(self, s_arrayptr, s_index): + from pypy.tool.pairtype import pair + return pair(s_arrayptr, s_index).getitem() + + def specialize_call(self, hop): + r_arrayptr = hop.args_r[0] + v_arrayptr, v_index = hop.inputargs(r_arrayptr, lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('stm_getarrayitem', [v_arrayptr, v_index], + resulttype = hop.r_result) + + +class ExtEntry(ExtRegistryEntry): _about_ = (begin_transaction, commit_transaction, begin_inevitable_transaction, transaction_boundary) diff --git a/pypy/translator/stm/test/test_llstminterp.py b/pypy/translator/stm/test/test_llstminterp.py --- a/pypy/translator/stm/test/test_llstminterp.py +++ b/pypy/translator/stm/test/test_llstminterp.py @@ -46,6 +46,24 @@ res = eval_stm_graph(interp, graph, [p], stm_mode="inevitable_transaction") assert res == 42 +def test_stm_getarrayitem(): + A = lltype.GcArray(lltype.Signed) + p = lltype.malloc(A, 5, immortal=True) + p[3] = 42 + def func(p): + return rstm.stm_getarrayitem(p, 3) + interp, graph = get_interpreter(func, [p]) + # forbidden in "not_in_transaction" mode + py.test.raises(ForbiddenInstructionInSTMMode, + eval_stm_graph, interp, graph, [p], + stm_mode="not_in_transaction") + # works in "regular_transaction" mode + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + assert res == 42 + # works in "inevitable_transaction" mode + res = eval_stm_graph(interp, graph, [p], stm_mode="inevitable_transaction") + assert res == 42 + def test_getfield_immutable(): S = lltype.GcStruct('S', ('x', lltype.Signed), hints = {'immutable': True}) p = lltype.malloc(S, immortal=True) diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -141,6 +141,19 @@ eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", final_stm_mode="inevitable_transaction") +def test_unsupported_getarrayitem_raw(): + A = lltype.Array(lltype.Signed) + p = lltype.malloc(A, 5, immortal=True) + p[3] = 42 + def func(p): + return p[3] + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'stm_try_inevitable': 1, 'getarrayitem': 1} + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", + final_stm_mode="inevitable_transaction") + assert res == 42 + # ____________________________________________________________ class CompiledSTMTests(StandaloneTests): From noreply at buildbot.pypy.org Sat Nov 5 17:25:21 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:21 +0100 (CET) Subject: [pypy-commit] pypy stm: Extend the test. Message-ID: <20111105162521.9D8FA820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48791:3da50029cdf7 Date: 2011-11-05 17:06 +0100 http://bitbucket.org/pypy/pypy/changeset/3da50029cdf7/ Log: Extend the test. diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -116,8 +116,24 @@ def do_stm_setarrayitem(argv): change(prebuilt_array_signed, [500000, -10000000, 3]) check(prebuilt_array_signed, [500000, -10000000, 3, -10, 42]) - change(prebuilt_array_char, ['A', 'B', 'C']) - check(prebuilt_array_char, ['A', 'B', 'C', chr(246), chr(42)]) + prebuilt_array_char[0] = 'A' + check(prebuilt_array_char, ['A', chr(10), chr(255), chr(246), chr(42)]) + prebuilt_array_char[3] = 'B' + check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', chr(42)]) + prebuilt_array_char[4] = 'C' + check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) + # + rstm.transaction_boundary() + # + check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) + prebuilt_array_char[1] = 'D' + check(prebuilt_array_char, ['A', 'D', chr(255), 'B', 'C']) + prebuilt_array_char[2] = 'E' + check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) + # + rstm.transaction_boundary() + # + check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) return 0 From noreply at buildbot.pypy.org Sat Nov 5 17:25:22 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:22 +0100 (CET) Subject: [pypy-commit] pypy stm: Hack. It may stay around if no solution to the problem is found, Message-ID: <20111105162522.CD90C820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48792:c456e6bccc2a Date: 2011-11-05 17:24 +0100 http://bitbucket.org/pypy/pypy/changeset/c456e6bccc2a/ Log: Hack. It may stay around if no solution to the problem is found, but of course only if stm is enabled. diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -628,7 +628,11 @@ assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): - return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) + #return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) + # XXX hack: in case of STM, we cannot pass a pointer inside a + # GcStruct or GcArray to the C world, because some of the + # content may still live in the STM buffers + return hop.cast_result(rmodel.inputconst(lltype.Bool, True)) def gct_shrink_array(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) From noreply at buildbot.pypy.org Sat Nov 5 17:25:24 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 17:25:24 +0100 (CET) Subject: [pypy-commit] pypy stm: Added a file in that branch too to memorize things to do. Message-ID: <20111105162524.06395820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48793:3bb22324bde9 Date: 2011-11-05 17:24 +0100 http://bitbucket.org/pypy/pypy/changeset/3bb22324bde9/ Log: Added a file in that branch too to memorize things to do. diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/discussion/stm_todo.txt @@ -0,0 +1,7 @@ +Things I changed (or hacked) outside the pypy/translator/stm +directory and that need to be cleaned up before the 'stm' +branch can be merged:: + + 2869bd44f830 Make the exc_data structure a thread-local. + + c456e6bccc2a gc_can_move returns always True. From noreply at buildbot.pypy.org Sat Nov 5 18:03:08 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 18:03:08 +0100 (CET) Subject: [pypy-commit] pypy stm: interiorfield operations. Message-ID: <20111105170308.DC637820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48794:8c30442dc679 Date: 2011-11-05 17:58 +0100 http://bitbucket.org/pypy/pypy/changeset/8c30442dc679/ Log: interiorfield operations. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -399,6 +399,8 @@ 'stm_setfield': LLOp(), 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), 'stm_setarrayitem': LLOp(), + 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), + 'stm_setinteriorfield': LLOp(), 'stm_begin_transaction': LLOp(), 'stm_commit_transaction': LLOp(), diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -600,6 +600,8 @@ OP_STM_SETFIELD = _OP_STM OP_STM_GETARRAYITEM = _OP_STM OP_STM_SETARRAYITEM = _OP_STM + OP_STM_GETINTERIORFIELD = _OP_STM + OP_STM_SETINTERIORFIELD = _OP_STM OP_STM_BEGIN_TRANSACTION = _OP_STM OP_STM_COMMIT_TRANSACTION = _OP_STM OP_STM_BEGIN_INEVITABLE_TRANSACTION = _OP_STM diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -46,7 +46,7 @@ def _stm_generic_set(funcgen, op, targetexpr, T): basename = funcgen.expr(op.args[0]) - newvalue = funcgen.expr(op.args[2], special_case_void=False) + newvalue = funcgen.expr(op.args[-1], special_case_void=False) # assert T is not lltype.Void # XXX fieldsize = rffi.sizeof(T) @@ -75,42 +75,48 @@ citemtypename, targetexpr, newvalue)) +def field_expr(funcgen, args): + STRUCT = funcgen.lltypemap(args[0]).TO + structdef = funcgen.db.gettypedefnode(STRUCT) + baseexpr_is_const = isinstance(args[0], Constant) + return structdef.ptr_access_expr(funcgen.expr(args[0]), + args[1].value, + baseexpr_is_const) + def stm_getfield(funcgen, op): - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(op.args[0], Constant) - expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), - op.args[1].value, - baseexpr_is_const) + expr = field_expr(funcgen, op.args) return _stm_generic_get(funcgen, op, expr) def stm_setfield(funcgen, op): - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(op.args[0], Constant) - expr = structdef.ptr_access_expr(funcgen.expr(op.args[0]), - op.args[1].value, - baseexpr_is_const) + expr = field_expr(funcgen, op.args) T = op.args[2].concretetype return _stm_generic_set(funcgen, op, expr, T) +def array_expr(funcgen, args): + ARRAY = funcgen.lltypemap(args[0]).TO + ptr = funcgen.expr(args[0]) + index = funcgen.expr(args[1]) + arraydef = funcgen.db.gettypedefnode(ARRAY) + return arraydef.itemindex_access_expr(ptr, index) + def stm_getarrayitem(funcgen, op): - ARRAY = funcgen.lltypemap(op.args[0]).TO - ptr = funcgen.expr(op.args[0]) - index = funcgen.expr(op.args[1]) - arraydef = funcgen.db.gettypedefnode(ARRAY) - expr = arraydef.itemindex_access_expr(ptr, index) + expr = array_expr(funcgen, op.args) return _stm_generic_get(funcgen, op, expr) def stm_setarrayitem(funcgen, op): - ARRAY = funcgen.lltypemap(op.args[0]).TO - ptr = funcgen.expr(op.args[0]) - index = funcgen.expr(op.args[1]) - arraydef = funcgen.db.gettypedefnode(ARRAY) - expr = arraydef.itemindex_access_expr(ptr, index) + expr = array_expr(funcgen, op.args) T = op.args[2].concretetype return _stm_generic_set(funcgen, op, expr, T) +def stm_getinteriorfield(funcgen, op): + expr = funcgen.interior_expr(op.args) + return _stm_generic_get(funcgen, op, expr) + +def stm_setinteriorfield(funcgen, op): + expr = funcgen.interior_expr(op.args[:-1]) + T = op.args[-1].concretetype + return _stm_generic_set(funcgen, op, expr, T) + def stm_begin_transaction(funcgen, op): return 'STM_begin_transaction();' diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -156,6 +156,14 @@ self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setarrayitem(self, array, index, value) + def opstm_stm_getinteriorfield(self, obj, *offsets): + self.check_stm_mode(lambda m: m != "not_in_transaction") + return LLFrame.op_getinteriorfield(self, obj, *offsets) + + def opstm_stm_setinteriorfield(self, obj, *fieldnamesval): + self.check_stm_mode(lambda m: m != "not_in_transaction") + LLFrame.op_setinteriorfield(self, obj, *fieldnamesval) + def opstm_stm_begin_transaction(self): self.check_stm_mode(lambda m: m == "not_in_transaction") self.llinterpreter.stm_mode = "regular_transaction" diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -137,6 +137,61 @@ return 0 +def make_array_of_structs(T1, T2): + S = lltype.Struct('S', ('x', T1), ('y', T2)) + a = lltype.malloc(lltype.GcArray(S), 3, immortal=True) + for i, (value1, value2) in enumerate([(1, 10), (-1, 20), (-50, -30)]): + a[i].x = rffi.cast(T1, value1) + a[i].y = rffi.cast(T2, value2) + return a + +prebuilt_array_signed_signed = make_array_of_structs(lltype.Signed, + lltype.Signed) +prebuilt_array_char_char = make_array_of_structs(lltype.Char, + lltype.Char) + +def check2(array, expected1, expected2): + assert len(array) == len(expected1) == len(expected2) + for i in range(len(expected1)): + assert array[i].x == expected1[i] + assert array[i].y == expected2[i] +check2._annspecialcase_ = 'specialize:ll' + +def change2(array, newvalues1, newvalues2): + assert len(newvalues1) <= len(array) + assert len(newvalues2) <= len(array) + for i in range(len(newvalues1)): + array[i].x = rffi.cast(lltype.typeOf(array).TO.OF.x, newvalues1[i]) + for i in range(len(newvalues2)): + array[i].y = rffi.cast(lltype.typeOf(array).TO.OF.y, newvalues2[i]) +change2._annspecialcase_ = 'specialize:ll' + +def do_stm_getinteriorfield(argv): + check2(prebuilt_array_signed_signed, [1, -1, -50], [10, 20, -30]) + check2(prebuilt_array_char_char, [chr(1), chr(255), chr(206)], + [chr(10), chr(20), chr(226)]) + return 0 + +def do_stm_setinteriorfield(argv): + change2(prebuilt_array_signed_signed, [500000, -10000000], [102101202]) + check2(prebuilt_array_signed_signed, [500000, -10000000, -50], + [102101202, 20, -30]) + change2(prebuilt_array_char_char, ['a'], ['b']) + check2(prebuilt_array_char_char, ['a', chr(255), chr(206)], + ['b', chr(20), chr(226)]) + # + rstm.transaction_boundary() + # + check2(prebuilt_array_signed_signed, [500000, -10000000, -50], + [102101202, 20, -30]) + check2(prebuilt_array_char_char, ['a', chr(255), chr(206)], + ['b', chr(20), chr(226)]) + return 0 + + +# ____________________________________________________________ + + class TestFuncGen(CompiledSTMTests): def test_getfield_all_sizes(self): @@ -154,3 +209,11 @@ def test_setarrayitem_all_sizes(self): t, cbuilder = self.compile(do_stm_setarrayitem) cbuilder.cmdexec('') + + def test_getinteriorfield_all_sizes(self): + t, cbuilder = self.compile(do_stm_getinteriorfield) + cbuilder.cmdexec('') + + def test_setinteriorfield_all_sizes(self): + t, cbuilder = self.compile(do_stm_setinteriorfield) + cbuilder.cmdexec('') diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem import lltype, llmemory, rstr from pypy.rpython.test.test_llinterp import get_interpreter from pypy.objspace.flow.model import summary from pypy.translator.stm.llstminterp import eval_stm_graph @@ -91,6 +91,26 @@ assert summary(graph) == {'stm_setarrayitem': 1} eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") +def test_getinteriorfield(): + p = lltype.malloc(rstr.STR, 100, immortal=True) + p.chars[42] = 'X' + def func(p): + return p.chars[42] + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'stm_getinteriorfield': 1} + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + assert res == 'X' + +def test_setinteriorfield(): + p = lltype.malloc(rstr.STR, 100, immortal=True) + def func(p): + p.chars[42] = 'Y' + interp, graph = get_interpreter(func, [p]) + transform_graph(graph) + assert summary(graph) == {'stm_setinteriorfield': 1} + res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") + def test_unsupported_operation(): def func(n): n += 1 diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -151,6 +151,28 @@ op1 = SpaceOperation('stm_setarrayitem', op.args, op.result) newoperations.append(op1) + def stt_getinteriorfield(self, newoperations, op): + OUTER = op.args[0].concretetype.TO + if OUTER._hints.get('immutable'): + op1 = op + elif OUTER._gckind == 'raw': + turn_inevitable(newoperations, "getinteriorfield-raw") + op1 = op + else: + op1 = SpaceOperation('stm_getinteriorfield', op.args, op.result) + newoperations.append(op1) + + def stt_setinteriorfield(self, newoperations, op): + OUTER = op.args[0].concretetype.TO + if OUTER._hints.get('immutable'): + op1 = op + elif OUTER._gckind == 'raw': + turn_inevitable(newoperations, "setinteriorfield-raw") + op1 = op + else: + op1 = SpaceOperation('stm_setinteriorfield', op.args, op.result) + newoperations.append(op1) + def stt_stm_transaction_boundary(self, newoperations, op): self.seen_transaction_boundary = True v_result = op.result From noreply at buildbot.pypy.org Sat Nov 5 18:03:10 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 18:03:10 +0100 (CET) Subject: [pypy-commit] pypy stm: fix. Message-ID: <20111105170310.15B4B820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48795:caad8bf3d4ae Date: 2011-11-05 18:01 +0100 http://bitbucket.org/pypy/pypy/changeset/caad8bf3d4ae/ Log: fix. diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -4,7 +4,7 @@ from pypy.translator.stm.rstm import size_of_voidp -def _stm_generic_get(funcgen, op, expr): +def _stm_generic_get(funcgen, op, expr, simple_struct=False): T = funcgen.lltypemap(op.result) resulttypename = funcgen.db.gettype(T) cresulttypename = cdecl(resulttypename, '') @@ -27,11 +27,11 @@ return '%s = (%s)%s((long*)&%s);' % ( newvalue, cresulttypename, funcname, expr) else: - STRUCT = funcgen.lltypemap(op.args[0]).TO - if isinstance(STRUCT, lltype.Struct): + if simple_struct: # assume that the object is aligned, and any possible misalignment # comes from the field offset, so that it can be resolved at # compile-time (by using C macros) + STRUCT = funcgen.lltypemap(op.args[0]).TO structdef = funcgen.db.gettypedefnode(STRUCT) basename = funcgen.expr(op.args[0]) fieldname = op.args[1].value @@ -85,7 +85,7 @@ def stm_getfield(funcgen, op): expr = field_expr(funcgen, op.args) - return _stm_generic_get(funcgen, op, expr) + return _stm_generic_get(funcgen, op, expr, simple_struct=True) def stm_setfield(funcgen, op): expr = field_expr(funcgen, op.args) From noreply at buildbot.pypy.org Sat Nov 5 18:06:23 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 18:06:23 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: ignore for now Message-ID: <20111105170623.D8404820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48796:4c062b1c20c2 Date: 2011-11-05 16:58 +0100 http://bitbucket.org/pypy/pypy/changeset/4c062b1c20c2/ Log: ignore for now diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -168,20 +168,28 @@ This counts as 1 every bridge in addition to every loop; and it does not count at all the entry bridges from interpreter, although they are TreeLoops as well.""" + return # FIXME assert get_stats().compiled_count == count def check_tree_loop_count(self, count): + return # FIXME assert len(get_stats().loops) == count def check_loop_count_at_most(self, count): + return # FIXME assert get_stats().compiled_count <= count def check_enter_count(self, count): + return # FIXME assert get_stats().enter_count == count def check_enter_count_at_most(self, count): + return # FIXME assert get_stats().enter_count <= count def check_jumps(self, maxcount): + return # FIXME assert get_stats().exec_jumps <= maxcount def check_aborted_count(self, count): + return # FIXME assert get_stats().aborted_count == count def check_aborted_count_at_least(self, count): + return # FIXME assert get_stats().aborted_count >= count def meta_interp(self, *args, **kwds): From noreply at buildbot.pypy.org Sat Nov 5 18:06:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 Nov 2011 18:06:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: test_ajit.test_basic now passing Message-ID: <20111105170625.1BEFB820B3@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48797:faab93fbbec4 Date: 2011-11-05 18:06 +0100 http://bitbucket.org/pypy/pypy/changeset/faab93fbbec4/ Log: test_ajit.test_basic now passing diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -596,6 +596,7 @@ class ResumeFromInterpDescr(ResumeDescr): def __init__(self, original_greenkey): self.original_greenkey = original_greenkey + self.procedure_token = ProcedureToken() def compile_and_attach(self, metainterp, new_loop): # We managed to create a bridge going from the interpreter @@ -605,34 +606,23 @@ metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs - # We make a new LoopToken for this entry bridge, and stick it - # to every guard in the loop. - new_loop_token = make_loop_token(len(redargs), jitdriver_sd) - new_loop.token = new_loop_token + self.procedure_token.outermost_jitdriver_sd = jitdriver_sd + new_loop.token = self.procedure_token send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time - jitdriver_sd.warmstate.attach_unoptimized_bridge_from_interp( - self.original_greenkey, - new_loop_token) - # store the new loop in compiled_merge_points_wref too - old_loop_tokens = metainterp.get_compiled_merge_points( - self.original_greenkey) - # it always goes at the end of the list, as it is the most - # general loop token - old_loop_tokens.append(new_loop_token) - metainterp.set_compiled_merge_points(self.original_greenkey, - old_loop_tokens) + jitdriver_sd.warmstate.attach_procedure_to_interp( + self.original_greenkey, self.procedure_token) def reset_counter_from_failure(self): pass -def compile_new_bridge(metainterp, old_loop_tokens, resumekey, retraced=False): +def compile_new_bridge(metainterp, resumekey, retraced=False): """Try to compile a new bridge leading from the beginning of the history to some existing place. """ - from pypy.jit.metainterp.optimize import optimize_bridge + from pypy.jit.metainterp.optimizeopt import optimize_trace # The history contains new operations to attach as the code for the # failure of 'resumekey.guard_op'. @@ -640,9 +630,11 @@ # Attempt to use optimize_bridge(). This may return None in case # it does not work -- i.e. none of the existing old_loop_tokens match. new_loop = create_empty_loop(metainterp) - new_loop.inputargs = metainterp.history.inputargs[:] + new_loop.inputargs = inputargs = metainterp.history.inputargs[:] # clone ops, as optimize_bridge can mutate the ops - new_loop.operations = [op.clone() for op in metainterp.history.operations] + procedure_token = resumekey.procedure_token + new_loop.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ + [op.clone() for op in metainterp.history.operations] metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate if isinstance(resumekey, ResumeAtPositionDescr): @@ -650,38 +642,18 @@ else: inline_short_preamble = True try: - target_loop_token = optimize_bridge(metainterp_sd, old_loop_tokens, - new_loop, state.enable_opts, - inline_short_preamble, retraced) + optimize_trace(metainterp_sd, new_loop, state.enable_opts) except InvalidLoop: debug_print("compile_new_bridge: got an InvalidLoop") # XXX I am fairly convinced that optimize_bridge cannot actually raise # InvalidLoop debug_print('InvalidLoop in compile_new_bridge') return None - # Did it work? - if target_loop_token is not None: - # Yes, we managed to create a bridge. Dispatch to resumekey to - # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) - prepare_last_operation(new_loop, target_loop_token) - resumekey.compile_and_attach(metainterp, new_loop) - record_loop_or_bridge(metainterp_sd, new_loop) - return target_loop_token - -def prepare_last_operation(new_loop, target_loop_token): - op = new_loop.operations[-1] - if not isinstance(target_loop_token, TerminatingLoopToken): - # normal case - #op.setdescr(target_loop_token) # patch the jump target - pass - else: - # The target_loop_token is a pseudo loop token, - # e.g. loop_tokens_done_with_this_frame_void[0] - # Replace the operation with the real operation we want, i.e. a FINISH - descr = target_loop_token.finishdescr - args = op.getarglist() - new_op = ResOperation(rop.FINISH, args, None, descr=descr) - new_loop.operations[-1] = new_op + # We managed to create a bridge. Dispatch to resumekey to + # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) + resumekey.compile_and_attach(metainterp, new_loop) + record_loop_or_bridge(metainterp_sd, new_loop) + return new_loop.operations[-1].getdescr() # ____________________________________________________________ diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -74,15 +74,18 @@ self.import_state(start_targetop) lastop = loop.operations[-1] - assert lastop.getopnum() == rop.LABEL - loop.operations = loop.operations[:-1] - #if lastop.getopnum() == rop.LABEL or lastop.getopnum() == rop.JUMP: - # loop.operations = loop.operations[:-1] - #FIXME: FINISH + if lastop.getopnum() == rop.LABEL: + loop.operations = loop.operations[:-1] + else: + lastop = None self.optimizer.propagate_all_forward(clear=False) + + if not lastop: + self.optimizer.flush() + loop.operations = self.optimizer.get_newoperations() + return - #if lastop.getopnum() == rop.LABEL: if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() @@ -97,8 +100,6 @@ self.close_loop(jumpop) self.finilize_short_preamble(lastop) start_targetop.getdescr().short_preamble = self.short - #else: - # loop.operations = self.optimizer.get_newoperations() def export_state(self, targetop): original_jump_args = targetop.getarglist() diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2118,10 +2118,10 @@ loop_tokens = sd.loop_tokens_done_with_this_frame_float else: assert False - self.history.record(rop.JUMP, exits, None) - target_loop_token = compile.compile_new_bridge(self, loop_tokens, - self.resumekey) - if target_loop_token is not loop_tokens[0]: + # FIXME: kill TerminatingLoopToken? + self.history.record(rop.FINISH, exits, None, descr=loop_tokens[0].finishdescr) + target_loop_token = compile.compile_new_bridge(self, self.resumekey) + if not target_loop_token: compile.giveup() def compile_exit_frame_with_exception(self, valuebox): diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -16,15 +16,16 @@ from pypy.jit.codewriter import support class FakeJitCell(object): - __compiled_merge_points = [] - def get_compiled_merge_points(self): - return self.__compiled_merge_points[:] - def set_compiled_merge_points(self, lst): - self.__compiled_merge_points = lst + __product_token = None + def get_procedure_token(self): + return self.__product_token + def set_procedure_token(self, token): + self.__product_token = token class FakeWarmRunnerState(object): - def attach_unoptimized_bridge_from_interp(self, greenkey, newloop): - pass + def attach_procedure_to_interp(self, greenkey, procedure_token): + cell = self.jit_cell_at_key(greenkey) + cell.set_procedure_token(procedure_token) def helper_func(self, FUNCPTR, func): from pypy.rpython.annlowlevel import llhelper @@ -132,16 +133,14 @@ def _run_with_machine_code(testself, args): metainterp = testself.metainterp num_green_args = metainterp.jitdriver_sd.num_green_args - loop_tokens = metainterp.get_compiled_merge_points(args[:num_green_args]) - if len(loop_tokens) != 1: - return NotImplemented + procedure_token = metainterp.get_procedure_token(args[:num_green_args]) # a loop was successfully created by _run_with_pyjitpl(); call it cpu = metainterp.cpu for i in range(len(args) - num_green_args): x = args[num_green_args + i] typecode = history.getkind(lltype.typeOf(x)) set_future_value(cpu, i, x, typecode) - faildescr = cpu.execute_token(loop_tokens[0]) + faildescr = cpu.execute_token(procedure_token) assert faildescr.__class__.__name__.startswith('DoneWithThisFrameDescr') if metainterp.jitdriver_sd.result_type == history.INT: return cpu.get_latest_value_int(0) From noreply at buildbot.pypy.org Sat Nov 5 18:25:07 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 5 Nov 2011 18:25:07 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: coming closer, 49 tests passed, 16 failed in test_typed.py Message-ID: <20111105172507.A81A7820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48798:2f25d74f9648 Date: 2011-11-05 18:23 +0100 http://bitbucket.org/pypy/pypy/changeset/2f25d74f9648/ Log: coming closer, 49 tests passed, 16 failed in test_typed.py Failing: test_memoryerror test_unichr test_UNICHR test_list_indexerror test_long_long test_int_overflow test_int_floordiv_ovf_zer test_int_mul_ovf test_int_mod_ovf_zer test_int_unary_ovf test_float2str test_uint_arith test_hash_preservation test_range_iter test_float test_ovfcheck_float_to_int diff --git a/pypy/doc/discussion/win64_todo.txt b/pypy/doc/discussion/win64_todo.txt --- a/pypy/doc/discussion/win64_todo.txt +++ b/pypy/doc/discussion/win64_todo.txt @@ -1,4 +1,8 @@ -20011-11-4 +2011-11-04 ll_os.py has a problem with the file rwin32.py. Temporarily disabled for the win64_gborg branch. This needs to be -investigated and re-enabled. \ No newline at end of file +investigated and re-enabled. + +2011-11-05 +test_typed.py needs explicit tests to ensure that we +handle word sizes right. \ No newline at end of file diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -402,9 +402,13 @@ UTIMBUFP = lltype.Ptr(self.UTIMBUF) os_utime = self.llexternal('utime', [rffi.CCHARP, UTIMBUFP], rffi.INT) + if not _WIM32: + includes = ['sys/time.h'] + else: + includes = ['time.h'] class CConfig: _compilation_info_ = ExternalCompilationInfo( - includes=['sys/time.h'] + includes=includes ) HAVE_UTIMES = platform.Has('utimes') config = platform.configure(CConfig) @@ -414,9 +418,14 @@ if config['HAVE_UTIMES']: class CConfig: - _compilation_info_ = ExternalCompilationInfo( - includes = ['sys/time.h'] - ) + if not _WIN32: + _compilation_info_ = ExternalCompilationInfo( + includes = ['sys/time.h'] + ) + else: + _compilation_info_ = ExternalCompilationInfo( + includes = ['time.h'] + ) TIMEVAL = platform.Struct('struct timeval', [('tv_sec', rffi.LONG), ('tv_usec', rffi.LONG)]) config = platform.configure(CConfig) diff --git a/pypy/translator/c/primitive.py b/pypy/translator/c/primitive.py --- a/pypy/translator/c/primitive.py +++ b/pypy/translator/c/primitive.py @@ -1,7 +1,7 @@ import sys from pypy.rlib.objectmodel import Symbolic, ComputedIntSymbolic from pypy.rlib.objectmodel import CDefinedIntSymbolic -from pypy.rlib.rarithmetic import r_longlong +from pypy.rlib.rarithmetic import r_longlong, is_emulated_long from pypy.rlib.rfloat import isinf, isnan from pypy.rpython.lltypesystem.lltype import * from pypy.rpython.lltypesystem import rffi, llgroup @@ -204,6 +204,13 @@ GCREF: 'void* @', } +# support for win64, where sizeof(long) == 4 +if is_emulated_long: + PrimitiveType.update( { + Signed: '__int64 @', + Unsigned: 'unsigned __int64 @', + } ) + def define_c_primitive(ll_type, c_name, suffix=''): if ll_type in PrimitiveName: return @@ -221,7 +228,11 @@ define_c_primitive(rffi.INT, 'int') define_c_primitive(rffi.INT_real, 'int') define_c_primitive(rffi.UINT, 'unsigned int') -define_c_primitive(rffi.LONG, 'long', 'L') -define_c_primitive(rffi.ULONG, 'unsigned long', 'UL') +if is_emulated_long: # special case for win64 + define_c_primitive(rffi.LONG, '__int64', 'LL') + define_c_primitive(rffi.ULONG, 'unsigned __int64', 'ULL') +else: + define_c_primitive(rffi.LONG, 'long', 'L') + define_c_primitive(rffi.ULONG, 'unsigned long', 'UL') define_c_primitive(rffi.LONGLONG, 'long long', 'LL') define_c_primitive(rffi.ULONGLONG, 'unsigned long long', 'ULL') From noreply at buildbot.pypy.org Sat Nov 5 18:33:29 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 5 Nov 2011 18:33:29 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge default Message-ID: <20111105173329.30C3E820B3@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48799:9752b5362bec Date: 2011-11-05 18:32 +0100 http://bitbucket.org/pypy/pypy/changeset/9752b5362bec/ Log: merge default diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -247,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7407,7 +7407,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7434,7 +7434,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -106,46 +106,33 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): """A string built with newstr(const).""" _lengthbox = None # cache only - # Warning: an issue with VStringPlainValue is that sometimes it is - # initialized unpredictably by some copystrcontent. When this occurs - # we set self._chars to None. Be careful to check for is_valid(). - - def is_valid(self): - return self._chars is not None - - def _invalidate(self): - assert self.is_valid() - if self._lengthbox is None: - self._lengthbox = ConstInt(len(self._chars)) - self._chars = None - - def _really_force(self, optforce): - VAbstractStringValue._really_force(self, optforce) - assert self.box is not None - if self.is_valid(): - for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO: - # the string has uninitialized null bytes in it, so - # assume that it is forced for being further mutated - # (e.g. by copystrcontent). So it becomes invalid - # as a VStringPlainValue: the _chars must not be used - # any longer. - self._invalidate() - break - def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -153,43 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): - if not self.is_valid(): - return None for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_valid(): + if not self.is_virtual() and not self.is_completely_initialized(): return VAbstractStringValue.string_copy_parts( self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - assert self.is_valid() - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -197,6 +207,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -305,6 +316,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -315,6 +327,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -401,8 +414,8 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) - if (value.is_virtual() and isinstance(value, VStringPlainValue) - and value.is_valid()): + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense + if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: value.setitem(indexbox.getint(), self.getvalue(op.getarg(2))) @@ -433,10 +446,22 @@ value = value.vstr vindex = self.getvalue(fullindexbox) # - if (isinstance(value, VStringPlainValue) # even if no longer virtual - and value.is_valid()): # but make sure it is valid + if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - return value.getitem(vindex.box.getint()) + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -458,6 +483,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -529,12 +559,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstr.is_valid() - and vstart.is_constant() and vstop.is_constant()): - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,11 +312,10 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, len, step): self.space = space self.start = start - self.stop = stop - self.len = get_len_of_range(space, start, stop, step) + self.len = len self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -326,8 +325,9 @@ start, stop = 0, start else: stop = _toint(space, w_stop) + howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, howmany, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start, -self.step, True)) + self.len, -self.step)) def descr_reduce(self): space = self.space @@ -389,29 +389,25 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step, inclusive=False): + def __init__(self, space, current, remaining, step): self.space = space - self.current = start - self.stop = stop + self.current = current + self.remaining = remaining self.step = step - self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.inclusive: - if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - else: - if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - item = self.current - self.current = item + self.step - return self.space.wrap(item) + if self.remaining > 0: + item = self.current + self.current = item + self.step + self.remaining -= 1 + return self.space.wrap(item) + raise OperationError(self.space.w_StopIteration, self.space.w_None) - #def descr_len(self): - # return self.space.wrap(self.remaining) + def descr_len(self): + return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -422,7 +418,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.stop), w(self.step)] + tup = [w(self.current), w(self.remaining), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,8 +157,7 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] - + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, stop=int, step=int) -def xrangeiter_new(space, current, stop, step): + at unwrap_spec(current=int, remaining=int, step=int) +def xrangeiter_new(space, current, remaining, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, stop, step) + new_iter = W_XRangeIterator(space, current, remaining, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): From noreply at buildbot.pypy.org Sat Nov 5 18:36:09 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 18:36:09 +0100 (CET) Subject: [pypy-commit] pypy stm: Yay, test_ztranslation passes. The issue was direct memcpy on STRs, Message-ID: <20111105173609.4FB0A820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48800:e23ab2c195c1 Date: 2011-11-05 18:35 +0100 http://bitbucket.org/pypy/pypy/changeset/e23ab2c195c1/ Log: Yay, test_ztranslation passes. The issue was direct memcpy on STRs, which don't work any more right now. diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -154,29 +154,33 @@ assert (source_start + length <= dest_start or dest_start + length <= source_start) - TP = lltype.typeOf(source).TO - assert TP == lltype.typeOf(dest).TO - if isinstance(TP.OF, lltype.Ptr) and TP.OF.TO._gckind == 'gc': - # perform a write barrier that copies necessary flags from - # source to dest - if not llop.gc_writebarrier_before_copy(lltype.Bool, source, dest, - source_start, dest_start, - length): - # if the write barrier is not supported, copy by hand - for i in range(length): - dest[i + dest_start] = source[i + source_start] - return - source_addr = llmemory.cast_ptr_to_adr(source) - dest_addr = llmemory.cast_ptr_to_adr(dest) - cp_source_addr = (source_addr + llmemory.itemoffsetof(TP, 0) + - llmemory.sizeof(TP.OF) * source_start) - cp_dest_addr = (dest_addr + llmemory.itemoffsetof(TP, 0) + - llmemory.sizeof(TP.OF) * dest_start) + # XXX --- custom version for STM --- + # the old version first: +## TP = lltype.typeOf(source).TO +## assert TP == lltype.typeOf(dest).TO +## if isinstance(TP.OF, lltype.Ptr) and TP.OF.TO._gckind == 'gc': +## # perform a write barrier that copies necessary flags from +## # source to dest +## if not llop.gc_writebarrier_before_copy(lltype.Bool, source, dest, +## source_start, dest_start, +## length): +## # if the write barrier is not supported, copy by hand +## for i in range(length): +## dest[i + dest_start] = source[i + source_start] +## return +## source_addr = llmemory.cast_ptr_to_adr(source) +## dest_addr = llmemory.cast_ptr_to_adr(dest) +## cp_source_addr = (source_addr + llmemory.itemoffsetof(TP, 0) + +## llmemory.sizeof(TP.OF) * source_start) +## cp_dest_addr = (dest_addr + llmemory.itemoffsetof(TP, 0) + +## llmemory.sizeof(TP.OF) * dest_start) - llmemory.raw_memcopy(cp_source_addr, cp_dest_addr, - llmemory.sizeof(TP.OF) * length) - keepalive_until_here(source) - keepalive_until_here(dest) +## llmemory.raw_memcopy(cp_source_addr, cp_dest_addr, +## llmemory.sizeof(TP.OF) * length) +## keepalive_until_here(source) +## keepalive_until_here(dest) + for i in range(length): + dest[i + dest_start] = source[i + source_start] def ll_shrink_array(p, smallerlength): from pypy.rpython.lltypesystem.lloperation import llop @@ -195,16 +199,24 @@ field = getattr(p, TP._names[0]) setattr(newp, TP._names[0], field) - ARRAY = getattr(TP, TP._arrayfld) - offset = (llmemory.offsetof(TP, TP._arrayfld) + - llmemory.itemoffsetof(ARRAY, 0)) - source_addr = llmemory.cast_ptr_to_adr(p) + offset - dest_addr = llmemory.cast_ptr_to_adr(newp) + offset - llmemory.raw_memcopy(source_addr, dest_addr, - llmemory.sizeof(ARRAY.OF) * smallerlength) + # XXX --- custom version for STM --- + # the old version first: +## ARRAY = getattr(TP, TP._arrayfld) +## offset = (llmemory.offsetof(TP, TP._arrayfld) + +## llmemory.itemoffsetof(ARRAY, 0)) +## source_addr = llmemory.cast_ptr_to_adr(p) + offset +## dest_addr = llmemory.cast_ptr_to_adr(newp) + offset +## llmemory.raw_memcopy(source_addr, dest_addr, +## llmemory.sizeof(ARRAY.OF) * smallerlength) - keepalive_until_here(p) - keepalive_until_here(newp) +## keepalive_until_here(p) +## keepalive_until_here(newp) + + i = 0 + while i < smallerlength: + newp.chars[i] = p.chars[i] + i += 1 + return newp ll_shrink_array._annspecialcase_ = 'specialize:ll' ll_shrink_array._jit_look_inside_ = False diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -698,7 +698,10 @@ string is already nonmovable. Must be followed by a free_nonmovingbuffer call. """ - if rgc.can_move(data): + # XXX --- custom version for STM --- + # disabled the "else" part + ##if rgc.can_move(data): + if 1: count = len(data) buf = lltype.malloc(TYPEP.TO, count, flavor='raw') for i in range(count): @@ -720,11 +723,14 @@ # if 'buf' points inside 'data'. This is only possible if we # followed the 2nd case in get_nonmovingbuffer(); in the first case, # 'buf' points to its own raw-malloced memory. - data = llstrtype(data) - data_start = cast_ptr_to_adr(data) + \ - offsetof(STRTYPE, 'chars') + itemoffsetof(STRTYPE.chars, 0) - followed_2nd_path = (buf == cast(TYPEP, data_start)) - keepalive_until_here(data) + + # XXX --- custom version for STM --- +## data = llstrtype(data) +## data_start = cast_ptr_to_adr(data) + \ +## offsetof(STRTYPE, 'chars') + itemoffsetof(STRTYPE.chars, 0) +## followed_2nd_path = (buf == cast(TYPEP, data_start)) +## keepalive_until_here(data) + followed_2nd_path = False if not followed_2nd_path: lltype.free(buf, flavor='raw') free_nonmovingbuffer._annenforceargs_ = [strtype, None] diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -65,11 +65,17 @@ assert srcstart >= 0 assert dststart >= 0 assert length >= 0 - src = llmemory.cast_ptr_to_adr(src) + _str_ofs(srcstart) - dst = llmemory.cast_ptr_to_adr(dst) + _str_ofs(dststart) - llmemory.raw_memcopy(src, dst, llmemory.sizeof(CHAR_TP) * length) - keepalive_until_here(src) - keepalive_until_here(dst) + # XXX --- custom version for STM --- + # the old version first: +## src = llmemory.cast_ptr_to_adr(src) + _str_ofs(srcstart) +## dst = llmemory.cast_ptr_to_adr(dst) + _str_ofs(dststart) +## llmemory.raw_memcopy(src, dst, llmemory.sizeof(CHAR_TP) * length) +## keepalive_until_here(src) +## keepalive_until_here(dst) + i = 0 + while i < length: + dst.chars[dststart + i] = src.chars[srcstart + i] + i += 1 copy_string_contents._always_inline_ = True return func_with_new_name(copy_string_contents, 'copy_%s_contents' % name) diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -628,11 +628,7 @@ assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): - #return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) - # XXX hack: in case of STM, we cannot pass a pointer inside a - # GcStruct or GcArray to the C world, because some of the - # content may still live in the STM buffers - return hop.cast_result(rmodel.inputconst(lltype.Bool, True)) + return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) def gct_shrink_array(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -591,8 +591,14 @@ return '%s = %s.length;'%(self.expr(op.result), expr) + def _is_stm(self): + return getattr(self.db.translator, 'stm_transformation_applied', False) + def _OP_STM(self, op): if not hasattr(self, 'op_stm'): + if not self._is_stm(): + raise AssertionError("STM transformation not applied. " + "You need '--stm'") from pypy.translator.stm.funcgen import op_stm self.__class__.op_stm = op_stm return self.op_stm(op) @@ -681,10 +687,16 @@ self.expr(op.args[0]))) return '\t'.join(result) - OP_CAST_PTR_TO_ADR = OP_CAST_POINTER OP_CAST_ADR_TO_PTR = OP_CAST_POINTER OP_CAST_OPAQUE_PTR = OP_CAST_POINTER + def OP_CAST_PTR_TO_ADR(self, op): + if self.lltypemap(op.args[0]).TO._gckind == 'gc' and self._is_stm(): + raise AssertionError("cast_ptr_to_adr(gcref) is a bad idea " + "with STM. Consider checking config.stm " + "in %r" % (self.graph,)) + return self.OP_CAST_POINTER(op) + def OP_CAST_INT_TO_PTR(self, op): TYPE = self.lltypemap(op.result) typename = self.db.gettype(TYPE) diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -167,7 +167,5 @@ def op_stm(funcgen, op): - if not getattr(funcgen.db.translator, 'stm_transformation_applied', None): - raise AssertionError("STM transformation not applied. You need '--stm'") func = globals()[op.opname] return func(funcgen, op) From noreply at buildbot.pypy.org Sat Nov 5 18:36:10 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 18:36:10 +0100 (CET) Subject: [pypy-commit] pypy stm: update Message-ID: <20111105173610.7C28C820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48801:0984ab459199 Date: 2011-11-05 18:35 +0100 http://bitbucket.org/pypy/pypy/changeset/0984ab459199/ Log: update diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt --- a/pypy/doc/discussion/stm_todo.txt +++ b/pypy/doc/discussion/stm_todo.txt @@ -4,4 +4,4 @@ 2869bd44f830 Make the exc_data structure a thread-local. - c456e6bccc2a gc_can_move returns always True. + e23ab2c195c1 Added a number of "# XXX --- custom version for STM ---" From noreply at buildbot.pypy.org Sat Nov 5 18:53:22 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 18:53:22 +0100 (CET) Subject: [pypy-commit] pypy stm: malloc_varsize. Message-ID: <20111105175322.6B4E1820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48802:85ce549541e6 Date: 2011-11-05 18:40 +0100 http://bitbucket.org/pypy/pypy/changeset/85ce549541e6/ Log: malloc_varsize. diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -131,6 +131,11 @@ self.check_stm_mode(lambda m: m != "regular_transaction") return LLFrame.op_malloc(self, TYPE, flags) + def opstm_malloc_varsize(self, TYPE, flags, size): + if flags['flavor'] != 'gc': + self.check_stm_mode(lambda m: m != "regular_transaction") + return LLFrame.op_malloc_varsize(self, TYPE, flags, size) + # ---------- stm-only operations ---------- # Note that for these tests we assume no real multithreading, # so that we just emulate the operations the easy way diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -129,6 +129,12 @@ lltype.malloc(S) eval_stm_func(func, [], final_stm_mode="regular_transaction") +def test_supported_malloc_varsize(): + A = lltype.GcArray(lltype.Signed) + def func(): + lltype.malloc(A, 5) + eval_stm_func(func, [], final_stm_mode="regular_transaction") + def test_unsupported_malloc(): S = lltype.Struct('S', ('x', lltype.Signed)) # non-GC structure def func(): diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -194,6 +194,10 @@ flags = op.args[1].value return flags['flavor'] == 'gc' + def stt_malloc_varsize(self, newoperations, op): + flags = op.args[1].value + return flags['flavor'] == 'gc' + def stt_gc_stack_bottom(self, newoperations, op): self.seen_gc_stack_bottom = True newoperations.append(op) From noreply at buildbot.pypy.org Sat Nov 5 19:38:08 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 19:38:08 +0100 (CET) Subject: [pypy-commit] pypy stm: One more. Message-ID: <20111105183808.2206B820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48803:31f2ed861176 Date: 2011-11-05 19:35 +0100 http://bitbucket.org/pypy/pypy/changeset/31f2ed861176/ Log: One more. diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -186,8 +186,11 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rlib.objectmodel import keepalive_until_here - if llop.shrink_array(lltype.Bool, p, smallerlength): - return p # done by the GC + # XXX --- custom version for STM --- + # the next two lines are disabled: +## if llop.shrink_array(lltype.Bool, p, smallerlength): +## return p # done by the GC + # XXX we assume for now that the type of p is GcStruct containing a # variable array, with no further pointers anywhere, and exactly one # field in the fixed part -- like STR and UNICODE. From noreply at buildbot.pypy.org Sat Nov 5 19:38:09 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 19:38:09 +0100 (CET) Subject: [pypy-commit] pypy stm: update Message-ID: <20111105183809.4E6E982A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48804:79693d6aa4a9 Date: 2011-11-05 19:35 +0100 http://bitbucket.org/pypy/pypy/changeset/79693d6aa4a9/ Log: update diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt --- a/pypy/doc/discussion/stm_todo.txt +++ b/pypy/doc/discussion/stm_todo.txt @@ -5,3 +5,4 @@ 2869bd44f830 Make the exc_data structure a thread-local. e23ab2c195c1 Added a number of "# XXX --- custom version for STM ---" + 31f2ed861176 One more From noreply at buildbot.pypy.org Sat Nov 5 19:38:10 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 Nov 2011 19:38:10 +0100 (CET) Subject: [pypy-commit] pypy stm: Use default=False, and enable it only in -O2/O3/Ojit, like the Message-ID: <20111105183810.83739820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48805:5eb3fc96c892 Date: 2011-11-05 19:36 +0100 http://bitbucket.org/pypy/pypy/changeset/5eb3fc96c892/ Log: Use default=False, and enable it only in -O2/O3/Ojit, like the other optimizations. Fixes an issue if weakrefs are disabled. diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -333,7 +333,7 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=True), + default=False), ]), ]) @@ -362,6 +362,7 @@ config.objspace.std.suggest(optimized_list_getitem=True) config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) + config.objspace.std.suggest(withidentitydict=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) From noreply at buildbot.pypy.org Sun Nov 6 08:45:52 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 Nov 2011 08:45:52 +0100 (CET) Subject: [pypy-commit] pypy stm: Hard-code the STM logic here for now. Message-ID: <20111106074552.93230820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48806:0782958b144f Date: 2011-11-05 19:41 +0100 http://bitbucket.org/pypy/pypy/changeset/0782958b144f/ Log: Hard-code the STM logic here for now. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -306,11 +306,21 @@ AroundFnPtr = lltype.Ptr(lltype.FuncType([], lltype.Void)) class AroundState: - _alloc_flavor_ = "raw" + # XXX for stm with need to comment out this, and use a custom logic +## def _freeze_(self): +## self.before = None # or a regular RPython function +## self.after = None # or a regular RPython function +## return False + @staticmethod + def before(): + from pypy.translator.stm import rstm + rstm.commit_transaction() + @staticmethod + def after(): + from pypy.translator.stm import rstm + rstm.begin_inevitable_transaction() def _freeze_(self): - self.before = None # or a regular RPython function - self.after = None # or a regular RPython function - return False + return True aroundstate = AroundState() aroundstate._freeze_() diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -53,28 +53,9 @@ glob.done += 1 - -# __________ temp, move me somewhere else __________ - -from pypy.rlib.objectmodel import invoke_around_extcall - -def before_external_call(): - # this function must not raise, in such a way that the exception - # transformer knows that it cannot raise! - rstm.commit_transaction() -before_external_call._gctransformer_hint_cannot_collect_ = True -before_external_call._dont_reach_me_in_del_ = True - -def after_external_call(): - rstm.begin_inevitable_transaction() -after_external_call._gctransformer_hint_cannot_collect_ = True -after_external_call._dont_reach_me_in_del_ = True - - # __________ Entry point __________ def entry_point(argv): - invoke_around_extcall(before_external_call, after_external_call) print "hello world" glob.done = 0 for i in range(NUM_THREADS): From noreply at buildbot.pypy.org Sun Nov 6 08:45:53 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 Nov 2011 08:45:53 +0100 (CET) Subject: [pypy-commit] pypy default: Get rid of the test_distutils failure, which (for now) is really Message-ID: <20111106074553.CCAFB820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48807:16447af5aece Date: 2011-11-06 08:44 +0100 http://bitbucket.org/pypy/pypy/changeset/16447af5aece/ Log: Get rid of the test_distutils failure, which (for now) is really irrelevant. We know that sys.get_config('CC') returns None on pypy; it is someething that must either be carefully fixed in the context of the typical user, or just ignored. diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), From noreply at buildbot.pypy.org Sun Nov 6 08:45:55 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 Nov 2011 08:45:55 +0100 (CET) Subject: [pypy-commit] pypy default: This test no longer xfails. Message-ID: <20111106074555.04BB3820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48808:fa885677f10c Date: 2011-11-06 08:44 +0100 http://bitbucket.org/pypy/pypy/changeset/fa885677f10c/ Log: This test no longer xfails. diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): From noreply at buildbot.pypy.org Sun Nov 6 08:45:56 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 Nov 2011 08:45:56 +0100 (CET) Subject: [pypy-commit] pypy default: Fix test_bitfields just by relaxing this check here. Message-ID: <20111106074556.34E6E820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48809:ba608a73c81c Date: 2011-11-06 08:44 +0100 http://bitbucket.org/pypy/pypy/changeset/ba608a73c81c/ Log: Fix test_bitfields just by relaxing this check here. diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError From noreply at buildbot.pypy.org Sun Nov 6 10:46:21 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:46:21 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111106094621.789B1820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48810:865055ea9253 Date: 2011-11-05 18:14 +0100 http://bitbucket.org/pypy/pypy/changeset/865055ea9253/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -140,7 +140,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) @@ -158,7 +158,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 308 self.check_loop_count(1) - self.check_resops({'jump': 2, 'int_lshift': 2, 'int_gt': 2, + self.check_resops({'jump': 1, 'int_lshift': 2, 'int_gt': 2, 'int_mul_ovf': 1, 'int_add': 4, 'guard_true': 2, 'guard_no_overflow': 1, 'int_sub': 2}) From noreply at buildbot.pypy.org Sun Nov 6 10:46:22 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:46:22 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: support for bridges in progress Message-ID: <20111106094622.BF95A820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48811:5e84c483e93d Date: 2011-11-06 09:15 +0100 http://bitbucket.org/pypy/pypy/changeset/5e84c483e93d/ Log: support for bridges in progress diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -119,12 +119,14 @@ except InvalidLoop: return None loop.operations = part.operations + all_target_tokens = [part.operations[0].getdescr()] while part.operations[-1].getopnum() == rop.LABEL: inliner = Inliner(inputargs, jumpargs) part.operations = [part.operations[-1]] + \ [inliner.inline_op(h_ops[i]) for i in range(start, len(h_ops))] + \ [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jumpargs], None, descr=TargetToken(procedure_token))] + all_target_tokens.append(part.operations[0].getdescr()) inputargs = jumpargs jumpargs = part.operations[-1].getarglist() @@ -139,7 +141,7 @@ assert isinstance(box, Box) loop.token = procedure_token - + procedure_token.target_tokens = all_target_tokens send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) return procedure_token @@ -206,10 +208,6 @@ metainterp_sd.log("compiled new " + type) # metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) - short = loop.token.short_preamble - if short: - metainterp_sd.logger_ops.log_short_preamble(short[-1].inputargs, - short[-1].operations) # if metainterp_sd.warmrunnerdesc is not None: # for tests metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(loop.token) @@ -221,7 +219,8 @@ original_loop_token, operations, n) if not we_are_translated(): show_loop(metainterp_sd) - TreeLoop.check_consistency_of(inputargs, operations) + seen = dict.fromkeys(inputargs) + TreeLoop.check_consistency_of_branch(operations, seen) metainterp_sd.profiler.start_backend() operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") @@ -596,7 +595,6 @@ class ResumeFromInterpDescr(ResumeDescr): def __init__(self, original_greenkey): self.original_greenkey = original_greenkey - self.procedure_token = ProcedureToken() def compile_and_attach(self, metainterp, new_loop): # We managed to create a bridge going from the interpreter @@ -606,13 +604,13 @@ metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs - self.procedure_token.outermost_jitdriver_sd = jitdriver_sd - new_loop.token = self.procedure_token + procedure_token = make_procedure_token(jitdriver_sd) + new_loop.token = procedure_token send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time jitdriver_sd.warmstate.attach_procedure_to_interp( - self.original_greenkey, self.procedure_token) + self.original_greenkey, procedure_token) def reset_counter_from_failure(self): pass @@ -626,15 +624,17 @@ # The history contains new operations to attach as the code for the # failure of 'resumekey.guard_op'. - # + # # Attempt to use optimize_bridge(). This may return None in case # it does not work -- i.e. none of the existing old_loop_tokens match. new_loop = create_empty_loop(metainterp) new_loop.inputargs = inputargs = metainterp.history.inputargs[:] # clone ops, as optimize_bridge can mutate the ops - procedure_token = resumekey.procedure_token - new_loop.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ - [op.clone() for op in metainterp.history.operations] + + # A LABEL with descr=None will be killed by optimizer. Its only use + # is to pass along the inputargs to the optimizer + #[ResOperation(rop.LABEL, inputargs, None, descr=None)] + \ + new_loop.operations = [op.clone() for op in metainterp.history.operations] metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate if isinstance(resumekey, ResumeAtPositionDescr): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -734,7 +734,7 @@ was compiled; but the LoopDescr remains alive and points to the generated assembler. """ - short_preamble = None + target_tokens = None failed_states = None retraced_count = 0 terminating = False # see TerminatingLoopToken in compile.py diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -85,7 +85,7 @@ """Optimize loop.operations to remove internal overheadish operations. """ - optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts, False, False) + optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts, True, False) if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -65,46 +65,56 @@ def propagate_all_forward(self): loop = self.optimizer.loop - start_targetop = loop.operations[0] - assert start_targetop.getopnum() == rop.LABEL - loop.operations = loop.operations[1:] self.optimizer.clear_newoperations() - self.optimizer.send_extra_operation(start_targetop) + + import pdb; pdb.set_trace() + + start_label = loop.operations[0] + if start_label.getopnum() == rop.LABEL: + loop.operations = loop.operations[1:] + # We need to emit the label op before import_state() as emitting it + # will clear heap caches + self.optimizer.send_extra_operation(start_label) + else: + start_label = None - self.import_state(start_targetop) - - lastop = loop.operations[-1] - if lastop.getopnum() == rop.LABEL: + stop_label = loop.operations[-1] + if stop_label.getopnum() == rop.LABEL: loop.operations = loop.operations[:-1] else: - lastop = None - + stop_label = None + + self.import_state(start_label) self.optimizer.propagate_all_forward(clear=False) - if not lastop: + if not stop_label: self.optimizer.flush() loop.operations = self.optimizer.get_newoperations() - return - - if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) + elif not start_label: + #jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=stop_label.getdescr()) + self.optimizer.send_extra_operation(stop_label) + self.optimizer.flush() + loop.operations = self.optimizer.get_newoperations() + elif not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() loop.operations = self.optimizer.get_newoperations() - self.export_state(lastop) - loop.operations.append(lastop) + self.export_state(stop_label) + loop.operations.append(stop_label) else: - assert lastop.getdescr().procedure_token is start_targetop.getdescr().procedure_token - jumpop = ResOperation(rop.JUMP, lastop.getarglist(), None, descr=start_targetop.getdescr()) + assert stop_label.getdescr().procedure_token is start_label.getdescr().procedure_token + jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=start_label.getdescr()) self.close_loop(jumpop) - self.finilize_short_preamble(lastop) - start_targetop.getdescr().short_preamble = self.short + self.finilize_short_preamble(start_label) + start_label.getdescr().short_preamble = self.short def export_state(self, targetop): original_jump_args = targetop.getarglist() jump_args = [self.getvalue(a).get_key_box() for a in original_jump_args] + # FIXME: I dont thnik we need this anymore start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() assert isinstance(start_resumedescr, ResumeGuardDescr) start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) @@ -139,12 +149,17 @@ targetop.initarglist(inputargs) target_token.virtual_state = virtual_state target_token.short_preamble = [ResOperation(rop.LABEL, short_inputargs, None)] + target_token.start_resumedescr = start_resumedescr target_token.exported_state = ExportedState(constant_inputargs, short_boxes, - inputarg_setup_ops, self.optimizer, - start_resumedescr) + inputarg_setup_ops, self.optimizer) def import_state(self, targetop): + if not targetop: + # FIXME: Set up some sort of empty state with no virtuals? + return target_token = targetop.getdescr() + if not target_token: + return assert isinstance(target_token, TargetToken) exported_state = target_token.exported_state if not exported_state: @@ -160,7 +175,6 @@ self.short_seen[box] = True self.imported_state = exported_state self.inputargs = targetop.getarglist() - self.start_resumedescr = exported_state.start_resumedescr self.initial_virtual_state = target_token.virtual_state seen = {} @@ -275,9 +289,11 @@ raise InvalidLoop debug_stop('jit-log-virtualstate') - def finilize_short_preamble(self, lastop): + def finilize_short_preamble(self, start_label): short = self.short assert short[-1].getopnum() == rop.JUMP + target_token = start_label.getdescr() + assert isinstance(target_token, TargetToken) # Turn guards into conditional jumps to the preamble for i in range(len(short)): @@ -285,7 +301,7 @@ if op.is_guard(): op = op.clone() op.setfailargs(None) - descr = self.start_resumedescr.clone_if_mutable() + descr = target_token.start_resumedescr.clone_if_mutable() op.setdescr(descr) short[i] = op @@ -306,10 +322,8 @@ for i in range(len(short)): short[i] = inliner.inline_op(short[i]) - self.start_resumedescr = self.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(self.start_resumedescr) - #short_loop.start_resumedescr = descr - # FIXME: move this to targettoken + target_token.start_resumedescr = target_token.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(target_token.start_resumedescr) # Forget the values to allow them to be freed for box in short[0].getarglist(): @@ -422,66 +436,76 @@ def propagate_forward(self, op): if op.getopnum() == rop.JUMP: - loop_token = op.getdescr() - if not isinstance(loop_token, TargetToken): + self.emit_operation(op) + return + elif op.getopnum() == rop.LABEL: + target_token = op.getdescr() + assert isinstance(target_token, TargetToken) + procedure_token = target_token.procedure_token + if not procedure_token.target_tokens: self.emit_operation(op) return - short = loop_token.short_preamble - if short: - args = op.getarglist() - modifier = VirtualStateAdder(self.optimizer) - virtual_state = modifier.get_virtual_state(args) - debug_start('jit-log-virtualstate') - virtual_state.debug_print("Looking for ") - for sh in short: - ok = False - extra_guards = [] + args = op.getarglist() + modifier = VirtualStateAdder(self.optimizer) + virtual_state = modifier.get_virtual_state(args) + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") - bad = {} - debugmsg = 'Did not match ' - if sh.virtual_state.generalization_of(virtual_state, bad): - ok = True - debugmsg = 'Matched ' - else: - try: - cpu = self.optimizer.cpu - sh.virtual_state.generate_guards(virtual_state, + for target in procedure_token.target_tokens: + if not target.virtual_state: + continue + ok = False + extra_guards = [] + + bad = {} + debugmsg = 'Did not match ' + if target.virtual_state.generalization_of(virtual_state, bad): + ok = True + debugmsg = 'Matched ' + else: + try: + cpu = self.optimizer.cpu + target.virtual_state.generate_guards(virtual_state, args, cpu, extra_guards) - ok = True - debugmsg = 'Guarded to match ' - except InvalidLoop: - pass - sh.virtual_state.debug_print(debugmsg, bad) - - if ok: - debug_stop('jit-log-virtualstate') + ok = True + debugmsg = 'Guarded to match ' + except InvalidLoop: + pass + target.virtual_state.debug_print(debugmsg, bad) - values = [self.getvalue(arg) - for arg in op.getarglist()] - args = sh.virtual_state.make_inputargs(values, self.optimizer, + if ok: + debug_stop('jit-log-virtualstate') + + values = [self.getvalue(arg) + for arg in op.getarglist()] + args = target.virtual_state.make_inputargs(values, self.optimizer, keyboxes=True) - inliner = Inliner(sh.inputargs, args) - - for guard in extra_guards: - if guard.is_guard(): - descr = sh.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(descr) - guard.setdescr(descr) - self.emit_operation(guard) - - try: - for shop in sh.operations: - newop = inliner.inline_op(shop) - self.emit_operation(newop) - except InvalidLoop: - debug_print("Inlining failed unexpectedly", - "jumping to preamble instead") - self.emit_operation(op) - return - debug_stop('jit-log-virtualstate') + short_inputargs = target.short_preamble[0].getarglist() + inliner = Inliner(short_inputargs, args) + + for guard in extra_guards: + if guard.is_guard(): + descr = target.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(descr) + guard.setdescr(descr) + self.emit_operation(guard) + + try: + for shop in target.short_preamble[1:]: + newop = inliner.inline_op(shop) + self.emit_operation(newop) + except InvalidLoop: + debug_print("Inlining failed unexpectedly", + "jumping to preamble instead") + assert False, "FIXME: Construct jump op" + self.emit_operation(op) + return + debug_stop('jit-log-virtualstate') + + if False: # FIXME: retrace retraced_count = loop_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit if not self.retraced and retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48812:a5e1ecd1e6cf Date: 2011-11-06 10:13 +0100 http://bitbucket.org/pypy/pypy/changeset/a5e1ecd1e6cf/ Log: first test with a brigde passing diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -77,7 +77,8 @@ # test_memgr.py) if descr.procedure_token is not looptoken: looptoken.record_jump_to(descr.procedure_token) - op._descr = None # clear reference, mostly for tests + # FIXME: Why? How is the jump supposed to work without a target?? + #op._descr = None # clear reference, mostly for tests if not we_are_translated(): op._jumptarget_number = descr.procedure_token.number # record this looptoken on the QuasiImmut used in the code @@ -631,9 +632,6 @@ new_loop.inputargs = inputargs = metainterp.history.inputargs[:] # clone ops, as optimize_bridge can mutate the ops - # A LABEL with descr=None will be killed by optimizer. Its only use - # is to pass along the inputargs to the optimizer - #[ResOperation(rop.LABEL, inputargs, None, descr=None)] + \ new_loop.operations = [op.clone() for op in metainterp.history.operations] metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate @@ -653,6 +651,7 @@ # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) resumekey.compile_and_attach(metainterp, new_loop) record_loop_or_bridge(metainterp_sd, new_loop) + return new_loop.operations[-1].getdescr() # ____________________________________________________________ diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -67,8 +67,6 @@ loop = self.optimizer.loop self.optimizer.clear_newoperations() - import pdb; pdb.set_trace() - start_label = loop.operations[0] if start_label.getopnum() == rop.LABEL: loop.operations = loop.operations[1:] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -235,7 +235,7 @@ assert res == 1692 self.check_loop_count(3) self.check_resops({'int_lt': 2, 'int_gt': 4, 'guard_false': 2, - 'guard_true': 4, 'int_sub': 4, 'jump': 4, + 'guard_true': 4, 'int_sub': 4, 'jump': 3, 'int_mul': 3, 'int_add': 4}) def test_loop_invariant_intbox(self): From noreply at buildbot.pypy.org Sun Nov 6 10:46:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:46:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: we now need inputargs again... Message-ID: <20111106094625.30225820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48813:72538680f42b Date: 2011-11-06 10:21 +0100 http://bitbucket.org/pypy/pypy/changeset/72538680f42b/ Log: we now need inputargs again... diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -825,7 +825,8 @@ def check_consistency(self): # for testing "NOT_RPYTHON" - self.check_consistency_of(self.operations) + seen = dict.fromkeys(self.inputargs) + self.check_consistency_of_branch(self.operations, seen) @staticmethod def check_consistency_of(operations): From noreply at buildbot.pypy.org Sun Nov 6 10:46:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:46:26 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: we still need it here Message-ID: <20111106094626.5DA1B820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48814:88a4fdc05e1f Date: 2011-11-06 10:25 +0100 http://bitbucket.org/pypy/pypy/changeset/88a4fdc05e1f/ Log: we still need it here diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -174,6 +174,7 @@ self.imported_state = exported_state self.inputargs = targetop.getarglist() self.initial_virtual_state = target_token.virtual_state + self.start_resumedescr = target_token.start_resumedescr seen = {} for box in self.inputargs: From noreply at buildbot.pypy.org Sun Nov 6 10:46:27 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:46:27 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: reintroduce inputargs on loops Message-ID: <20111106094627.955CA820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48815:50843084d602 Date: 2011-11-06 10:46 +0100 http://bitbucket.org/pypy/pypy/changeset/50843084d602/ Log: reintroduce inputargs on loops diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -772,6 +772,7 @@ self.exported_state = None class TreeLoop(object): + inputargs = None operations = None token = None call_pure_results = None @@ -784,20 +785,6 @@ # ops of the kind 'guard_xxx' contain a further list of operations, # which may itself contain 'guard_xxx' and so on, making a tree. - _inputargs = None - - def get_inputargs(self): - "NOT_RPYTHON" - if self._inputargs is not None: - return self._inputargs - assert self.operations[0].getopnum() == rop.LABEL - return self.operations[0].getarglist() - - def set_inputargs(self, inputargs): - self._inputargs = inputargs - - inputargs = property(get_inputargs, set_inputargs) - def _all_operations(self, omit_finish=False): "NOT_RPYTHON" result = [] @@ -825,14 +812,15 @@ def check_consistency(self): # for testing "NOT_RPYTHON" - seen = dict.fromkeys(self.inputargs) - self.check_consistency_of_branch(self.operations, seen) + self.check_consistency_of(self.inputargs, self.operations) @staticmethod - def check_consistency_of(operations): - assert operations[0].getopnum() == rop.LABEL - inputargs = operations[0].getarglist() + def check_consistency_of(inputargs, operations): + for box in inputargs: + assert isinstance(box, Box), "Loop.inputargs contains %r" % (box,) seen = dict.fromkeys(inputargs) + assert len(seen) == len(inputargs), ( + "duplicate Box in the Loop.inputargs") TreeLoop.check_consistency_of_branch(operations, seen) @staticmethod @@ -875,7 +863,7 @@ def dump(self): # RPython-friendly - print '%r: ' % self + print '%r: inputargs =' % self, self._dump_args(self.inputargs) for op in self.operations: args = op.getarglist() print '\t', op.getopname(), self._dump_args(args), \ diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7,7 +7,7 @@ from pypy.jit.metainterp.optimizeopt import optimize_loop_1, ALL_OPTS_DICT, build_opt_chain from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt -from pypy.jit.metainterp.history import TreeLoop, LoopToken, TargetToken +from pypy.jit.metainterp.history import TreeLoop, ProcedureToken, TargetToken from pypy.jit.metainterp.jitprof import EmptyProfiler from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation @@ -83,18 +83,16 @@ jumpop = operations[-1] assert jumpop.getopnum() == rop.JUMP inputargs = loop.inputargs - loop.inputargs = None jump_args = jumpop.getarglist()[:] operations = operations[:-1] cloned_operations = [op.clone() for op in operations] preamble = TreeLoop('preamble') - #loop.preamble.inputargs = loop.inputargs - #loop.preamble.token = LoopToken() + preamble.inputargs = inputargs preamble.start_resumedescr = FakeDescr() - token = LoopToken() # FIXME: Make this a MergePointToken? + token = ProcedureToken() preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ operations + \ [ResOperation(rop.LABEL, jump_args, None, descr=TargetToken(token))] @@ -107,6 +105,8 @@ [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jump_args], None, descr=TargetToken(token))] #[inliner.inline_op(jumpop)] + assert loop.operations[0].getopnum() == rop.LABEL + loop.inputargs = loop.operations[0].getarglist() self._do_optimize_loop(loop, call_pure_results) extra_same_as = [] @@ -146,6 +146,8 @@ assert preamble.operations[-1].getdescr() == loop.operations[0].getdescr() if expected_short: short_preamble = TreeLoop('short preamble') + assert short[0].getopnum() == rop.LABEL + short_preamble.inputargs = short[0].getarglist() short_preamble.operations = short self.assert_equal(short_preamble, convert_old_style_to_targets(expected_short, jump=True), text_right='expected short preamble') @@ -155,6 +157,7 @@ def convert_old_style_to_targets(loop, jump): newloop = TreeLoop(loop.name) + newloop.inputargs = loop.inputargs newloop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=FakeDescr())] + \ loop.operations if not jump: diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -1,7 +1,7 @@ from pypy.config.pypyoption import get_pypy_config -from pypy.jit.metainterp.history import LoopToken, ConstInt, History, Stats +from pypy.jit.metainterp.history import ProcedureToken, TargetToken, ConstInt, History, Stats from pypy.jit.metainterp.history import BoxInt, INT -from pypy.jit.metainterp.compile import insert_loop_token, compile_new_loop +from pypy.jit.metainterp.compile import insert_loop_token, compile_procedure from pypy.jit.metainterp.compile import ResumeGuardDescr from pypy.jit.metainterp.compile import ResumeGuardCountersInt from pypy.jit.metainterp.compile import compile_tmp_callback diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -70,7 +70,7 @@ self.invent_fail_descr = invent_fail_descr self.nonstrict = nonstrict self.model = get_model(self.use_mock_model) - self.looptoken = self.model.LoopToken() + self.looptoken = self.model.ProcedureToken() def get_const(self, name, typ): if self._consts is None: diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -3,7 +3,7 @@ def get_real_model(): class LoopModel(object): - from pypy.jit.metainterp.history import TreeLoop, LoopToken + from pypy.jit.metainterp.history import TreeLoop, ProcedureToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat from pypy.jit.metainterp.history import BasicFailDescr From noreply at buildbot.pypy.org Sun Nov 6 10:49:59 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 10:49:59 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix Message-ID: <20111106094959.A6AAE820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48816:0faba264d761 Date: 2011-11-06 10:49 +0100 http://bitbucket.org/pypy/pypy/changeset/0faba264d761/ Log: fix diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -77,8 +77,7 @@ # test_memgr.py) if descr.procedure_token is not looptoken: looptoken.record_jump_to(descr.procedure_token) - # FIXME: Why? How is the jump supposed to work without a target?? - #op._descr = None # clear reference, mostly for tests + op._descr = None # clear reference, mostly for tests if not we_are_translated(): op._jumptarget_number = descr.procedure_token.number # record this looptoken on the QuasiImmut used in the code @@ -649,10 +648,11 @@ return None # We managed to create a bridge. Dispatch to resumekey to # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) + target_token = new_loop.operations[-1].getdescr() resumekey.compile_and_attach(metainterp, new_loop) record_loop_or_bridge(metainterp_sd, new_loop) - return new_loop.operations[-1].getdescr() + return target_token # ____________________________________________________________ From noreply at buildbot.pypy.org Sun Nov 6 11:14:29 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 6 Nov 2011 11:14:29 +0100 (CET) Subject: [pypy-commit] pypy default: expose size attribute Message-ID: <20111106101429.E7635820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48817:29573471a8fd Date: 2011-11-06 11:13 +0100 http://bitbucket.org/pypy/pypy/changeset/29573471a8fd/ Log: expose size attribute diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -201,6 +201,9 @@ def descr_get_shape(self, space): return space.newtuple([self.descr_len(space)]) + def descr_get_size(self, space): + return space.wrap(self.size) + def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) @@ -607,6 +610,7 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape), + size = GetSetProperty(BaseArray.descr_get_size), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -17,6 +17,12 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + assert array([1, 2, 3]).size == 3 + def test_empty(self): """ Test that empty() works. From noreply at buildbot.pypy.org Sun Nov 6 11:14:31 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 6 Nov 2011 11:14:31 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20111106101431.A1ADA820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48818:5a19f9787d6b Date: 2011-11-06 11:14 +0100 http://bitbucket.org/pypy/pypy/changeset/5a19f9787d6b/ Log: merge default diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -247,7 +247,6 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4123,6 +4123,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4915,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4947,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2168,13 +2168,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -7407,7 +7407,7 @@ expected = """ [p22, p18, i1, i2] call(i2, descr=nonwritedescr) - setfield_gc(p22, i1, descr=valuedescr) + setfield_gc(p22, i1, descr=valuedescr) jump(p22, p18, i1, i1) """ self.optimize_loop(ops, expected, preamble, expected_short=short) @@ -7434,7 +7434,7 @@ def test_cache_setarrayitem_across_loop_boundaries(self): ops = """ [p1] - p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) guard_nonnull_class(p2, ConstClass(node_vtable)) [] call(p2, descr=nonwritedescr) p3 = new_with_vtable(ConstClass(node_vtable)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,6 +1,6 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method @@ -106,7 +106,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +119,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,42 +140,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -169,6 +207,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -277,6 +316,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -287,6 +327,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -373,6 +414,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -406,11 +448,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -432,6 +483,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -503,19 +559,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue, if any of - # the characters is unitialized we don't do this special slice, we - # do the regular copy contents. - for i in range(vstart.box.getint(), vstop.box.getint()): - if vstr.getitem(i) is optimizer.CVAL_UNINITIALIZED_ZERO: - break - else: - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,11 +312,10 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, len, step): self.space = space self.start = start - self.stop = stop - self.len = get_len_of_range(space, start, stop, step) + self.len = len self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -326,8 +325,9 @@ start, stop = 0, start else: stop = _toint(space, w_stop) + howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, howmany, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start - 1, -self.step)) + self.len, -self.step)) def descr_reduce(self): space = self.space @@ -389,24 +389,25 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, current, remaining, step): self.space = space - self.current = start - self.stop = stop + self.current = current + self.remaining = remaining self.step = step def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if (self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop): + if self.remaining > 0: item = self.current self.current = item + self.step + self.remaining -= 1 return self.space.wrap(item) raise OperationError(self.space.w_StopIteration, self.space.w_None) - #def descr_len(self): - # return self.space.wrap(self.remaining) + def descr_len(self): + return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -417,7 +418,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.stop), w(self.step)] + tup = [w(self.current), w(self.remaining), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, stop=int, step=int) -def xrangeiter_new(space, current, stop, step): + at unwrap_spec(current=int, remaining=int, step=int) +def xrangeiter_new(space, current, remaining, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, stop, step) + new_iter = W_XRangeIterator(space, current, remaining, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -283,17 +283,9 @@ return space.wrap(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -419,8 +419,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +420,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +435,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +475,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +485,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +628,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +650,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -60,8 +60,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -167,17 +167,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -475,42 +475,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +506,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +612,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) From noreply at buildbot.pypy.org Sun Nov 6 11:29:13 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 Nov 2011 11:29:13 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Start fixing graphpage.py. Message-ID: <20111106102913.76EB2820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48819:1772c5517e92 Date: 2011-11-06 11:28 +0100 http://bitbucket.org/pypy/pypy/changeset/1772c5517e92/ Log: Start fixing graphpage.py. diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -24,7 +24,7 @@ from pypy.jit.metainterp.jitprof import ABORT_BRIDGE raise SwitchToBlackhole(ABORT_BRIDGE) -def show_loop(metainterp_sd, loop=None, error=None): +def show_procedures(metainterp_sd, procedure=None, error=None): # debugging if option.view or option.viewloops: if error: @@ -33,11 +33,12 @@ errmsg += ': ' + str(error) else: errmsg = None - if loop is None: # or type(loop) is TerminatingLoop: - extraloops = [] + if procedure is None: + extraprocedures = [] else: - extraloops = [loop] - metainterp_sd.stats.view(errmsg=errmsg, extraloops=extraloops) + extraprocedures = [procedure] + metainterp_sd.stats.view(errmsg=errmsg, + extraprocedures=extraprocedures) def create_empty_loop(metainterp, name_prefix=''): name = metainterp.staticdata.stats.name_for_new_loop() @@ -78,8 +79,6 @@ if descr.procedure_token is not looptoken: looptoken.record_jump_to(descr.procedure_token) op._descr = None # clear reference, mostly for tests - if not we_are_translated(): - op._jumptarget_number = descr.procedure_token.number # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -187,7 +186,7 @@ globaldata.loopnumbering += 1 if not we_are_translated(): - show_loop(metainterp_sd, loop) + show_procedures(metainterp_sd, loop) loop.check_consistency() operations = get_deep_immutable_oplist(loop.operations) @@ -218,7 +217,7 @@ jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, original_loop_token, operations, n) if not we_are_translated(): - show_loop(metainterp_sd) + show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) metainterp_sd.profiler.start_backend() diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,8 +12,9 @@ def get_display_text(self): return None -def display_loops(loops, errmsg=None, highlight_loops={}): - graphs = [(loop, highlight_loops.get(loop, 0)) for loop in loops] +def display_procedures(procedures, errmsg=None, highlight_procedures={}): + graphs = [(procedure, highlight_procedures.get(procedure, 0)) + for procedure in procedures] for graph, highlight in graphs: for op in graph.get_operations(): if is_interesting_guard(op): @@ -31,12 +32,6 @@ def compute(self, graphs, errmsg=None): resopgen = ResOpGen() for graph, highlight in graphs: - if getattr(graph, 'token', None) is not None: - resopgen.jumps_to_graphs[graph.token] = graph - if getattr(graph, '_looptoken_number', None) is not None: - resopgen.jumps_to_graphs[graph._looptoken_number] = graph - - for graph, highlight in graphs: resopgen.add_graph(graph, highlight) if errmsg: resopgen.set_errmsg(errmsg) @@ -54,7 +49,7 @@ self.block_starters = {} # {graphindex: {set-of-operation-indices}} self.all_operations = {} self.errmsg = None - self.jumps_to_graphs = {} + self.target_tokens = {} def op_name(self, graphindex, opindex): return 'g%dop%d' % (graphindex, opindex) @@ -73,16 +68,21 @@ for graphindex in range(len(self.graphs)): self.block_starters[graphindex] = {0: True} for graphindex, graph in enumerate(self.graphs): - last_was_mergepoint = False + mergepointblock = None for i, op in enumerate(graph.get_operations()): if is_interesting_guard(op): self.mark_starter(graphindex, i+1) if op.getopnum() == rop.DEBUG_MERGE_POINT: - if not last_was_mergepoint: - last_was_mergepoint = True - self.mark_starter(graphindex, i) + if mergepointblock is None: + mergepointblock = i + elif op.getopnum() == rop.LABEL: + self.mark_starter(graphindex, i) + self.target_tokens[op.getdescr()] = (graphindex, i) + mergepointblock = i else: - last_was_mergepoint = False + if mergepointblock is not None: + self.mark_starter(graphindex, mergepointblock) + mergepointblock = None def set_errmsg(self, errmsg): self.errmsg = errmsg @@ -172,24 +172,10 @@ (graphindex, opindex)) break if op.getopnum() == rop.JUMP: - tgt_g = -1 - tgt = None - tgt_number = getattr(op, '_jumptarget_number', None) - if tgt_number is not None: - tgt = self.jumps_to_graphs.get(tgt_number) - else: - tgt_descr = op.getdescr() - if tgt_descr is None: - tgt_g = graphindex - else: - tgt = self.jumps_to_graphs.get(tgt_descr.number) - if tgt is None: - tgt = self.jumps_to_graphs.get(tgt_descr) - if tgt is not None: - tgt_g = self.graphs.index(tgt) - if tgt_g != -1: + tgt_descr = op.getdescr() + if tgt_descr in self.target_tokens: self.genedge((graphindex, opstartindex), - (tgt_g, 0), + self.target_tokens[tgt_descr], weight="0") lines.append("") label = "\\l".join(lines) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -803,7 +803,7 @@ return self.operations def get_display_text(self): # for graphpage.py - return self.name + return self.name + '\n' + repr(self.inputargs) def show(self, errmsg=None): "NOT_RPYTHON" @@ -1066,19 +1066,19 @@ if option.view: self.view() - def view(self, errmsg=None, extraloops=[]): - from pypy.jit.metainterp.graphpage import display_loops - loops = self.get_all_loops()[:] - for loop in extraloops: - if loop in loops: - loops.remove(loop) - loops.append(loop) - highlight_loops = dict.fromkeys(extraloops, 1) - for loop in loops: - if hasattr(loop, '_looptoken_number') and ( - loop._looptoken_number in self.invalidated_token_numbers): - highlight_loops.setdefault(loop, 2) - display_loops(loops, errmsg, highlight_loops) + def view(self, errmsg=None, extraprocedures=[]): + from pypy.jit.metainterp.graphpage import display_procedures + procedures = self.get_all_loops()[:] + for procedure in extraprocedures: + if procedure in procedures: + procedures.remove(procedure) + procedures.append(procedure) + highlight_procedures = dict.fromkeys(extraprocedures, 1) + for procedure in procedures: + if hasattr(procedure, '_looptoken_number') and ( + procedure._looptoken_number in self.invalidated_token_numbers): + highlight_procedures.setdefault(procedure, 2) + display_procedures(procedures, errmsg, highlight_procedures) # ---------------------------------------------------------------- From noreply at buildbot.pypy.org Sun Nov 6 14:10:45 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 14:10:45 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: a first failed atempt to support retrace, we need to redesign... Message-ID: <20111106131045.86C00820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48820:a91e6cab9119 Date: 2011-11-06 12:46 +0100 http://bitbucket.org/pypy/pypy/changeset/a91e6cab9119/ Log: a first failed atempt to support retrace, we need to redesign... diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -94,7 +94,7 @@ def compile_procedure(metainterp, greenkey, start, inputargs, jumpargs, - start_resumedescr, full_preamble_needed=True): + start_resumedescr, full_preamble_needed=True, partial_trace=None): """Try to compile a new procedure by closing the current history back to the first operation. """ @@ -104,22 +104,29 @@ metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd - loop = create_empty_loop(metainterp) - loop.inputargs = inputargs[:] - - procedure_token = make_procedure_token(jitdriver_sd) - part = create_empty_loop(metainterp) - h_ops = history.operations - part.start_resumedescr = start_resumedescr - part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ - [h_ops[i].clone() for i in range(start, len(h_ops))] + \ - [ResOperation(rop.LABEL, jumpargs, None, descr=TargetToken(procedure_token))] - try: - optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) - except InvalidLoop: - return None + if partial_trace: + part = partial_trace + procedure_token = metainterp.get_procedure_token(greenkey) + assert procedure_token + all_target_tokens = [] + else: + procedure_token = make_procedure_token(jitdriver_sd) + part = create_empty_loop(metainterp) + part.inputargs = inputargs[:] + h_ops = history.operations + part.start_resumedescr = start_resumedescr + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ + [h_ops[i].clone() for i in range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, jumpargs, None, descr=TargetToken(procedure_token))] + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + all_target_tokens = [part.operations[0].getdescr()] + + loop = create_empty_loop(metainterp) + loop.inputargs = part.inputargs loop.operations = part.operations - all_target_tokens = [part.operations[0].getdescr()] while part.operations[-1].getopnum() == rop.LABEL: inliner = Inliner(inputargs, jumpargs) part.operations = [part.operations[-1]] + \ @@ -627,11 +634,11 @@ # # Attempt to use optimize_bridge(). This may return None in case # it does not work -- i.e. none of the existing old_loop_tokens match. - new_loop = create_empty_loop(metainterp) - new_loop.inputargs = inputargs = metainterp.history.inputargs[:] + new_trace = create_empty_loop(metainterp) + new_trace.inputargs = inputargs = metainterp.history.inputargs[:] # clone ops, as optimize_bridge can mutate the ops - new_loop.operations = [op.clone() for op in metainterp.history.operations] + new_trace.operations = [op.clone() for op in metainterp.history.operations] metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate if isinstance(resumekey, ResumeAtPositionDescr): @@ -639,20 +646,25 @@ else: inline_short_preamble = True try: - optimize_trace(metainterp_sd, new_loop, state.enable_opts) + optimize_trace(metainterp_sd, new_trace, state.enable_opts) except InvalidLoop: debug_print("compile_new_bridge: got an InvalidLoop") # XXX I am fairly convinced that optimize_bridge cannot actually raise # InvalidLoop debug_print('InvalidLoop in compile_new_bridge') return None - # We managed to create a bridge. Dispatch to resumekey to - # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) - target_token = new_loop.operations[-1].getdescr() - resumekey.compile_and_attach(metainterp, new_loop) - record_loop_or_bridge(metainterp_sd, new_loop) - return target_token + if new_trace.operations[-1].getopnum() == rop.JUMP: + # We managed to create a bridge. Dispatch to resumekey to + # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) + target_token = new_trace.operations[-1].getdescr() + resumekey.compile_and_attach(metainterp, new_trace) + record_loop_or_bridge(metainterp_sd, new_trace) + return target_token + else: + metainterp.retrace_needed(new_trace) + return None + # ____________________________________________________________ diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -88,12 +88,18 @@ if not stop_label: self.optimizer.flush() loop.operations = self.optimizer.get_newoperations() + return elif not start_label: - #jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=stop_label.getdescr()) - self.optimizer.send_extra_operation(stop_label) - self.optimizer.flush() - loop.operations = self.optimizer.get_newoperations() - elif not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) + try: + self.optimizer.send_extra_operation(stop_label) + except RetraceLoop: + pass + else: + self.optimizer.flush() + loop.operations = self.optimizer.get_newoperations() + return + + if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() @@ -152,6 +158,7 @@ inputarg_setup_ops, self.optimizer) def import_state(self, targetop): + self.did_peel_one = False if not targetop: # FIXME: Set up some sort of empty state with no virtuals? return @@ -161,7 +168,6 @@ assert isinstance(target_token, TargetToken) exported_state = target_token.exported_state if not exported_state: - self.did_peel_one = False # FIXME: Set up some sort of empty state with no virtuals return self.did_peel_one = True @@ -504,29 +510,31 @@ return debug_stop('jit-log-virtualstate') - if False: # FIXME: retrace - retraced_count = loop_token.retraced_count - limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit - if not self.retraced and retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48821:cb3302d943c7 Date: 2011-11-06 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/cb3302d943c7/ Log: Rename ProcedureToken to JitCellToken. It now refers to all compiled traces starting from a specific JitCell and it is used as a decsr of jumps produced by the frontend to indicate the target of the jump. The optimizer will the convert this to a jump to a TargetToken (which refers to a specific label in an already compiled trace). If that is not yet possible it will be converted into a label resop with a new TargetTocken diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -9,7 +9,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.jit.metainterp.resoperation import ResOperation, rop, get_deep_immutable_oplist -from pypy.jit.metainterp.history import TreeLoop, Box, History, ProcedureToken, TargetToken +from pypy.jit.metainterp.history import TreeLoop, Box, History, JitCellToken, TargetToken from pypy.jit.metainterp.history import AbstractFailDescr, BoxInt from pypy.jit.metainterp.history import BoxPtr, BoxObj, BoxFloat, Const from pypy.jit.metainterp import history @@ -47,7 +47,7 @@ def make_procedure_token(jitdriver_sd): - procedure_token = ProcedureToken() + procedure_token = JitCellToken() procedure_token.outermost_jitdriver_sd = jitdriver_sd return procedure_token @@ -68,7 +68,7 @@ n = descr.index if n >= 0: # we also record the resumedescr number looptoken.compiled_loop_token.record_faildescr_index(n) - elif isinstance(descr, ProcedureToken): + elif isinstance(descr, JitCellToken): assert False, "FIXME" elif isinstance(descr, TargetToken): # for a JUMP or a CALL_ASSEMBLER: record it as a potential jump. @@ -285,7 +285,7 @@ raise metainterp_sd.ExitFrameWithExceptionRef(cpu, value) -class TerminatingLoopToken(ProcedureToken): # FIXME:!! +class TerminatingLoopToken(JitCellToken): # FIXME: kill? terminating = True def __init__(self, nargs, finishdescr): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -727,7 +727,7 @@ # of operations. Each branch ends in a jump which can go either to # the top of the same loop, or to another TreeLoop; or it ends in a FINISH. -class ProcedureToken(AbstractDescr): +class JitCellToken(AbstractDescr): """Used for rop.JUMP, giving the target of the jump. This is different from TreeLoop: the TreeLoop class contains the whole loop, including 'operations', and goes away after the loop @@ -766,19 +766,22 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self, procedure_token): - self.procedure_token = procedure_token + def __init__(self, cell_token): + self.cell_token = cell_token self.virtual_state = None self.exported_state = None class TreeLoop(object): inputargs = None operations = None - token = None call_pure_results = None logops = None quasi_immutable_deps = None + def _token(*args): + raise Exception("TreeLoop.token is killed") + token = property(_token, _token) + def __init__(self, name): self.name = name # self.operations = list of ResOperations diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7,7 +7,7 @@ from pypy.jit.metainterp.optimizeopt import optimize_loop_1, ALL_OPTS_DICT, build_opt_chain from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt -from pypy.jit.metainterp.history import TreeLoop, ProcedureToken, TargetToken +from pypy.jit.metainterp.history import TreeLoop, JitCellToken, TargetToken from pypy.jit.metainterp.jitprof import EmptyProfiler from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation @@ -92,19 +92,22 @@ preamble.inputargs = inputargs preamble.start_resumedescr = FakeDescr() - token = ProcedureToken() + token = JitCellToken() preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ operations + \ - [ResOperation(rop.LABEL, jump_args, None, descr=TargetToken(token))] + [ResOperation(rop.JUMP, jump_args, None, descr=token)] self._do_optimize_loop(preamble, call_pure_results) + assert preamble.operations[-1].getopnum() == rop.LABEL + inliner = Inliner(inputargs, jump_args) loop.start_resumedescr = preamble.start_resumedescr loop.operations = [preamble.operations[-1]] + \ [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ - [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jump_args], - None, descr=TargetToken(token))] + [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jump_args], + None, descr=token)] #[inliner.inline_op(jumpop)] + assert loop.operations[-1].getopnum() == rop.JUMP assert loop.operations[0].getopnum() == rop.LABEL loop.inputargs = loop.operations[0].getarglist() diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -1,7 +1,7 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.optimizeopt.virtualstate import VirtualStateAdder, ShortBoxes from pypy.jit.metainterp.compile import ResumeGuardDescr -from pypy.jit.metainterp.history import TreeLoop, TargetToken +from pypy.jit.metainterp.history import TreeLoop, TargetToken, JitCellToken from pypy.jit.metainterp.jitexc import JitException from pypy.jit.metainterp.optimize import InvalidLoop, RetraceLoop from pypy.jit.metainterp.optimizeopt.optimizer import * @@ -67,6 +67,7 @@ loop = self.optimizer.loop self.optimizer.clear_newoperations() + start_label = loop.operations[0] if start_label.getopnum() == rop.LABEL: loop.operations = loop.operations[1:] @@ -75,39 +76,31 @@ self.optimizer.send_extra_operation(start_label) else: start_label = None - - stop_label = loop.operations[-1] - if stop_label.getopnum() == rop.LABEL: - loop.operations = loop.operations[:-1] - else: - stop_label = None + + jumpop = loop.operations[-1] + assert jumpop.getopnum() == rop.JUMP + loop.operations = loop.operations[:-1] self.import_state(start_label) self.optimizer.propagate_all_forward(clear=False) - if not stop_label: - self.optimizer.flush() - loop.operations = self.optimizer.get_newoperations() + if self.jump_to_already_compiled_trace(jumpop): return - elif not start_label: - try: - self.optimizer.send_extra_operation(stop_label) - except RetraceLoop: - pass - else: - self.optimizer.flush() - loop.operations = self.optimizer.get_newoperations() - return + # Failed to find a compiled trace to jump to, produce a label instead + cell_token = jumpop.getdescr() + assert isinstance(cell_token, JitCellToken) + stop_label = ResOperation(rop.LABEL, jumpop.getarglist(), None, TargetToken(cell_token)) + if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() loop.operations = self.optimizer.get_newoperations() self.export_state(stop_label) - loop.operations.append(stop_label) + loop.operations.append(stop_label) else: - assert stop_label.getdescr().procedure_token is start_label.getdescr().procedure_token + assert stop_label.getdescr().cell_token is start_label.getdescr().cell_token jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=start_label.getdescr()) self.close_loop(jumpop) @@ -430,8 +423,82 @@ if box in self.optimizer.values: box = self.optimizer.values[box].force_box(self.optimizer) jumpargs.append(box) + + def jump_to_already_compiled_trace(self, jumpop): + assert jumpop.getopnum() == rop.JUMP + cell_token = jumpop.getdescr() + + assert isinstance(cell_token, JitCellToken) + if not cell_token.target_tokens: + return False + + args = jumpop.getarglist() + modifier = VirtualStateAdder(self.optimizer) + virtual_state = modifier.get_virtual_state(args) + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") + + for target in procedure_token.target_tokens: + if not target.virtual_state: + continue + ok = False + extra_guards = [] + + bad = {} + debugmsg = 'Did not match ' + if target.virtual_state.generalization_of(virtual_state, bad): + ok = True + debugmsg = 'Matched ' + else: + try: + cpu = self.optimizer.cpu + target.virtual_state.generate_guards(virtual_state, + args, cpu, + extra_guards) + + ok = True + debugmsg = 'Guarded to match ' + except InvalidLoop: + pass + target.virtual_state.debug_print(debugmsg, bad) + + if ok: + debug_stop('jit-log-virtualstate') + + values = [self.getvalue(arg) + for arg in jumpop.getarglist()] + args = target.virtual_state.make_inputargs(values, self.optimizer, + keyboxes=True) + short_inputargs = target.short_preamble[0].getarglist() + inliner = Inliner(short_inputargs, args) + + for guard in extra_guards: + if guard.is_guard(): + descr = target.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(descr) + guard.setdescr(descr) + self.emit_operation(guard) + + try: + for shop in target.short_preamble[1:]: + newop = inliner.inline_op(shop) + self.emit_operation(newop) + except InvalidLoop: + debug_print("Inlining failed unexpectedly", + "jumping to preamble instead") + assert False, "FIXME: Construct jump op" + self.emit_operation(op) + return True + debug_stop('jit-log-virtualstate') + + retraced_count = procedure_token.retraced_count + limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit + if not self.retraced and retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48822:9a23b1fe6986 Date: 2011-11-06 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/9a23b1fe6986/ Log: hg merge diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -24,7 +24,7 @@ from pypy.jit.metainterp.jitprof import ABORT_BRIDGE raise SwitchToBlackhole(ABORT_BRIDGE) -def show_loop(metainterp_sd, loop=None, error=None): +def show_procedures(metainterp_sd, procedure=None, error=None): # debugging if option.view or option.viewloops: if error: @@ -33,11 +33,12 @@ errmsg += ': ' + str(error) else: errmsg = None - if loop is None: # or type(loop) is TerminatingLoop: - extraloops = [] + if procedure is None: + extraprocedures = [] else: - extraloops = [loop] - metainterp_sd.stats.view(errmsg=errmsg, extraloops=extraloops) + extraprocedures = [procedure] + metainterp_sd.stats.view(errmsg=errmsg, + extraprocedures=extraprocedures) def create_empty_loop(metainterp, name_prefix=''): name = metainterp.staticdata.stats.name_for_new_loop() @@ -78,8 +79,6 @@ if descr.procedure_token is not looptoken: looptoken.record_jump_to(descr.procedure_token) op._descr = None # clear reference, mostly for tests - if not we_are_translated(): - op._jumptarget_number = descr.procedure_token.number # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -194,7 +193,7 @@ globaldata.loopnumbering += 1 if not we_are_translated(): - show_loop(metainterp_sd, loop) + show_procedures(metainterp_sd, loop) loop.check_consistency() operations = get_deep_immutable_oplist(loop.operations) @@ -225,7 +224,7 @@ jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, original_loop_token, operations, n) if not we_are_translated(): - show_loop(metainterp_sd) + show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) metainterp_sd.profiler.start_backend() diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,8 +12,9 @@ def get_display_text(self): return None -def display_loops(loops, errmsg=None, highlight_loops={}): - graphs = [(loop, highlight_loops.get(loop, 0)) for loop in loops] +def display_procedures(procedures, errmsg=None, highlight_procedures={}): + graphs = [(procedure, highlight_procedures.get(procedure, 0)) + for procedure in procedures] for graph, highlight in graphs: for op in graph.get_operations(): if is_interesting_guard(op): @@ -31,12 +32,6 @@ def compute(self, graphs, errmsg=None): resopgen = ResOpGen() for graph, highlight in graphs: - if getattr(graph, 'token', None) is not None: - resopgen.jumps_to_graphs[graph.token] = graph - if getattr(graph, '_looptoken_number', None) is not None: - resopgen.jumps_to_graphs[graph._looptoken_number] = graph - - for graph, highlight in graphs: resopgen.add_graph(graph, highlight) if errmsg: resopgen.set_errmsg(errmsg) @@ -54,7 +49,7 @@ self.block_starters = {} # {graphindex: {set-of-operation-indices}} self.all_operations = {} self.errmsg = None - self.jumps_to_graphs = {} + self.target_tokens = {} def op_name(self, graphindex, opindex): return 'g%dop%d' % (graphindex, opindex) @@ -73,16 +68,21 @@ for graphindex in range(len(self.graphs)): self.block_starters[graphindex] = {0: True} for graphindex, graph in enumerate(self.graphs): - last_was_mergepoint = False + mergepointblock = None for i, op in enumerate(graph.get_operations()): if is_interesting_guard(op): self.mark_starter(graphindex, i+1) if op.getopnum() == rop.DEBUG_MERGE_POINT: - if not last_was_mergepoint: - last_was_mergepoint = True - self.mark_starter(graphindex, i) + if mergepointblock is None: + mergepointblock = i + elif op.getopnum() == rop.LABEL: + self.mark_starter(graphindex, i) + self.target_tokens[op.getdescr()] = (graphindex, i) + mergepointblock = i else: - last_was_mergepoint = False + if mergepointblock is not None: + self.mark_starter(graphindex, mergepointblock) + mergepointblock = None def set_errmsg(self, errmsg): self.errmsg = errmsg @@ -172,24 +172,10 @@ (graphindex, opindex)) break if op.getopnum() == rop.JUMP: - tgt_g = -1 - tgt = None - tgt_number = getattr(op, '_jumptarget_number', None) - if tgt_number is not None: - tgt = self.jumps_to_graphs.get(tgt_number) - else: - tgt_descr = op.getdescr() - if tgt_descr is None: - tgt_g = graphindex - else: - tgt = self.jumps_to_graphs.get(tgt_descr.number) - if tgt is None: - tgt = self.jumps_to_graphs.get(tgt_descr) - if tgt is not None: - tgt_g = self.graphs.index(tgt) - if tgt_g != -1: + tgt_descr = op.getdescr() + if tgt_descr in self.target_tokens: self.genedge((graphindex, opstartindex), - (tgt_g, 0), + self.target_tokens[tgt_descr], weight="0") lines.append("") label = "\\l".join(lines) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -806,7 +806,7 @@ return self.operations def get_display_text(self): # for graphpage.py - return self.name + return self.name + '\n' + repr(self.inputargs) def show(self, errmsg=None): "NOT_RPYTHON" @@ -1069,19 +1069,19 @@ if option.view: self.view() - def view(self, errmsg=None, extraloops=[]): - from pypy.jit.metainterp.graphpage import display_loops - loops = self.get_all_loops()[:] - for loop in extraloops: - if loop in loops: - loops.remove(loop) - loops.append(loop) - highlight_loops = dict.fromkeys(extraloops, 1) - for loop in loops: - if hasattr(loop, '_looptoken_number') and ( - loop._looptoken_number in self.invalidated_token_numbers): - highlight_loops.setdefault(loop, 2) - display_loops(loops, errmsg, highlight_loops) + def view(self, errmsg=None, extraprocedures=[]): + from pypy.jit.metainterp.graphpage import display_procedures + procedures = self.get_all_loops()[:] + for procedure in extraprocedures: + if procedure in procedures: + procedures.remove(procedure) + procedures.append(procedure) + highlight_procedures = dict.fromkeys(extraprocedures, 1) + for procedure in procedures: + if hasattr(procedure, '_looptoken_number') and ( + procedure._looptoken_number in self.invalidated_token_numbers): + highlight_procedures.setdefault(procedure, 2) + display_procedures(procedures, errmsg, highlight_procedures) # ---------------------------------------------------------------- From noreply at buildbot.pypy.org Sun Nov 6 14:18:55 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 14:18:55 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: adjusted the overflow checks Message-ID: <20111106131855.D69FE820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48823:f8438a89169b Date: 2011-11-06 13:24 +0100 http://bitbucket.org/pypy/pypy/changeset/f8438a89169b/ Log: adjusted the overflow checks diff --git a/pypy/rpython/rint.py b/pypy/rpython/rint.py --- a/pypy/rpython/rint.py +++ b/pypy/rpython/rint.py @@ -7,7 +7,8 @@ SignedLongLong, build_number, Number, cast_primitive, typeOf from pypy.rpython.rmodel import IntegerRepr, inputconst from pypy.rpython.robject import PyObjRepr, pyobj_repr -from pypy.rlib.rarithmetic import intmask, r_int, r_uint, r_ulonglong, r_longlong +from pypy.rlib.rarithmetic import intmask, r_int, r_uint, r_ulonglong, \ + r_longlong, is_emulated_long from pypy.rpython.error import TyperError, MissingRTypeOperation from pypy.rpython.rmodel import log from pypy.rlib import objectmodel @@ -440,6 +441,11 @@ Unsigned: ('RPyLong_AsUnsignedLong', lambda pyo: r_uint(pyo._obj.value)), Signed: ('PyInt_AsLong', lambda pyo: int(pyo._obj.value)) } +if is_emulated_long: # win64 + py_to_ll_conversion_functions.update( { + Unsigned: ('RPyLong_AsUnsignedLongLong', lambda pyo: r_ulonglong(pyo._obj.value)), + Signed: ('RPyLong_AsLongLong', lambda pyo: r_longlong(pyo._obj.value)), + } ) ll_to_py_conversion_functions = { UnsignedLongLong: ('PyLong_FromUnsignedLongLong', lambda i: pyobjectptr(i)), @@ -447,6 +453,11 @@ Unsigned: ('PyLong_FromUnsignedLong', lambda i: pyobjectptr(i)), Signed: ('PyInt_FromLong', lambda i: pyobjectptr(i)), } +if is_emulated_long: # win64 + ll_to_py_conversion_functions.update( { + Unsigned: ('PyLong_FromUnsignedLongLong', lambda i: pyobjectptr(i)), + Signed: ('PyLong_FromLongLong', lambda i: pyobjectptr(i)), + } ) class __extend__(pairtype(PyObjRepr, IntegerRepr)): diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -2,13 +2,22 @@ /**************************************************************/ /*** this is included before any code produced by genc.py ***/ - #ifdef PYPY_STANDALONE # include "src/commondefs.h" #else # include "Python.h" #endif +#ifdef _WIN64 +# define new_long __int64 +# define NEW_LONG_MIN LLONG_MIN +# define NEW_LONG_MAX LLONG_MAX +#else +# define new_log long +# define NEW_LONG_MIN LONG_MIN +# define NEW_LONG_MAX LONG_MAX +#endif + #ifdef _WIN32 # include /* needed, otherwise _lseeki64 truncates to 32-bits (??) */ #endif diff --git a/pypy/translator/c/src/int.h b/pypy/translator/c/src/int.h --- a/pypy/translator/c/src/int.h +++ b/pypy/translator/c/src/int.h @@ -5,18 +5,24 @@ /*** unary operations ***/ +/************ win64 support: + 'new_long' must be defined as + __int64 in case of win64 + long in all other cases + */ + #define OP_INT_IS_TRUE(x,r) r = ((x) != 0) #define OP_INT_INVERT(x,r) r = ~(x) #define OP_INT_NEG(x,r) r = -(x) #define OP_INT_NEG_OVF(x,r) \ - if ((x) == LONG_MIN) FAIL_OVF("integer negate"); \ + if ((x) == NEW_LONG_MIN) FAIL_OVF("integer negate"); \ OP_INT_NEG(x,r) #define OP_INT_ABS(x,r) r = (x) >= 0 ? x : -(x) #define OP_INT_ABS_OVF(x,r) \ - if ((x) == LONG_MIN) FAIL_OVF("integer absolute"); \ + if ((x) == NEW_LONG_MIN) FAIL_OVF("integer absolute"); \ OP_INT_ABS(x,r) /*** binary operations ***/ @@ -33,8 +39,8 @@ for the case of a == 0 (both subtractions are then constant-folded). Note that the following line only works if a <= c in the first place, which we assume is true. */ -#define OP_INT_BETWEEN(a,b,c,r) r = (((unsigned long)b - (unsigned long)a) \ - < ((unsigned long)c - (unsigned long)a)) +#define OP_INT_BETWEEN(a,b,c,r) r = (((unsigned new_long)b - (unsigned new_long)a) \ + < ((unsigned new_long)c - (unsigned new_long)a)) /* addition, subtraction */ @@ -42,22 +48,22 @@ /* cast to avoid undefined behaviour on overflow */ #define OP_INT_ADD_OVF(x,y,r) \ - r = (long)((unsigned long)x + y); \ + r = (new_long)((unsigned new_long)x + y); \ if ((r^x) < 0 && (r^y) < 0) FAIL_OVF("integer addition") #define OP_INT_ADD_NONNEG_OVF(x,y,r) /* y can be assumed >= 0 */ \ - r = (long)((unsigned long)x + y); \ + r = (new_long)((unsigned new_long)x + y); \ if ((r&~x) < 0) FAIL_OVF("integer addition") #define OP_INT_SUB(x,y,r) r = (x) - (y) #define OP_INT_SUB_OVF(x,y,r) \ - r = (long)((unsigned long)x - y); \ + r = (new_long)((unsigned new_long)x - y); \ if ((r^x) < 0 && (r^~y) < 0) FAIL_OVF("integer subtraction") #define OP_INT_MUL(x,y,r) r = (x) * (y) -#if SIZEOF_LONG * 2 <= SIZEOF_LONG_LONG +#if SIZEOF_LONG * 2 <= SIZEOF_LONG_LONG && !defined(_WIN64) #define OP_INT_MUL_OVF(x,y,r) \ { \ long long _lr = (long long)x * y; \ @@ -78,7 +84,7 @@ #define OP_INT_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONG_BIT); \ - r = Py_ARITHMETIC_RIGHT_SHIFT(long, x, (y)) + r = Py_ARITHMETIC_RIGHT_SHIFT(new_long, x, (y)) #define OP_UINT_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONG_BIT); \ r = (x) >> (y) #define OP_LLONG_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONGLONG_BIT); \ @@ -98,7 +104,7 @@ #define OP_INT_LSHIFT_OVF(x,y,r) \ OP_INT_LSHIFT(x,y,r); \ - if ((x) != Py_ARITHMETIC_RIGHT_SHIFT(long, r, (y))) \ + if ((x) != Py_ARITHMETIC_RIGHT_SHIFT(new_long, r, (y))) \ FAIL_OVF("x< Author: Christian Tismer Branch: win64_gborg Changeset: r48824:0944b1ca1861 Date: 2011-11-06 14:13 +0100 http://bitbucket.org/pypy/pypy/changeset/0944b1ca1861/ Log: Adjusted 'long' in most c/src files, but tried carefully not to mix up things where CPython is involved. diff --git a/pypy/translator/c/src/address.h b/pypy/translator/c/src/address.h --- a/pypy/translator/c/src/address.h +++ b/pypy/translator/c/src/address.h @@ -16,5 +16,5 @@ #define OP_ADR_LT(x,y,r) r = ((x) < (y)) #define OP_ADR_GE(x,y,r) r = ((x) >= (y)) -#define OP_CAST_ADR_TO_INT(x, mode, r) r = ((long)x) +#define OP_CAST_ADR_TO_INT(x, mode, r) r = ((new_long)x) #define OP_CAST_INT_TO_ADR(x, r) r = ((void *)(x)) diff --git a/pypy/translator/c/src/asm_gcc_x86_64.h b/pypy/translator/c/src/asm_gcc_x86_64.h --- a/pypy/translator/c/src/asm_gcc_x86_64.h +++ b/pypy/translator/c/src/asm_gcc_x86_64.h @@ -2,7 +2,7 @@ */ #define READ_TIMESTAMP(val) do { \ - unsigned long _rax, _rdx; \ + unsigned new_long _rax, _rdx; \ asm volatile("rdtsc" : "=a"(_rax), "=d"(_rdx)); \ val = (_rdx << 32) | _rax; \ } while (0) diff --git a/pypy/translator/c/src/float.h b/pypy/translator/c/src/float.h --- a/pypy/translator/c/src/float.h +++ b/pypy/translator/c/src/float.h @@ -31,8 +31,8 @@ /*** conversions ***/ -#define OP_CAST_FLOAT_TO_INT(x,r) r = (long)(x) -#define OP_CAST_FLOAT_TO_UINT(x,r) r = (unsigned long)(x) +#define OP_CAST_FLOAT_TO_INT(x,r) r = (new_long)(x) +#define OP_CAST_FLOAT_TO_UINT(x,r) r = (unsigned new_long)(x) #define OP_CAST_INT_TO_FLOAT(x,r) r = (double)(x) #define OP_CAST_UINT_TO_FLOAT(x,r) r = (double)(x) #define OP_CAST_LONGLONG_TO_FLOAT(x,r) r = (double)(x) diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -11,11 +11,9 @@ #ifdef _WIN64 # define new_long __int64 # define NEW_LONG_MIN LLONG_MIN -# define NEW_LONG_MAX LLONG_MAX #else -# define new_log long +# define new_long long # define NEW_LONG_MIN LONG_MIN -# define NEW_LONG_MAX LONG_MAX #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/int.h b/pypy/translator/c/src/int.h --- a/pypy/translator/c/src/int.h +++ b/pypy/translator/c/src/int.h @@ -6,9 +6,16 @@ /*** unary operations ***/ /************ win64 support: + 'new_long' must be defined as - __int64 in case of win64 - long in all other cases + + __int64 in case of win64 + long in all other cases + + 'NEW_LONG_MIN' must be defined as + + LLONG_MIN in case of win64 + LONG_MIN in all other cases */ #define OP_INT_IS_TRUE(x,r) r = ((x) != 0) diff --git a/pypy/translator/c/src/mem.h b/pypy/translator/c/src/mem.h --- a/pypy/translator/c/src/mem.h +++ b/pypy/translator/c/src/mem.h @@ -53,7 +53,7 @@ extern void* __gcmapstart; extern void* __gcmapend; extern char* __gccallshapes; -extern long pypy_asm_stackwalk(void*, void*); +extern new_long pypy_asm_stackwalk(void*, void*); /* With the msvc Microsoft Compiler, the optimizer seems free to move any code (even asm) that involves local memory (registers and stack). @@ -66,7 +66,7 @@ pypy_asm_gcroot(void* _r1) { static volatile int _constant_always_one_ = 1; - (long)_r1 *= _constant_always_one_; + (new_long)_r1 *= _constant_always_one_; _ReadWriteBarrier(); return _r1; } @@ -86,7 +86,7 @@ /* used by pypy.rlib.rstack, but also by asmgcc */ -#define OP_STACK_CURRENT(r) r = (long)&r +#define OP_STACK_CURRENT(r) r = (new_long)&r #define RAW_MALLOC_ZERO_FILLED 0 diff --git a/pypy/translator/c/src/obmalloc.c b/pypy/translator/c/src/obmalloc.c --- a/pypy/translator/c/src/obmalloc.c +++ b/pypy/translator/c/src/obmalloc.c @@ -224,10 +224,10 @@ #define uint unsigned int /* assuming >= 16 bits */ #undef ulong -#define ulong unsigned long /* assuming >= 32 bits */ +#define ulong unsigned new_long /* assuming >= 32 bits */ #undef uptr -#define uptr unsigned long +#define uptr unsigned new_long /* When you say memory, my mind reasons in terms of (pointers to) blocks */ typedef uchar block; diff --git a/pypy/translator/c/src/rtyper.h b/pypy/translator/c/src/rtyper.h --- a/pypy/translator/c/src/rtyper.h +++ b/pypy/translator/c/src/rtyper.h @@ -30,7 +30,7 @@ char *RPyString_AsCharP(RPyString *rps) { - long len = RPyString_Size(rps); + new_long len = RPyString_Size(rps); struct _RPyString_dump_t *dump = \ malloc(sizeof(struct _RPyString_dump_t) + len); if (!dump) diff --git a/pypy/translator/c/src/signals.h b/pypy/translator/c/src/signals.h --- a/pypy/translator/c/src/signals.h +++ b/pypy/translator/c/src/signals.h @@ -54,7 +54,7 @@ /* When a signal is received, pypysig_counter is set to -1. */ /* This is a struct for the JIT. See interp_signal.py. */ struct pypysig_long_struct { - long value; + new_long value; }; extern struct pypysig_long_struct pypysig_counter; diff --git a/pypy/translator/c/src/stack.h b/pypy/translator/c/src/stack.h --- a/pypy/translator/c/src/stack.h +++ b/pypy/translator/c/src/stack.h @@ -12,17 +12,17 @@ #include "thread.h" extern char *_LLstacktoobig_stack_end; -extern long _LLstacktoobig_stack_length; +extern new_long _LLstacktoobig_stack_length; extern char _LLstacktoobig_report_error; -char LL_stack_too_big_slowpath(long); /* returns 0 (ok) or 1 (too big) */ +char LL_stack_too_big_slowpath(new_long); /* returns 0 (ok) or 1 (too big) */ void LL_stack_set_length_fraction(double); /* some macros referenced from pypy.rlib.rstack */ -#define LL_stack_get_end() ((long)_LLstacktoobig_stack_end) +#define LL_stack_get_end() ((new_long)_LLstacktoobig_stack_end) #define LL_stack_get_length() _LLstacktoobig_stack_length -#define LL_stack_get_end_adr() ((long)&_LLstacktoobig_stack_end) /* JIT */ -#define LL_stack_get_length_adr() ((long)&_LLstacktoobig_stack_length)/* JIT */ +#define LL_stack_get_end_adr() ((new_long)&_LLstacktoobig_stack_end) /* JIT */ +#define LL_stack_get_length_adr() ((new_long)&_LLstacktoobig_stack_length)/* JIT */ #define LL_stack_criticalcode_start() (_LLstacktoobig_report_error = 0) #define LL_stack_criticalcode_stop() (_LLstacktoobig_report_error = 1) @@ -41,18 +41,18 @@ /* the current stack is in the interval [end-length:end]. We assume a stack that grows downward here. */ char *_LLstacktoobig_stack_end = NULL; -long _LLstacktoobig_stack_length = MAX_STACK_SIZE; +new_long _LLstacktoobig_stack_length = MAX_STACK_SIZE; char _LLstacktoobig_report_error = 1; static RPyThreadStaticTLS end_tls_key; void LL_stack_set_length_fraction(double fraction) { - _LLstacktoobig_stack_length = (long)(MAX_STACK_SIZE * fraction); + _LLstacktoobig_stack_length = (new_long)(MAX_STACK_SIZE * fraction); } -char LL_stack_too_big_slowpath(long current) +char LL_stack_too_big_slowpath(new_long current) { - long diff, max_stack_size; + new_long diff, max_stack_size; char *baseptr, *curptr = (char*)current; /* The stack_end variable is updated to match the current value @@ -81,12 +81,12 @@ } else { diff = baseptr - curptr; - if (((unsigned long)diff) <= (unsigned long)max_stack_size) { + if (((unsigned new_long)diff) <= (unsigned new_long)max_stack_size) { /* within bounds, probably just had a thread switch */ _LLstacktoobig_stack_end = baseptr; return 0; } - if (((unsigned long)-diff) <= (unsigned long)max_stack_size) { + if (((unsigned new_long)-diff) <= (unsigned new_long)max_stack_size) { /* stack underflowed: the initial estimation of the stack base must be revised */ } diff --git a/pypy/translator/c/src/thread.h b/pypy/translator/c/src/thread.h --- a/pypy/translator/c/src/thread.h +++ b/pypy/translator/c/src/thread.h @@ -37,8 +37,8 @@ #endif -long RPyGilAllocate(void); -long RPyGilYieldThread(void); +new_long RPyGilAllocate(void); +new_long RPyGilYieldThread(void); void RPyGilRelease(void); void RPyGilAcquire(void); diff --git a/pypy/translator/c/src/thread_nt.h b/pypy/translator/c/src/thread_nt.h --- a/pypy/translator/c/src/thread_nt.h +++ b/pypy/translator/c/src/thread_nt.h @@ -17,7 +17,7 @@ typedef struct { void (*func)(void); - long id; + new_long id; HANDLE done; } callobj; @@ -28,7 +28,7 @@ } NRMUTEX, *PNRMUTEX ; /* prototypes */ -long RPyThreadStart(void (*func)(void)); +new_long RPyThreadStart(void (*func)(void)); BOOL InitializeNonRecursiveMutex(PNRMUTEX mutex); VOID DeleteNonRecursiveMutex(PNRMUTEX mutex); DWORD EnterNonRecursiveMutex(PNRMUTEX mutex, BOOL wait); @@ -36,15 +36,15 @@ void RPyOpaqueDealloc_ThreadLock(struct RPyOpaque_ThreadLock *lock); int RPyThreadAcquireLock(struct RPyOpaque_ThreadLock *lock, int waitflag); void RPyThreadReleaseLock(struct RPyOpaque_ThreadLock *lock); -long RPyThreadGetStackSize(void); -long RPyThreadSetStackSize(long); +new_long RPyThreadGetStackSize(void); +new_long RPyThreadSetStackSize(new_long); /* implementations */ #ifndef PYPY_NOT_MAIN_FILE -static long _pypythread_stacksize = 0; +static new_long _pypythread_stacksize = 0; /* * Return the thread Id instead of an handle. The Id is said to uniquely @@ -67,9 +67,9 @@ func(); } -long RPyThreadStart(void (*func)(void)) +new_long RPyThreadStart(void (*func)(void)) { - unsigned long rv; + unsigned new_long rv; callobj obj; obj.id = -1; /* guilty until proved innocent */ @@ -79,7 +79,7 @@ return -1; rv = _beginthread(bootstrap, _pypythread_stacksize, &obj); - if (rv == (unsigned long)-1) { + if (rv == (unsigned new_long)-1) { /* I've seen errno == EAGAIN here, which means "there are * too many threads". */ @@ -100,12 +100,12 @@ #define THREAD_MIN_STACKSIZE 0x8000 /* 32kB */ #define THREAD_MAX_STACKSIZE 0x10000000 /* 256MB */ -long RPyThreadGetStackSize(void) +new_long RPyThreadGetStackSize(void) { return _pypythread_stacksize; } -long RPyThreadSetStackSize(long newsize) +new_long RPyThreadSetStackSize(new_long newsize) { if (newsize == 0) { /* set to default */ _pypythread_stacksize = 0; @@ -229,7 +229,7 @@ static CRITICAL_SECTION mutex_gil; static HANDLE cond_gil; -long RPyGilAllocate(void) +new_long RPyGilAllocate(void) { pending_acquires = 0; InitializeCriticalSection(&mutex_gil); @@ -238,7 +238,7 @@ return 1; } -long RPyGilYieldThread(void) +new_long RPyGilYieldThread(void) { /* can be called even before RPyGilAllocate(), but in this case, pending_acquires will be -1 */ From noreply at buildbot.pypy.org Sun Nov 6 14:18:58 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 14:18:58 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111106131858.5047A820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48825:626b202a12c5 Date: 2011-11-06 14:18 +0100 http://bitbucket.org/pypy/pypy/changeset/626b202a12c5/ Log: merge diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -201,6 +201,9 @@ def descr_get_shape(self, space): return space.newtuple([self.descr_len(space)]) + def descr_get_size(self, space): + return space.wrap(self.size) + def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) @@ -607,6 +610,7 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape), + size = GetSetProperty(BaseArray.descr_get_size), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -17,6 +17,12 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + assert array([1, 2, 3]).size == 3 + def test_empty(self): """ Test that empty() works. From noreply at buildbot.pypy.org Sun Nov 6 15:04:01 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 15:04:01 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: reverted a few changes which cannot take external macros Message-ID: <20111106140401.B42EB820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48826:8910ec31f7e2 Date: 2011-11-06 14:37 +0100 http://bitbucket.org/pypy/pypy/changeset/8910ec31f7e2/ Log: reverted a few changes which cannot take external macros diff --git a/pypy/translator/c/src/stack.h b/pypy/translator/c/src/stack.h --- a/pypy/translator/c/src/stack.h +++ b/pypy/translator/c/src/stack.h @@ -12,17 +12,17 @@ #include "thread.h" extern char *_LLstacktoobig_stack_end; -extern new_long _LLstacktoobig_stack_length; +extern long _LLstacktoobig_stack_length; extern char _LLstacktoobig_report_error; -char LL_stack_too_big_slowpath(new_long); /* returns 0 (ok) or 1 (too big) */ +char LL_stack_too_big_slowpath(long); /* returns 0 (ok) or 1 (too big) */ void LL_stack_set_length_fraction(double); /* some macros referenced from pypy.rlib.rstack */ -#define LL_stack_get_end() ((new_long)_LLstacktoobig_stack_end) +#define LL_stack_get_end() ((long)_LLstacktoobig_stack_end) #define LL_stack_get_length() _LLstacktoobig_stack_length -#define LL_stack_get_end_adr() ((new_long)&_LLstacktoobig_stack_end) /* JIT */ -#define LL_stack_get_length_adr() ((new_long)&_LLstacktoobig_stack_length)/* JIT */ +#define LL_stack_get_end_adr() ((long)&_LLstacktoobig_stack_end) /* JIT */ +#define LL_stack_get_length_adr() ((long)&_LLstacktoobig_stack_length)/* JIT */ #define LL_stack_criticalcode_start() (_LLstacktoobig_report_error = 0) #define LL_stack_criticalcode_stop() (_LLstacktoobig_report_error = 1) @@ -41,18 +41,18 @@ /* the current stack is in the interval [end-length:end]. We assume a stack that grows downward here. */ char *_LLstacktoobig_stack_end = NULL; -new_long _LLstacktoobig_stack_length = MAX_STACK_SIZE; +long _LLstacktoobig_stack_length = MAX_STACK_SIZE; char _LLstacktoobig_report_error = 1; static RPyThreadStaticTLS end_tls_key; void LL_stack_set_length_fraction(double fraction) { - _LLstacktoobig_stack_length = (new_long)(MAX_STACK_SIZE * fraction); + _LLstacktoobig_stack_length = (long)(MAX_STACK_SIZE * fraction); } -char LL_stack_too_big_slowpath(new_long current) +char LL_stack_too_big_slowpath(long current) { - new_long diff, max_stack_size; + long diff, max_stack_size; char *baseptr, *curptr = (char*)current; /* The stack_end variable is updated to match the current value @@ -81,12 +81,12 @@ } else { diff = baseptr - curptr; - if (((unsigned new_long)diff) <= (unsigned new_long)max_stack_size) { + if (((unsigned long)diff) <= (unsigned long)max_stack_size) { /* within bounds, probably just had a thread switch */ _LLstacktoobig_stack_end = baseptr; return 0; } - if (((unsigned new_long)-diff) <= (unsigned new_long)max_stack_size) { + if (((unsigned long)-diff) <= (unsigned long)max_stack_size) { /* stack underflowed: the initial estimation of the stack base must be revised */ } diff --git a/pypy/translator/c/src/thread.h b/pypy/translator/c/src/thread.h --- a/pypy/translator/c/src/thread.h +++ b/pypy/translator/c/src/thread.h @@ -37,8 +37,8 @@ #endif -new_long RPyGilAllocate(void); -new_long RPyGilYieldThread(void); +long RPyGilAllocate(void); +long RPyGilYieldThread(void); void RPyGilRelease(void); void RPyGilAcquire(void); diff --git a/pypy/translator/c/src/thread_nt.h b/pypy/translator/c/src/thread_nt.h --- a/pypy/translator/c/src/thread_nt.h +++ b/pypy/translator/c/src/thread_nt.h @@ -17,7 +17,7 @@ typedef struct { void (*func)(void); - new_long id; + long id; HANDLE done; } callobj; @@ -28,7 +28,7 @@ } NRMUTEX, *PNRMUTEX ; /* prototypes */ -new_long RPyThreadStart(void (*func)(void)); +long RPyThreadStart(void (*func)(void)); BOOL InitializeNonRecursiveMutex(PNRMUTEX mutex); VOID DeleteNonRecursiveMutex(PNRMUTEX mutex); DWORD EnterNonRecursiveMutex(PNRMUTEX mutex, BOOL wait); @@ -36,15 +36,15 @@ void RPyOpaqueDealloc_ThreadLock(struct RPyOpaque_ThreadLock *lock); int RPyThreadAcquireLock(struct RPyOpaque_ThreadLock *lock, int waitflag); void RPyThreadReleaseLock(struct RPyOpaque_ThreadLock *lock); -new_long RPyThreadGetStackSize(void); -new_long RPyThreadSetStackSize(new_long); +long RPyThreadGetStackSize(void); +long RPyThreadSetStackSize(long); /* implementations */ #ifndef PYPY_NOT_MAIN_FILE -static new_long _pypythread_stacksize = 0; +static long _pypythread_stacksize = 0; /* * Return the thread Id instead of an handle. The Id is said to uniquely @@ -67,9 +67,9 @@ func(); } -new_long RPyThreadStart(void (*func)(void)) +long RPyThreadStart(void (*func)(void)) { - unsigned new_long rv; + unsigned long rv; callobj obj; obj.id = -1; /* guilty until proved innocent */ @@ -79,7 +79,7 @@ return -1; rv = _beginthread(bootstrap, _pypythread_stacksize, &obj); - if (rv == (unsigned new_long)-1) { + if (rv == (unsigned long)-1) { /* I've seen errno == EAGAIN here, which means "there are * too many threads". */ @@ -100,12 +100,12 @@ #define THREAD_MIN_STACKSIZE 0x8000 /* 32kB */ #define THREAD_MAX_STACKSIZE 0x10000000 /* 256MB */ -new_long RPyThreadGetStackSize(void) +long RPyThreadGetStackSize(void) { return _pypythread_stacksize; } -new_long RPyThreadSetStackSize(new_long newsize) +long RPyThreadSetStackSize(long newsize) { if (newsize == 0) { /* set to default */ _pypythread_stacksize = 0; @@ -229,7 +229,7 @@ static CRITICAL_SECTION mutex_gil; static HANDLE cond_gil; -new_long RPyGilAllocate(void) +long RPyGilAllocate(void) { pending_acquires = 0; InitializeCriticalSection(&mutex_gil); @@ -238,7 +238,7 @@ return 1; } -new_long RPyGilYieldThread(void) +long RPyGilYieldThread(void) { /* can be called even before RPyGilAllocate(), but in this case, pending_acquires will be -1 */ From noreply at buildbot.pypy.org Sun Nov 6 15:04:02 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 15:04:02 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: modulo 4 tests (flot/unicode conversion), it all works. Message-ID: <20111106140402.E9AD0820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48827:c4ab5a26c418 Date: 2011-11-06 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/c4ab5a26c418/ Log: modulo 4 tests (flot/unicode conversion), it all works. Renamed stuff to 'Signed', 'Unsigned' after a suggestion from Armin. diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -72,7 +72,8 @@ """get the bit pattern for a long, adjusted to pointer size""" return struct.pack(_long_typecode, x) -# used in tests for ctypes: +# used in tests for ctypes and for genc and friends +# to handle the win64 special case: is_emulated_long = _long_typecode <> 'l' LONG_BIT = _get_long_bit() diff --git a/pypy/translator/c/src/address.h b/pypy/translator/c/src/address.h --- a/pypy/translator/c/src/address.h +++ b/pypy/translator/c/src/address.h @@ -16,5 +16,5 @@ #define OP_ADR_LT(x,y,r) r = ((x) < (y)) #define OP_ADR_GE(x,y,r) r = ((x) >= (y)) -#define OP_CAST_ADR_TO_INT(x, mode, r) r = ((new_long)x) +#define OP_CAST_ADR_TO_INT(x, mode, r) r = ((Signed)x) #define OP_CAST_INT_TO_ADR(x, r) r = ((void *)(x)) diff --git a/pypy/translator/c/src/asm_gcc_x86_64.h b/pypy/translator/c/src/asm_gcc_x86_64.h --- a/pypy/translator/c/src/asm_gcc_x86_64.h +++ b/pypy/translator/c/src/asm_gcc_x86_64.h @@ -2,7 +2,7 @@ */ #define READ_TIMESTAMP(val) do { \ - unsigned new_long _rax, _rdx; \ + Unsigned _rax, _rdx; \ asm volatile("rdtsc" : "=a"(_rax), "=d"(_rdx)); \ val = (_rdx << 32) | _rax; \ } while (0) diff --git a/pypy/translator/c/src/float.h b/pypy/translator/c/src/float.h --- a/pypy/translator/c/src/float.h +++ b/pypy/translator/c/src/float.h @@ -31,8 +31,8 @@ /*** conversions ***/ -#define OP_CAST_FLOAT_TO_INT(x,r) r = (new_long)(x) -#define OP_CAST_FLOAT_TO_UINT(x,r) r = (unsigned new_long)(x) +#define OP_CAST_FLOAT_TO_INT(x,r) r = (Signed)(x) +#define OP_CAST_FLOAT_TO_UINT(x,r) r = (Unsigned)(x) #define OP_CAST_INT_TO_FLOAT(x,r) r = (double)(x) #define OP_CAST_UINT_TO_FLOAT(x,r) r = (double)(x) #define OP_CAST_LONGLONG_TO_FLOAT(x,r) r = (double)(x) diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -9,12 +9,13 @@ #endif #ifdef _WIN64 -# define new_long __int64 -# define NEW_LONG_MIN LLONG_MIN +# define Signed __int64 +# define SIGNED_MIN LLONG_MIN #else -# define new_long long -# define NEW_LONG_MIN LONG_MIN +# define Signed long +# define SIGNED_MIN LONG_MIN #endif +#define Unsigned unsigned Signed #ifdef _WIN32 # include /* needed, otherwise _lseeki64 truncates to 32-bits (??) */ diff --git a/pypy/translator/c/src/int.h b/pypy/translator/c/src/int.h --- a/pypy/translator/c/src/int.h +++ b/pypy/translator/c/src/int.h @@ -7,12 +7,12 @@ /************ win64 support: - 'new_long' must be defined as + 'Signed' must be defined as __int64 in case of win64 long in all other cases - 'NEW_LONG_MIN' must be defined as + 'SIGNED_MIN' must be defined as LLONG_MIN in case of win64 LONG_MIN in all other cases @@ -23,13 +23,13 @@ #define OP_INT_NEG(x,r) r = -(x) #define OP_INT_NEG_OVF(x,r) \ - if ((x) == NEW_LONG_MIN) FAIL_OVF("integer negate"); \ + if ((x) == SIGNED_MIN) FAIL_OVF("integer negate"); \ OP_INT_NEG(x,r) #define OP_INT_ABS(x,r) r = (x) >= 0 ? x : -(x) #define OP_INT_ABS_OVF(x,r) \ - if ((x) == NEW_LONG_MIN) FAIL_OVF("integer absolute"); \ + if ((x) == SIGNED_MIN) FAIL_OVF("integer absolute"); \ OP_INT_ABS(x,r) /*** binary operations ***/ @@ -46,8 +46,8 @@ for the case of a == 0 (both subtractions are then constant-folded). Note that the following line only works if a <= c in the first place, which we assume is true. */ -#define OP_INT_BETWEEN(a,b,c,r) r = (((unsigned new_long)b - (unsigned new_long)a) \ - < ((unsigned new_long)c - (unsigned new_long)a)) +#define OP_INT_BETWEEN(a,b,c,r) r = (((Unsigned)b - (Unsigned)a) \ + < ((Unsigned)c - (Unsigned)a)) /* addition, subtraction */ @@ -55,17 +55,17 @@ /* cast to avoid undefined behaviour on overflow */ #define OP_INT_ADD_OVF(x,y,r) \ - r = (new_long)((unsigned new_long)x + y); \ + r = (Signed)((Unsigned)x + y); \ if ((r^x) < 0 && (r^y) < 0) FAIL_OVF("integer addition") #define OP_INT_ADD_NONNEG_OVF(x,y,r) /* y can be assumed >= 0 */ \ - r = (new_long)((unsigned new_long)x + y); \ + r = (Signed)((Unsigned)x + y); \ if ((r&~x) < 0) FAIL_OVF("integer addition") #define OP_INT_SUB(x,y,r) r = (x) - (y) #define OP_INT_SUB_OVF(x,y,r) \ - r = (new_long)((unsigned new_long)x - y); \ + r = (Signed)((Unsigned)x - y); \ if ((r^x) < 0 && (r^~y) < 0) FAIL_OVF("integer subtraction") #define OP_INT_MUL(x,y,r) r = (x) * (y) @@ -91,7 +91,7 @@ #define OP_INT_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONG_BIT); \ - r = Py_ARITHMETIC_RIGHT_SHIFT(new_long, x, (y)) + r = Py_ARITHMETIC_RIGHT_SHIFT(Signed, x, (y)) #define OP_UINT_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONG_BIT); \ r = (x) >> (y) #define OP_LLONG_RSHIFT(x,y,r) CHECK_SHIFT_RANGE(y, PYPY_LONGLONG_BIT); \ @@ -111,7 +111,7 @@ #define OP_INT_LSHIFT_OVF(x,y,r) \ OP_INT_LSHIFT(x,y,r); \ - if ((x) != Py_ARITHMETIC_RIGHT_SHIFT(new_long, r, (y))) \ + if ((x) != Py_ARITHMETIC_RIGHT_SHIFT(Signed, r, (y))) \ FAIL_OVF("x<= 16 bits */ #undef ulong -#define ulong unsigned new_long /* assuming >= 32 bits */ +#define ulong Unsigned /* assuming >= 32 bits */ #undef uptr -#define uptr unsigned new_long +#define uptr Unsigned /* When you say memory, my mind reasons in terms of (pointers to) blocks */ typedef uchar block; diff --git a/pypy/translator/c/src/rtyper.h b/pypy/translator/c/src/rtyper.h --- a/pypy/translator/c/src/rtyper.h +++ b/pypy/translator/c/src/rtyper.h @@ -30,7 +30,7 @@ char *RPyString_AsCharP(RPyString *rps) { - new_long len = RPyString_Size(rps); + Signed len = RPyString_Size(rps); struct _RPyString_dump_t *dump = \ malloc(sizeof(struct _RPyString_dump_t) + len); if (!dump) diff --git a/pypy/translator/c/src/signals.h b/pypy/translator/c/src/signals.h --- a/pypy/translator/c/src/signals.h +++ b/pypy/translator/c/src/signals.h @@ -54,7 +54,7 @@ /* When a signal is received, pypysig_counter is set to -1. */ /* This is a struct for the JIT. See interp_signal.py. */ struct pypysig_long_struct { - new_long value; + Signed value; }; extern struct pypysig_long_struct pypysig_counter; From noreply at buildbot.pypy.org Sun Nov 6 16:22:54 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 16:22:54 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: traces from interpreter now working again Message-ID: <20111106152254.549FC820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48828:82923819cf55 Date: 2011-11-06 16:22 +0100 http://bitbucket.org/pypy/pypy/changeset/82923819cf55/ Log: traces from interpreter now working again diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -140,17 +140,17 @@ old, oldindex = faildescr._compiled_fail llimpl.compile_redirect_fail(old, oldindex, c) - def compile_loop(self, inputargs, operations, looptoken, log=True, name=''): + def compile_loop(self, inputargs, operations, jitcell_token, log=True, name=''): """In a real assembler backend, this should assemble the given list of operations. Here we just generate a similar CompiledLoop instance. The code here is RPython, whereas the code in llimpl is not. """ c = llimpl.compile_start() - clt = model.CompiledLoopToken(self, looptoken.number) + clt = model.CompiledLoopToken(self, jitcell_token.number) clt.loop_and_bridges = [c] clt.compiled_version = c - looptoken.compiled_loop_token = clt + jitcell_token.compiled_loop_token = clt self._compile_loop_or_bridge(c, inputargs, operations) def free_loop_and_bridges(self, compiled_loop_token): @@ -180,7 +180,7 @@ if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, descr.arg_types) - if isinstance(descr, history.ProcedureToken): + if isinstance(descr, history.JitCellToken): assert False if op.getopnum() != rop.JUMP: llimpl.compile_add_loop_token(c, descr) diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -47,37 +47,39 @@ return loop -def make_procedure_token(jitdriver_sd): - procedure_token = JitCellToken() - procedure_token.outermost_jitdriver_sd = jitdriver_sd - return procedure_token +def make_jitcell_token(jitdriver_sd): + jitcell_token = JitCellToken() + jitcell_token.outermost_jitdriver_sd = jitdriver_sd + return jitcell_token def record_loop_or_bridge(metainterp_sd, loop): """Do post-backend recordings and cleanups on 'loop'. """ - # get the original loop token (corresponding to 'loop', or if that is - # a bridge, to the loop that this bridge belongs to) - looptoken = loop.token - assert looptoken is not None + # get the original jitcell token corresponding to jitcell form which + # this trace starts + original_jitcell_token = loop.original_jitcell_token + assert original_jitcell_token is not None if metainterp_sd.warmrunnerdesc is not None: # for tests - assert looptoken.generation > 0 # has been registered with memmgr - wref = weakref.ref(looptoken) + assert original_jitcell_token.generation > 0 # has been registered with memmgr + wref = weakref.ref(original_jitcell_token) for op in loop.operations: descr = op.getdescr() if isinstance(descr, ResumeDescr): descr.wref_original_loop_token = wref # stick it there n = descr.index if n >= 0: # we also record the resumedescr number - looptoken.compiled_loop_token.record_faildescr_index(n) + original_jitcell_token.compiled_loop_token.record_faildescr_index(n) elif isinstance(descr, JitCellToken): + # for a CALL_ASSEMBLER ... assert False, "FIXME" elif isinstance(descr, TargetToken): - # for a JUMP or a CALL_ASSEMBLER: record it as a potential jump. + # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated # cases of cycles, but at least it helps in simple tests of # test_memgr.py) - if descr.procedure_token is not looptoken: - looptoken.record_jump_to(descr.procedure_token) + if descr.original_jitcell_token is not original_jitcell_token: + assert descr.original_jitcell_token is not None + original_jitcell_token.record_jump_to(descr.original_jitcell_token) op._descr = None # clear reference, mostly for tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: @@ -85,9 +87,9 @@ qmut.register_loop_token(wref) # XXX maybe we should clear the dictionary here # mostly for tests: make sure we don't keep a reference to the LoopToken - loop.token = None + loop.original_jitcell_token = None if not we_are_translated(): - loop._looptoken_number = looptoken.number + loop._looptoken_number = original_jitcell_token.number # ____________________________________________________________ @@ -184,12 +186,12 @@ old_loop_tokens.append(loop_token) def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): - jitdriver_sd.on_compile(metainterp_sd.logger_ops, loop.token, + original_jitcell_token = loop.original_jitcell_token + jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata - loop_token = loop.token - loop_token.number = n = globaldata.loopnumbering + original_jitcell_token.number = n = globaldata.loopnumbering globaldata.loopnumbering += 1 if not we_are_translated(): @@ -201,7 +203,7 @@ debug_start("jit-backend") try: ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - loop.token, name=loopname) + original_jitcell_token, name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() @@ -216,7 +218,7 @@ metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) # if metainterp_sd.warmrunnerdesc is not None: # for tests - metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(loop.token) + metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(original_jitcell_token) def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): @@ -610,19 +612,18 @@ metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs - procedure_token = make_procedure_token(jitdriver_sd) - new_loop.token = procedure_token + new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd) send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time jitdriver_sd.warmstate.attach_procedure_to_interp( - self.original_greenkey, procedure_token) + self.original_greenkey, jitcell_token) def reset_counter_from_failure(self): pass -def compile_new_bridge(metainterp, resumekey, retraced=False): +def compile_trace(metainterp, resumekey, retraced=False): """Try to compile a new bridge leading from the beginning of the history to some existing place. """ @@ -653,7 +654,7 @@ debug_print('InvalidLoop in compile_new_bridge') return None - if new_trace.operations[-1].getopnum() == rop.JUMP: + if new_trace.operations[-1].getopnum() != rop.LABEL: # We managed to create a bridge. Dispatch to resumekey to # know exactly what we must do (ResumeGuardDescr/ResumeFromInterpDescr) target_token = new_trace.operations[-1].getdescr() diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -770,6 +770,7 @@ self.cell_token = cell_token self.virtual_state = None self.exported_state = None + self.original_jitcell_token = None class TreeLoop(object): inputargs = None @@ -782,6 +783,11 @@ raise Exception("TreeLoop.token is killed") token = property(_token, _token) + # This is the jitcell where the trace starts. Labels within the trace might + # belong to some other jitcells in the sens that jumping to this other + # jitcell will result in a jump to the label. + original_jitcell_token = None + def __init__(self, name): self.name = name # self.operations = list of ResOperations @@ -816,6 +822,10 @@ def check_consistency(self): # for testing "NOT_RPYTHON" self.check_consistency_of(self.inputargs, self.operations) + for op in self.operations: + descr = op.getdescr() + if isinstance(descr, TargetToken): + assert descr.original_jitcell_token is self.original_jitcell_token @staticmethod def check_consistency_of(inputargs, operations): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -78,12 +78,16 @@ start_label = None jumpop = loop.operations[-1] - assert jumpop.getopnum() == rop.JUMP - loop.operations = loop.operations[:-1] + if jumpop.getopnum() == rop.JUMP: + loop.operations = loop.operations[:-1] + else: + jumpop = None self.import_state(start_label) self.optimizer.propagate_all_forward(clear=False) + if not jumpop: + return if self.jump_to_already_compiled_trace(jumpop): return diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2114,7 +2114,7 @@ # FIXME: kill TerminatingLoopToken? # FIXME: can we call compile_trace? self.history.record(rop.FINISH, exits, None, descr=loop_tokens[0].finishdescr) - target_loop_token = compile.compile_new_bridge(self, self.resumekey) + target_loop_token = compile.compile_trace(self, self.resumekey) if not target_loop_token: compile.giveup() From noreply at buildbot.pypy.org Sun Nov 6 18:17:04 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 18:17:04 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: corrected the formatting of constants. Message-ID: <20111106171704.1DE46820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48829:0525e812c2ca Date: 2011-11-06 18:07 +0100 http://bitbucket.org/pypy/pypy/changeset/0525e812c2ca/ Log: corrected the formatting of constants. Pretty hackish by a small function that replaces L with LL, but very local and obvious. diff --git a/pypy/translator/c/primitive.py b/pypy/translator/c/primitive.py --- a/pypy/translator/c/primitive.py +++ b/pypy/translator/c/primitive.py @@ -16,6 +16,15 @@ # # Primitives +# win64: we need different constants, since we emulate 64 bit long. +# this function simply replaces 'L' by 'LL' in a format string +if is_emulated_long: + def lll(fmt): + return fmt.replace('L', 'LL') +else: + def lll(fmt): + return fmt + def name_signed(value, db): if isinstance(value, Symbolic): if isinstance(value, FieldOffset): @@ -61,22 +70,22 @@ elif isinstance(value, llgroup.CombinedSymbolic): name = name_small_integer(value.lowpart, db) assert (value.rest & value.MASK) == 0 - return '(%s+%dL)' % (name, value.rest) + return lll('(%s+%dL)') % (name, value.rest) elif isinstance(value, AddressAsInt): - return '((long)%s)' % name_address(value.adr, db) + return '((Signed)%s)' % name_address(value.adr, db) else: raise Exception("unimplemented symbolic %r"%value) if value is None: assert not db.completed return None if value == -sys.maxint-1: # blame C - return '(-%dL-1L)' % sys.maxint + return lll('(-%dL-1L)') % sys.maxint else: - return '%dL' % value + return lll('%dL') % value def name_unsigned(value, db): assert value >= 0 - return '%dUL' % value + return lll('%dUL') % value def name_unsignedlonglong(value, db): assert value >= 0 @@ -190,9 +199,9 @@ PrimitiveType = { SignedLongLong: 'long long @', - Signed: 'long @', + Signed: 'long @', # but see below UnsignedLongLong: 'unsigned long long @', - Unsigned: 'unsigned long @', + Unsigned: 'unsigned long @', # but see below Float: 'double @', SingleFloat: 'float @', LongFloat: 'long double @', @@ -228,11 +237,7 @@ define_c_primitive(rffi.INT, 'int') define_c_primitive(rffi.INT_real, 'int') define_c_primitive(rffi.UINT, 'unsigned int') -if is_emulated_long: # special case for win64 - define_c_primitive(rffi.LONG, '__int64', 'LL') - define_c_primitive(rffi.ULONG, 'unsigned __int64', 'ULL') -else: - define_c_primitive(rffi.LONG, 'long', 'L') - define_c_primitive(rffi.ULONG, 'unsigned long', 'UL') +define_c_primitive(rffi.LONG, 'long', 'L') +define_c_primitive(rffi.ULONG, 'unsigned long', 'UL') define_c_primitive(rffi.LONGLONG, 'long long', 'LL') define_c_primitive(rffi.ULONGLONG, 'unsigned long long', 'ULL') From noreply at buildbot.pypy.org Sun Nov 6 18:17:05 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sun, 6 Nov 2011 18:17:05 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: simplified primitive.py by using the types 'Signed' and 'Unsigned' which are defined in g_prerequisites.h Message-ID: <20111106171705.60A1C820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48830:f977b0b7d913 Date: 2011-11-06 18:12 +0100 http://bitbucket.org/pypy/pypy/changeset/f977b0b7d913/ Log: simplified primitive.py by using the types 'Signed' and 'Unsigned' which are defined in g_prerequisites.h diff --git a/pypy/translator/c/primitive.py b/pypy/translator/c/primitive.py --- a/pypy/translator/c/primitive.py +++ b/pypy/translator/c/primitive.py @@ -199,9 +199,9 @@ PrimitiveType = { SignedLongLong: 'long long @', - Signed: 'long @', # but see below + Signed: 'Signed @', UnsignedLongLong: 'unsigned long long @', - Unsigned: 'unsigned long @', # but see below + Unsigned: 'Unsigned @', Float: 'double @', SingleFloat: 'float @', LongFloat: 'long double @', @@ -213,13 +213,6 @@ GCREF: 'void* @', } -# support for win64, where sizeof(long) == 4 -if is_emulated_long: - PrimitiveType.update( { - Signed: '__int64 @', - Unsigned: 'unsigned __int64 @', - } ) - def define_c_primitive(ll_type, c_name, suffix=''): if ll_type in PrimitiveName: return From noreply at buildbot.pypy.org Sun Nov 6 18:26:16 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 18:26:16 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: test_loop_1 passing Message-ID: <20111106172616.E26A9820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48831:b262b6ae31dd Date: 2011-11-06 16:39 +0100 http://bitbucket.org/pypy/pypy/changeset/b262b6ae31dd/ Log: test_loop_1 passing diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -391,7 +391,7 @@ def compile_add_jump_target(loop, targettoken): loop = _from_opaque(loop) - if isinstance(targettoken, history.ProcedureToken): + if isinstance(targettoken, history.JitCellToken): assert False loop_target = _from_opaque(targettoken.compiled_loop_token.compiled_version) target_opindex = 0 diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -107,18 +107,19 @@ if partial_trace: part = partial_trace - procedure_token = metainterp.get_procedure_token(greenkey) + assert False + procedur_token = metainterp.get_procedure_token(greenkey) assert procedure_token all_target_tokens = [] else: - procedure_token = make_procedure_token(jitdriver_sd) + jitcell_token = make_jitcell_token(jitdriver_sd) part = create_empty_loop(metainterp) part.inputargs = inputargs[:] h_ops = history.operations part.start_resumedescr = start_resumedescr - part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(procedure_token))] + \ + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ [h_ops[i].clone() for i in range(start, len(h_ops))] + \ - [ResOperation(rop.LABEL, jumpargs, None, descr=TargetToken(procedure_token))] + [ResOperation(rop.JUMP, jumpargs, None, descr=jitcell_token)] try: optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) except InvalidLoop: @@ -132,8 +133,8 @@ inliner = Inliner(inputargs, jumpargs) part.operations = [part.operations[-1]] + \ [inliner.inline_op(h_ops[i]) for i in range(start, len(h_ops))] + \ - [ResOperation(rop.LABEL, [inliner.inline_arg(a) for a in jumpargs], - None, descr=TargetToken(procedure_token))] + [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jumpargs], + None, descr=jitcell_token)] all_target_tokens.append(part.operations[0].getdescr()) inputargs = jumpargs jumpargs = part.operations[-1].getarglist() @@ -148,11 +149,13 @@ for box in loop.inputargs: assert isinstance(box, Box) - loop.token = procedure_token - procedure_token.target_tokens = all_target_tokens + loop.original_jitcell_token = jitcell_token + for label in all_target_tokens: + label.original_jitcell_token = jitcell_token + jitcell_token.target_tokens = all_target_tokens send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) - return procedure_token + return jitcell_token if False: # FIXME: full_preamble_needed?? From noreply at buildbot.pypy.org Sun Nov 6 18:26:18 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 18:26:18 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: bridge support Message-ID: <20111106172618.2721B820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48832:9b87dd5eeb7f Date: 2011-11-06 17:05 +0100 http://bitbucket.org/pypy/pypy/changeset/9b87dd5eeb7f/ Log: bridge support diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -426,13 +426,13 @@ # We managed to create a bridge. Attach the new operations # to the corresponding guard_op and compile from there assert metainterp.resumekey_original_loop_token is not None - new_loop.token = metainterp.resumekey_original_loop_token + new_loop.original_jitcell_token = metainterp.resumekey_original_loop_token inputargs = metainterp.history.inputargs if not we_are_translated(): self._debug_suboperations = new_loop.operations send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata, self, inputargs, new_loop.operations, - new_loop.token) + new_loop.original_jitcell_token) def copy_all_attributes_into(self, res): # XXX a bit ugly to have to list them all here diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -442,7 +442,7 @@ debug_start('jit-log-virtualstate') virtual_state.debug_print("Looking for ") - for target in procedure_token.target_tokens: + for target in cell_token.target_tokens: if not target.virtual_state: continue ok = False @@ -481,24 +481,24 @@ descr = target.start_resumedescr.clone_if_mutable() inliner.inline_descr_inplace(descr) guard.setdescr(descr) - self.emit_operation(guard) + self.optimizer.send_extra_operation(guard) try: for shop in target.short_preamble[1:]: newop = inliner.inline_op(shop) - self.emit_operation(newop) + self.optimizer.send_extra_operation(newop) except InvalidLoop: debug_print("Inlining failed unexpectedly", "jumping to preamble instead") assert False, "FIXME: Construct jump op" - self.emit_operation(op) + self.optimizer.send_extra_operation(op) return True debug_stop('jit-log-virtualstate') - retraced_count = procedure_token.retraced_count + retraced_count = cell_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit if not self.retraced and retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48833:d04c6e6f5e44 Date: 2011-11-06 18:25 +0100 http://bitbucket.org/pypy/pypy/changeset/d04c6e6f5e44/ Log: retrace support diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -93,9 +93,9 @@ # ____________________________________________________________ -def compile_procedure(metainterp, greenkey, start, +def compile_loop(metainterp, greenkey, start, inputargs, jumpargs, - start_resumedescr, full_preamble_needed=True, partial_trace=None): + start_resumedescr, full_preamble_needed=True): """Try to compile a new procedure by closing the current history back to the first operation. """ @@ -105,7 +105,7 @@ metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd - if partial_trace: + if False: part = partial_trace assert False procedur_token = metainterp.get_procedure_token(greenkey) @@ -155,7 +155,7 @@ jitcell_token.target_tokens = all_target_tokens send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) - return jitcell_token + return all_target_tokens[0] if False: # FIXME: full_preamble_needed?? @@ -180,6 +180,53 @@ record_loop_or_bridge(metainterp_sd, loop) return loop_token +def compile_retrace(metainterp, greenkey, start, + inputargs, jumpargs, + start_resumedescr, partial_trace, resumekey): + """Try to compile a new procedure by closing the current history back + to the first operation. + """ + from pypy.jit.metainterp.optimizeopt import optimize_trace + + history = metainterp.history + metainterp_sd = metainterp.staticdata + jitdriver_sd = metainterp.jitdriver_sd + + loop_jitcell_token = metainterp.get_procedure_token(greenkey) + assert loop_jitcell_token + assert partial_trace.operations[-1].getopnum() == rop.LABEL + + part = create_empty_loop(metainterp) + part.inputargs = inputargs[:] + part.start_resumedescr = start_resumedescr + h_ops = history.operations + part.operations = [partial_trace.operations[-1]] + \ + [h_ops[i].clone() for i in range(start, len(h_ops))] + \ + [ResOperation(rop.JUMP, jumpargs, None, descr=loop_jitcell_token)] + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + assert part.operations[-1].getopnum() != rop.LABEL + label = part.operations[0] + assert label.getopnum() == rop.LABEL + target_token = label.getdescr() + assert isinstance(target_token, TargetToken) + assert loop_jitcell_token.target_tokens + loop_jitcell_token.target_tokens.append(target_token) + + loop = partial_trace + loop.operations = loop.operations[:-1] + part.operations + + for box in loop.inputargs: + assert isinstance(box, Box) + + target_token = loop.operations[-1].getdescr() + resumekey.compile_and_attach(metainterp, loop) + label.getdescr().original_jitcell_token = loop.original_jitcell_token + record_loop_or_bridge(metainterp_sd, loop) + return target_token + def insert_loop_token(old_loop_tokens, loop_token): # Find where in old_loop_tokens we should insert this new loop_token. # The following algo means "as late as possible, but before another diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -778,6 +778,7 @@ call_pure_results = None logops = None quasi_immutable_deps = None + start_resumedescr = None def _token(*args): raise Exception("TreeLoop.token is killed") diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -116,9 +116,12 @@ jump_args = [self.getvalue(a).get_key_box() for a in original_jump_args] # FIXME: I dont thnik we need this anymore - start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() - assert isinstance(start_resumedescr, ResumeGuardDescr) - start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) + if self.optimizer.loop.start_resumedescr: + start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() + assert isinstance(start_resumedescr, ResumeGuardDescr) + start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) + else: + start_resumedescr = None modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(jump_args) @@ -177,7 +180,8 @@ self.imported_state = exported_state self.inputargs = targetop.getarglist() self.initial_virtual_state = target_token.virtual_state - self.start_resumedescr = target_token.start_resumedescr + #self.start_resumedescr = target_token.start_resumedescr + self.start_resumedescr = self.optimizer.loop.start_resumedescr seen = {} for box in self.inputargs: @@ -324,7 +328,14 @@ for i in range(len(short)): short[i] = inliner.inline_op(short[i]) - target_token.start_resumedescr = target_token.start_resumedescr.clone_if_mutable() + if target_token.start_resumedescr is None: # FIXME: Hack! + target_token.start_resumedescr = self.start_resumedescr.clone_if_mutable() + fix = Inliner(self.optimizer.loop.operations[-1].getarglist(), + self.optimizer.loop.inputargs) + + fix.inline_descr_inplace(target_token.start_resumedescr) + else: + target_token.start_resumedescr = self.start_resumedescr.clone_if_mutable() inliner.inline_descr_inplace(target_token.start_resumedescr) # Forget the values to allow them to be freed @@ -497,7 +508,7 @@ retraced_count = cell_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit - if not self.retraced and retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48834:123a7a37c565 Date: 2011-11-06 18:54 +0100 http://bitbucket.org/pypy/pypy/changeset/123a7a37c565/ Log: fix tests diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -228,7 +228,7 @@ # this can be used after interp_operations if expected is not None: expected = dict(expected) - expected['jump'] = 1 + expected['finish'] = 1 self.metainterp.staticdata.stats.check_history(expected, **isns) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -195,7 +195,7 @@ assert res == 1167 self.check_loop_count(3) self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, - 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'guard_true': 3, 'int_sub': 4, 'jump': 2, 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining2(self): @@ -215,7 +215,7 @@ assert res == 1692 self.check_loop_count(3) self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, - 'guard_true': 3, 'int_sub': 4, 'jump': 4, + 'guard_true': 3, 'int_sub': 4, 'jump': 2, 'int_mul': 2, 'guard_false': 2}) def test_loop_invariant_mul_bridge_maintaining3(self): @@ -257,7 +257,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 252 self.check_loop_count(1) - self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'getfield_gc_pure': 1, 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) @@ -861,7 +861,7 @@ res = self.meta_interp(f, [6, 7]) assert res == 42.0 self.check_loop_count(1) - self.check_resops({'jump': 2, 'float_gt': 2, 'float_add': 2, + self.check_resops({'jump': 1, 'float_gt': 2, 'float_add': 2, 'float_sub': 2, 'guard_true': 2}) def test_print(self): From noreply at buildbot.pypy.org Sun Nov 6 20:58:00 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 Nov 2011 20:58:00 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: converted tests Message-ID: <20111106195800.41FF6820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48835:c16efa936b3b Date: 2011-11-06 20:57 +0100 http://bitbucket.org/pypy/pypy/changeset/c16efa936b3b/ Log: converted tests diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -60,7 +60,8 @@ assert res == f(6, 13) self.check_loop_count(1) if self.enable_opts: - self.check_loops(getfield_gc = 0, setfield_gc = 1) + self.check_resops(setfield_gc=2, getfield_gc=0) + def test_loop_with_two_paths(self): from pypy.rpython.lltypesystem import lltype @@ -180,7 +181,10 @@ assert res == 42 self.check_loop_count(1) # the 'int_eq' and following 'guard' should be constant-folded - self.check_loops(int_eq=0, guard_true=1, guard_false=0) + if 'unroll' in self.enable_opts: + self.check_resops(int_eq=0, guard_true=2, guard_false=0) + else: + self.check_resops(int_eq=0, guard_true=1, guard_false=0) if self.basic: found = 0 for op in get_stats().loops[0]._all_operations(): @@ -643,8 +647,12 @@ res = self.meta_interp(main_interpreter_loop, [1]) assert res == 102 self.check_loop_count(1) - self.check_loops({'int_add' : 3, 'int_gt' : 1, - 'guard_false' : 1, 'jump' : 1}) + if 'unroll' in self.enable_opts: + self.check_resops({'int_add' : 6, 'int_gt' : 2, + 'guard_false' : 2, 'jump' : 2}) + else: + self.check_resops({'int_add' : 3, 'int_gt' : 1, + 'guard_false' : 1, 'jump' : 1}) def test_automatic_promotion(self): myjitdriver = JitDriver(greens = ['i'], @@ -686,7 +694,7 @@ self.check_loop_count(1) # These loops do different numbers of ops based on which optimizer we # are testing with. - self.check_loops(self.automatic_promotion_result) + self.check_resops(self.automatic_promotion_result) def test_can_enter_jit_outside_main_loop(self): myjitdriver = JitDriver(greens=[], reds=['i', 'j', 'a']) diff --git a/pypy/jit/metainterp/test/test_loop_unroll.py b/pypy/jit/metainterp/test/test_loop_unroll.py --- a/pypy/jit/metainterp/test/test_loop_unroll.py +++ b/pypy/jit/metainterp/test/test_loop_unroll.py @@ -8,7 +8,8 @@ enable_opts = ALL_OPTS_NAMES automatic_promotion_result = { - 'int_add' : 3, 'int_gt' : 1, 'guard_false' : 1, 'jump' : 1, + 'int_gt': 2, 'guard_false': 2, 'jump': 2, 'int_add': 6, + 'guard_value': 1 } # ====> test_loop.py diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -143,11 +143,11 @@ f = self.get_interpreter(codes) assert self.meta_interp(f, [0, 0, 0], enable_opts='') == 42 - self.check_loops(int_add = 1, call_may_force = 1, call = 0) + self.check_resops(call_may_force=1, int_add=1, call=0) assert self.meta_interp(f, [0, 0, 0], enable_opts='', inline=True) == 42 - self.check_loops(int_add = 2, call_may_force = 0, call = 0, - guard_no_exception = 0) + self.check_resops(call=0, int_add=2, call_may_force=0, + guard_no_exception=0) def test_inline_jitdriver_check(self): code = "021" @@ -160,7 +160,7 @@ inline=True) == 42 # the call is fully inlined, because we jump to subcode[1], thus # skipping completely the JUMP_BACK in subcode[0] - self.check_loops(call_may_force = 0, call_assembler = 0, call = 0) + self.check_resops(call=0, call_may_force=0, call_assembler=0) def test_guard_failure_in_inlined_function(self): def p(pc, code): @@ -491,10 +491,10 @@ return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) - self.check_loops(call_may_force=1, call=0) + self.check_resops(call=0, call_may_force=1) res = self.meta_interp(main, [1], enable_opts='', trace_limit=TRACE_LIMIT) - self.check_loops(call_may_force=0, call=0) + self.check_resops(call=0, call_may_force=0) def test_trace_from_start(self): def p(pc, code): @@ -576,7 +576,7 @@ result += f('-c-----------l-', i+100) self.meta_interp(g, [10], backendopt=True) self.check_aborted_count(1) - self.check_loops(call_assembler=1, call=0) + self.check_resops(call=0, call_assembler=2) self.check_tree_loop_count(3) def test_directly_call_assembler(self): @@ -625,8 +625,7 @@ try: compile.compile_tmp_callback = my_ctc self.meta_interp(portal, [2, 5], inline=True) - self.check_loops(call_assembler=2, call_may_force=0, - everywhere=True) + self.check_resops(call_may_force=0, call_assembler=2) finally: compile.compile_tmp_callback = original_ctc # check that we made a temporary callback @@ -681,8 +680,7 @@ try: compile.compile_tmp_callback = my_ctc self.meta_interp(main, [2, 5], inline=True) - self.check_loops(call_assembler=2, call_may_force=0, - everywhere=True) + self.check_resops(call_may_force=0, call_assembler=2) finally: compile.compile_tmp_callback = original_ctc # check that we made a temporary callback @@ -1021,7 +1019,7 @@ res = self.meta_interp(portal, [2, 0], inline=True, policy=StopAtXPolicy(residual)) assert res == portal(2, 0) - self.check_loops(call_assembler=4, everywhere=True) + self.check_resops(call_assembler=4) def test_inline_without_hitting_the_loop(self): driver = JitDriver(greens = ['codeno'], reds = ['i'], @@ -1045,7 +1043,7 @@ assert portal(0) == 70 res = self.meta_interp(portal, [0], inline=True) assert res == 70 - self.check_loops(call_assembler=0) + self.check_resops(call_assembler=0) def test_inline_with_hitting_the_loop_sometimes(self): driver = JitDriver(greens = ['codeno'], reds = ['i', 'k'], @@ -1071,7 +1069,7 @@ assert portal(0, 1) == 2095 res = self.meta_interp(portal, [0, 1], inline=True) assert res == 2095 - self.check_loops(call_assembler=12, everywhere=True) + self.check_resops(call_assembler=12) def test_inline_with_hitting_the_loop_sometimes_exc(self): driver = JitDriver(greens = ['codeno'], reds = ['i', 'k'], @@ -1109,7 +1107,7 @@ assert main(0, 1) == 2095 res = self.meta_interp(main, [0, 1], inline=True) assert res == 2095 - self.check_loops(call_assembler=12, everywhere=True) + self.check_resops(call_assembler=12) def test_handle_jitexception_in_portal(self): # a test for _handle_jitexception_in_portal in blackhole.py @@ -1238,7 +1236,7 @@ i += 1 self.meta_interp(portal, [0, 0, 0], inline=True) - self.check_loops(call=0, call_may_force=0) + self.check_resops(call_may_force=0, call=0) class TestLLtype(RecursiveTests, LLJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_send.py b/pypy/jit/metainterp/test/test_send.py --- a/pypy/jit/metainterp/test/test_send.py +++ b/pypy/jit/metainterp/test/test_send.py @@ -20,9 +20,8 @@ return c res = self.meta_interp(f, [1]) assert res == 2 - self.check_loops({'jump': 1, - 'int_sub': 1, 'int_gt' : 1, - 'guard_true': 1}) # all folded away + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) # all folded away def test_red_builtin_send(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'counter']) @@ -41,12 +40,9 @@ return res res = self.meta_interp(f, [1], policy=StopAtXPolicy(externfn)) assert res == 2 - if self.type_system == 'ootype': - self.check_loops(call=1, oosend=1) # 'len' remains - else: - # 'len' becomes a getfield('num_items') for now in lltype, - # which is itself encoded as a 'getfield_gc' - self.check_loops(call=1, getfield_gc=1) + # 'len' becomes a getfield('num_items') for now in lltype, + # which is itself encoded as a 'getfield_gc' + self.check_resops(call=2, getfield_gc=2) def test_send_to_single_target_method(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'counter']) @@ -70,11 +66,10 @@ res = self.meta_interp(f, [1], policy=StopAtXPolicy(externfn), backendopt=True) assert res == 43 - self.check_loops({'call': 1, 'guard_no_exception': 1, - 'getfield_gc': 1, - 'int_add': 1, - 'jump': 1, 'int_gt' : 1, 'guard_true' : 1, - 'int_sub' : 1}) + self.check_resops({'int_gt': 2, 'getfield_gc': 2, + 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'call': 2, 'guard_no_exception': 2, + 'int_add': 2}) def test_red_send_to_green_receiver(self): myjitdriver = JitDriver(greens = ['i'], reds = ['counter', 'j']) @@ -97,7 +92,7 @@ return res res = self.meta_interp(f, [4, -1]) assert res == 145 - self.check_loops(int_add = 1, everywhere=True) + self.check_resops(int_add=1) def test_oosend_base(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'w']) @@ -132,7 +127,7 @@ assert res == 17 res = self.meta_interp(f, [4, 14]) assert res == 1404 - self.check_loops(guard_class=0, new_with_vtable=0, new=0) + self.check_resops(guard_class=1, new=0, new_with_vtable=0) def test_three_receivers(self): myjitdriver = JitDriver(greens = [], reds = ['y']) @@ -205,8 +200,7 @@ # of the body in a single bigger loop with no failing guard except # the final one. self.check_loop_count(1) - self.check_loops(guard_class=0, - int_add=2, int_sub=2) + self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) def test_oosend_guard_failure_2(self): @@ -247,8 +241,7 @@ res = self.meta_interp(f, [4, 28]) assert res == f(4, 28) self.check_loop_count(1) - self.check_loops(guard_class=0, - int_add=2, int_sub=2) + self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) def test_oosend_different_initial_class(self): @@ -285,8 +278,8 @@ # However, this doesn't match the initial value of 'w'. # XXX This not completely easy to check... self.check_loop_count(1) - self.check_loops(int_add=0, int_lshift=1, guard_class=0, - new_with_vtable=0, new=0) + self.check_resops(guard_class=1, new_with_vtable=0, int_lshift=2, + int_add=0, new=0) def test_indirect_call_unknown_object_1(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y']) @@ -566,10 +559,7 @@ policy = StopAtXPolicy(new, A.foo.im_func, B.foo.im_func) res = self.meta_interp(fn, [0, 20], policy=policy) assert res == 42 - if self.type_system == 'ootype': - self.check_loops(oosend=1) - else: - self.check_loops(call=1) + self.check_resops(call=2) def test_residual_oosend_with_void(self): @@ -597,10 +587,7 @@ policy = StopAtXPolicy(new, A.foo.im_func) res = self.meta_interp(fn, [1, 20], policy=policy) assert res == 41 - if self.type_system == 'ootype': - self.check_loops(oosend=1) - else: - self.check_loops(call=1) + self.check_resops(call=2) def test_constfold_pure_oosend(self): myjitdriver = JitDriver(greens=[], reds = ['i', 'obj']) @@ -621,10 +608,7 @@ policy = StopAtXPolicy(A.foo.im_func) res = self.meta_interp(fn, [1, 20], policy=policy) assert res == 42 - if self.type_system == 'ootype': - self.check_loops(oosend=0) - else: - self.check_loops(call=0) + self.check_resops(call=0) def test_generalize_loop(self): myjitdriver = JitDriver(greens=[], reds = ['i', 'obj']) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -31,8 +31,9 @@ res = self.meta_interp(f, [10]) assert res == 55 * 10 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=2, new=0) + def test_virtualized2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node1', 'node2']) @@ -53,8 +54,8 @@ n -= 1 return node1.value * node2.value assert f(10) == self.meta_interp(f, [10]) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, + new=0) def test_virtualized_circular1(self): class MyNode(): @@ -79,8 +80,8 @@ res = self.meta_interp(f, [10]) assert res == 55 * 10 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=3, new=0) def test_virtualized_float(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -97,7 +98,7 @@ res = self.meta_interp(f, [10]) assert res == f(10) self.check_loop_count(1) - self.check_loops(new=0, float_add=0) + self.check_resops(new=0, float_add=1) def test_virtualized_float2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -115,7 +116,8 @@ res = self.meta_interp(f, [10]) assert res == f(10) self.check_loop_count(1) - self.check_loops(new=0, float_add=1) + self.check_resops(new=0, float_add=2) + def test_virtualized_2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -139,8 +141,8 @@ res = self.meta_interp(f, [10]) assert res == 55 * 30 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, + new=0) def test_nonvirtual_obj_delays_loop(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -160,8 +162,8 @@ res = self.meta_interp(f, [500]) assert res == 640 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=1, new=0) def test_two_loops_with_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -184,8 +186,9 @@ res = self.meta_interp(f, [18]) assert res == f(18) self.check_loop_count(2) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=2, new=0) + def test_two_loops_with_escaping_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -212,8 +215,8 @@ res = self.meta_interp(f, [20], policy=StopAtXPolicy(externfn)) assert res == f(20) self.check_loop_count(3) - self.check_loops(**{self._new_op: 1}) - self.check_loops(int_mul=0, call=1) + self.check_resops(**{self._new_op: 1}) + self.check_resops(int_mul=0, call=1) def test_two_virtuals(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'prev']) @@ -236,7 +239,7 @@ res = self.meta_interp(f, [12]) assert res == 78 - self.check_loops(new_with_vtable=0, new=0) + self.check_resops(new_with_vtable=0, new=0) def test_specialied_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x', 'res']) @@ -281,7 +284,7 @@ res = self.meta_interp(f, [20]) assert res == 9 - self.check_loops(new_with_vtable=0, new=0) + self.check_resops(new_with_vtable=0, new=0) def test_immutable_constant_getfield(self): myjitdriver = JitDriver(greens = ['stufflist'], reds = ['n', 'i']) @@ -307,7 +310,7 @@ res = self.meta_interp(f, [10, 1, 0], listops=True) assert res == 0 - self.check_loops(getfield_gc=0) + self.check_resops(getfield_gc=0) def test_escapes(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'parent']) @@ -336,7 +339,7 @@ res = self.meta_interp(f, [10], policy=StopAtXPolicy(g)) assert res == 3 - self.check_loops(**{self._new_op: 1}) + self.check_resops(**{self._new_op: 1}) def test_virtual_on_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'parent']) @@ -366,7 +369,7 @@ res = self.meta_interp(f, [10]) assert res == 2 - self.check_loops(new=0, new_with_vtable=0) + self.check_resops(new=0, new_with_vtable=0) def test_bridge_from_interpreter(self): mydriver = JitDriver(reds = ['n', 'f'], greens = []) @@ -841,7 +844,7 @@ del t2 return i assert self.meta_interp(f, []) == 10 - self.check_loops(new_array=0) + self.check_resops(new_array=0) def test_virtual_streq_bug(self): mydriver = JitDriver(reds = ['i', 's', 'a'], greens = []) @@ -942,8 +945,8 @@ res = self.meta_interp(f, [16]) assert res == f(16) - self.check_loops(getfield_gc=2) - + self.check_resops(getfield_gc=7) + # ____________________________________________________________ # Run 1: all the tests instantiate a real RPython class @@ -985,10 +988,8 @@ res = self.meta_interp(f, [10]) assert res == 20 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) - - + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=0, + new=0) class TestOOtype_Instance(VirtualTests, OOJitMixin): _new_op = 'new_with_vtable' From noreply at buildbot.pypy.org Sun Nov 6 20:59:10 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:10 +0100 (CET) Subject: [pypy-commit] pypy py3k: remove remnants of the getslice operations Message-ID: <20111106195910.EC7FB820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48836:d34a3188c6c8 Date: 2011-10-22 23:25 +0200 http://bitbucket.org/pypy/pypy/changeset/d34a3188c6c8/ Log: remove remnants of the getslice operations diff --git a/pypy/objspace/std/builtinshortcut.py b/pypy/objspace/std/builtinshortcut.py --- a/pypy/objspace/std/builtinshortcut.py +++ b/pypy/objspace/std/builtinshortcut.py @@ -34,17 +34,12 @@ KNOWN_MISSING = ['getattr', # mostly non-builtins or optimized by CALL_METHOD 'setattr', 'delattr', 'userdel', # mostly for non-builtins 'get', 'set', 'delete', # uncommon (except on functions) - 'getslice', 'setslice', 'delslice', # see below 'delitem', 'trunc', # rare stuff? 'abs', 'hex', 'oct', # rare stuff? 'pos', 'divmod', 'cmp', # rare stuff? 'float', 'long', 'coerce', # rare stuff? 'isinstance', 'issubtype', ] -# We cannot support {get,set,del}slice right now because -# DescrOperation.{get,set,del}slice do a bit more work than just call -# the special methods: they call old_slice_range(). See e.g. -# test_builtinshortcut.AppTestString. for _name, _, _, _specialmethods in ObjSpace.MethodTable: if _specialmethods: From noreply at buildbot.pypy.org Sun Nov 6 20:59:12 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:12 +0100 (CET) Subject: [pypy-commit] pypy py3k: Rename: sys.long_info -> sys.int_info Message-ID: <20111106195912.5940F820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48837:8299efdb68b6 Date: 2011-10-23 11:46 +0200 http://bitbucket.org/pypy/pypy/changeset/8299efdb68b6/ Log: Rename: sys.long_info -> sys.int_info diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -77,7 +77,7 @@ 'getfilesystemencoding' : 'interp_encoding.getfilesystemencoding', 'float_info' : 'system.get_float_info(space)', - 'long_info' : 'system.get_long_info(space)', + 'int_info' : 'system.get_int_info(space)', 'float_repr_style' : 'system.get_float_repr_style(space)' } diff --git a/pypy/module/sys/system.py b/pypy/module/sys/system.py --- a/pypy/module/sys/system.py +++ b/pypy/module/sys/system.py @@ -21,7 +21,7 @@ radix = structseqfield(9) rounds = structseqfield(10) -class long_info(metaclass=structseqtype): +class int_info(metaclass=structseqtype): bits_per_digit = structseqfield(0) sizeof_digit = structseqfield(1) """) @@ -44,7 +44,7 @@ w_float_info = app.wget(space, "float_info") return space.call_function(w_float_info, space.newtuple(info_w)) -def get_long_info(space): +def get_int_info(space): assert rbigint.SHIFT == 31 bits_per_digit = rbigint.SHIFT sizeof_digit = rffi.sizeof(rffi.ULONG) @@ -52,8 +52,8 @@ space.wrap(bits_per_digit), space.wrap(sizeof_digit), ] - w_long_info = app.wget(space, "long_info") - return space.call_function(w_long_info, space.newtuple(info_w)) + w_int_info = app.wget(space, "int_info") + return space.call_function(w_int_info, space.newtuple(info_w)) def get_float_repr_style(space): if rfloat.USE_SHORT_FLOAT_REPR: diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -121,9 +121,9 @@ assert isinstance(fi.radix, int) assert isinstance(fi.rounds, int) - def test_long_info(self): + def test_int_info(self): import sys - li = sys.long_info + li = sys.int_info assert isinstance(li.bits_per_digit, int) assert isinstance(li.sizeof_digit, int) From noreply at buildbot.pypy.org Sun Nov 6 20:59:13 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:13 +0100 (CET) Subject: [pypy-commit] pypy py3k: Share code between bytes and bytearray constructors. Message-ID: <20111106195913.99D76820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48838:c30f57c0a8d2 Date: 2011-10-26 12:29 +0200 http://bitbucket.org/pypy/pypy/changeset/c30f57c0a8d2/ Log: Share code between bytes and bytearray constructors. Add support for bytes(unicode_string, encoding) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -39,49 +39,6 @@ registerimplementation(W_BytearrayObject) -init_signature = Signature(['source', 'encoding', 'errors'], None, None) -init_defaults = [None, None, None] - -def init__Bytearray(space, w_bytearray, __args__): - # this is on the silly side - w_source, w_encoding, w_errors = __args__.parse_obj( - None, 'bytearray', init_signature, init_defaults) - - if w_source is None: - w_source = space.wrap('') - if w_encoding is None: - w_encoding = space.w_None - if w_errors is None: - w_errors = space.w_None - - # Unicode argument - if not space.is_w(w_encoding, space.w_None): - from pypy.objspace.std.unicodetype import ( - _get_encoding_and_errors, encode_object - ) - encoding, errors = _get_encoding_and_errors(space, w_encoding, w_errors) - - # if w_source is an integer this correctly raises a TypeError - # the CPython error message is: "encoding or errors without a string argument" - # ours is: "expected unicode, got int object" - w_source = encode_object(space, w_source, encoding, errors) - - # Is it an int? - try: - count = space.int_w(w_source) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - else: - if count < 0: - raise OperationError(space.w_ValueError, - space.wrap("bytearray negative count")) - w_bytearray.data = ['\0'] * count - return - - data = makebytesdata_w(space, w_source) - w_bytearray.data = data - def len__Bytearray(space, w_bytearray): result = len(w_bytearray.data) return wrapint(space, result) diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -14,6 +14,7 @@ str_expandtabs, str_ljust, str_rjust, str_center, str_zfill, str_join, str_split, str_rsplit, str_partition, str_rpartition, str_splitlines, str_translate) +from pypy.objspace.std.stringtype import makebytesdata_w from pypy.objspace.std.listtype import ( list_append, list_extend) @@ -61,6 +62,12 @@ def descr__new__(space, w_bytearraytype, __args__): return new_bytearray(space,w_bytearraytype, []) + at gateway.unwrap_spec(encoding='str_or_None', errors='str_or_None') +def descr__init__(space, w_bytearray, w_source=gateway.NoneNotWrapped, + encoding=None, errors=None): + data = makebytesdata_w(space, w_source, encoding, errors) + w_bytearray.data = data + def descr_bytearray__reduce__(space, w_self): from pypy.objspace.std.bytearrayobject import W_BytearrayObject @@ -125,6 +132,7 @@ If the argument is a bytearray, the return value is the same object.''', __new__ = gateway.interp2app(descr__new__), + __init__ = gateway.interp2app(descr__init__), __hash__ = None, __reduce__ = gateway.interp2app(descr_bytearray__reduce__), fromhex = gateway.interp2app(descr_fromhex, as_classmethod=True) diff --git a/pypy/objspace/std/stringtype.py b/pypy/objspace/std/stringtype.py --- a/pypy/objspace/std/stringtype.py +++ b/pypy/objspace/std/stringtype.py @@ -271,7 +271,36 @@ "byte must be in range(0, 256)")) return chr(value) -def makebytesdata_w(space, w_source): +def makebytesdata_w(space, w_source, encoding=None, errors=None): + # None value + if w_source is None: + if encoding is not None or errors is not None: + raise OperationError(space.w_TypeError, space.wrap( + "encoding or errors without string argument")) + return [] + # Is it an int? + try: + count = space.int_w(w_source) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + else: + if count < 0: + raise OperationError(space.w_ValueError, + space.wrap("negative count")) + if encoding is not None or errors is not None: + raise OperationError(space.w_TypeError, space.wrap( + "encoding or errors without string argument")) + return ['\0'] * count + # Unicode with encoding + if space.isinstance_w(w_source, space.w_unicode): + if encoding is None: + raise OperationError(space.w_TypeError, space.wrap( + "string argument without an encoding")) + from pypy.objspace.std.unicodetype import encode_object + w_source = encode_object(space, w_source, encoding, errors) + # and continue with the encoded string + # String-like argument try: string = space.bufferstr_new_w(w_source) @@ -295,11 +324,13 @@ data.append(value) return data -def descr__new__(space, w_stringtype, w_source=gateway.NoneNotWrapped): + at gateway.unwrap_spec(encoding='str_or_None', errors='str_or_None') +def descr__new__(space, w_stringtype, w_source=gateway.NoneNotWrapped, + encoding=None, errors=None): if (w_source and space.is_w(space.type(w_source), space.w_bytes) and space.is_w(w_stringtype, space.w_bytes)): return w_source - value = ''.join(makebytesdata_w(space, w_source)) + value = ''.join(makebytesdata_w(space, w_source, encoding, errors)) if space.config.objspace.std.withrope: from pypy.objspace.std.ropeobject import rope, W_RopeObject w_obj = space.allocate_instance(W_RopeObject, w_stringtype) diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -89,6 +89,12 @@ class AppTestStringObject: + def test_constructor(self): + assert bytes() == b'' + assert bytes(3) == b'\0\0\0' + assert bytes(b'abc') == b'abc' + assert bytes('abc', 'ascii') == b'abc' + def test_format(self): import operator raises(TypeError, operator.mod, b"%s", (1,)) @@ -627,7 +633,8 @@ assert b'a' in b'abc' assert b'ab' in b'abc' assert not b'd' in b'abc' - raises(TypeError, b'a'.__contains__, 1) + assert 97 in b'a' + raises(TypeError, b'a'.__contains__, 1.0) def test_decode(self): assert b'hello'.decode('ascii') == 'hello' From noreply at buildbot.pypy.org Sun Nov 6 20:59:14 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:14 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix object.__reduce_ex__ Message-ID: <20111106195914.EBCA5820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48839:81ac05c2ad26 Date: 2011-10-26 12:30 +0200 http://bitbucket.org/pypy/pypy/changeset/81ac05c2ad26/ Log: Fix object.__reduce_ex__ diff --git a/pypy/objspace/std/objecttype.py b/pypy/objspace/std/objecttype.py --- a/pypy/objspace/std/objecttype.py +++ b/pypy/objspace/std/objecttype.py @@ -105,16 +105,13 @@ w_st_reduce = space.wrap('__reduce__') w_reduce = space.findattr(w_obj, w_st_reduce) if w_reduce is not None: - w_cls = space.getattr(w_obj, space.wrap('__class__')) - w_cls_reduce_meth = space.getattr(w_cls, w_st_reduce) - w_cls_reduce = space.getattr(w_cls_reduce_meth, space.wrap('im_func')) - w_objtype = space.w_object - w_obj_dict = space.getattr(w_objtype, space.wrap('__dict__')) - w_obj_reduce = space.getitem(w_obj_dict, w_st_reduce) + # Check if __reduce__ has been overridden: + # "type(obj).__reduce__ is not object.__reduce__" + w_cls_reduce = space.getattr(space.type(w_obj), w_st_reduce) + w_obj_reduce = space.getattr(space.w_object, w_st_reduce) override = not space.is_w(w_cls_reduce, w_obj_reduce) - # print 'OVR', override, w_cls_reduce, w_obj_reduce if override: - return space.call(w_reduce, space.newtuple([])) + return space.call_function(w_reduce) return descr__reduce__(space, w_obj, proto) def descr___format__(space, w_obj, w_format_spec): From noreply at buildbot.pypy.org Sun Nov 6 20:59:16 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:16 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix pickle of builtin types. Message-ID: <20111106195916.3B051820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48840:0face036f35c Date: 2011-10-26 12:31 +0200 http://bitbucket.org/pypy/pypy/changeset/0face036f35c/ Log: Fix pickle of builtin types. diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -517,16 +517,16 @@ space.isinstance_w(w_self.getdictvalue(space, '__module__'), space.w_unicode)): return w_self.getdictvalue(space, '__module__') - return space.wrap('__builtin__') + return space.wrap('builtins') def get_module_type_name(w_self): space = w_self.space w_mod = w_self.get_module() if not space.isinstance_w(w_mod, space.w_str): - mod = '__builtin__' + mod = 'builtins' else: mod = space.str_w(w_mod) - if mod !='__builtin__': + if mod != 'builtins': return '%s.%s' % (mod, w_self.name) else: return w_self.name @@ -884,7 +884,7 @@ kind = 'type' else: kind = 'class' - if mod is not None and mod !='__builtin__': + if mod is not None and mod !='builtins': return space.wrap("<%s '%s.%s'>" % (kind, mod, w_obj.name)) else: return space.wrap("<%s '%s'>" % (kind, w_obj.name)) From noreply at buildbot.pypy.org Sun Nov 6 20:59:17 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix translation Message-ID: <20111106195917.79F79820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48841:bab586dbcd9c Date: 2011-10-26 16:51 +0200 http://bitbucket.org/pypy/pypy/changeset/bab586dbcd9c/ Log: Fix translation diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -63,10 +63,12 @@ return new_bytearray(space,w_bytearraytype, []) @gateway.unwrap_spec(encoding='str_or_None', errors='str_or_None') -def descr__init__(space, w_bytearray, w_source=gateway.NoneNotWrapped, +def descr__init__(space, w_self, w_source=gateway.NoneNotWrapped, encoding=None, errors=None): + from pypy.objspace.std.bytearrayobject import W_BytearrayObject + assert isinstance(w_self, W_BytearrayObject) data = makebytesdata_w(space, w_source, encoding, errors) - w_bytearray.data = data + w_self.data = data def descr_bytearray__reduce__(space, w_self): From noreply at buildbot.pypy.org Sun Nov 6 20:59:18 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: Disallow implicit concatenation of bytes and strings Message-ID: <20111106195918.CC174820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48842:a4cd79eb1bb5 Date: 2011-10-26 16:57 +0200 http://bitbucket.org/pypy/pypy/changeset/a4cd79eb1bb5/ Log: Disallow implicit concatenation of bytes and strings diff --git a/pypy/interpreter/astcompiler/astbuilder.py b/pypy/interpreter/astcompiler/astbuilder.py --- a/pypy/interpreter/astcompiler/astbuilder.py +++ b/pypy/interpreter/astcompiler/astbuilder.py @@ -1079,14 +1079,18 @@ # UnicodeError in literal: turn into SyntaxError self.error(e.errorstr(space), atom_node) sub_strings_w = [] # please annotator - # This implements implicit string concatenation. - if len(sub_strings_w) > 1: - w_sub_strings = space.newlist(sub_strings_w) - w_join = space.getattr(space.wrap(""), space.wrap("join")) - final_string = space.call_function(w_join, w_sub_strings) - else: - final_string = sub_strings_w[0] - return ast.Str(final_string, atom_node.lineno, atom_node.column) + # Implement implicit string concatenation. + w_string = sub_strings_w[0] + for i in range(1, len(sub_strings_w)): + try: + w_string = space.add(w_string, sub_strings_w[i]) + except error.OperationError, e: + if not e.match(space, space.w_TypeError): + raise + self.error("cannot mix bytes and nonbytes literals", + atom_node) + # UnicodeError in literal: turn into SyntaxError + return ast.Str(w_string, atom_node.lineno, atom_node.column) elif first_child_type == tokens.NUMBER: num_value = self.parse_number(first_child.value) return ast.Num(num_value, atom_node.lineno, atom_node.column) diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -1026,6 +1026,10 @@ s = self.get_first_expr("'hi' ' implicitly' ' extra'") assert isinstance(s, ast.Str) assert space.eq_w(s.s, space.wrap("hi implicitly extra")) + s = self.get_first_expr("b'hi' b' implicitly' b' extra'") + assert isinstance(s, ast.Str) + assert space.eq_w(s.s, space.wrapbytes("hi implicitly extra")) + raises(SyntaxError, self.get_first_expr, "b'hello' 'world'") sentence = u"Die Männer ärgen sich!" source = u"# coding: utf-7\nstuff = u'%s'" % (sentence,) info = pyparse.CompileInfo("", "exec") From noreply at buildbot.pypy.org Sun Nov 6 20:59:20 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:20 +0100 (CET) Subject: [pypy-commit] pypy py3k: unicode string should not join bytes items Message-ID: <20111106195920.14075820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48843:4dbce2bccfb1 Date: 2011-10-26 17:08 +0200 http://bitbucket.org/pypy/pypy/changeset/4dbce2bccfb1/ Log: unicode string should not join bytes items diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -28,6 +28,7 @@ assert a == b assert type(a) == type(b) check(', '.join(['a']), 'a') + raises(TypeError, ','.join, [b'a']) def test_contains(self): assert '' in 'abc' diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -124,17 +124,12 @@ if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + if not isinstance(w_s, W_UnicodeObject): + raise operationerrfmt( + space.w_TypeError, + "sequence item %d: expected string, %s " + "found", i, space.type(w_s).getname(space)) + sb.append(w_s._value) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): From noreply at buildbot.pypy.org Sun Nov 6 20:59:21 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:21 +0100 (CET) Subject: [pypy-commit] pypy py3k: Implement imp.cache_from_source() Message-ID: <20111106195921.71318820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48844:b12056207f0c Date: 2011-10-26 19:34 +0200 http://bitbucket.org/pypy/pypy/changeset/b12056207f0c/ Log: Implement imp.cache_from_source() diff --git a/pypy/module/imp/__init__.py b/pypy/module/imp/__init__.py --- a/pypy/module/imp/__init__.py +++ b/pypy/module/imp/__init__.py @@ -34,6 +34,8 @@ 'lock_held': 'interp_imp.lock_held', 'acquire_lock': 'interp_imp.acquire_lock', 'release_lock': 'interp_imp.release_lock', + + 'cache_from_source': 'interp_imp.cache_from_source', } appleveldefs = { diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -852,6 +852,9 @@ space.wrap(space.builtin)) code_w.exec_code(space, w_dict, w_dict) +def make_compiled_pathname(pathname): + "Given the path to a .py file, return the path to its .pyc file." + return pathname + 'c' @jit.dont_look_inside def load_source_module(space, w_modulename, w_mod, pathname, source, @@ -863,7 +866,7 @@ w = space.wrap if space.config.objspace.usepycfiles: - cpathname = pathname + 'c' + cpathname = make_compiled_pathname(pathname) src_stat = os.stat(pathname) mtime = int(src_stat[stat.ST_MTIME]) mode = src_stat[stat.ST_MODE] diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -180,3 +180,7 @@ def reinit_lock(space): if space.config.objspace.usemodules.thread: importing.getimportlock(space).reinit_lock() + + at unwrap_spec(pathname=str) +def cache_from_source(space, pathname): + return space.wrap(importing.make_compiled_pathname(pathname)) diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -137,7 +137,8 @@ def _teardown(space, w_saved_modules): space.appexec([w_saved_modules], """ - ((saved_path, saved_modules)): + (path_and_modules): + saved_path, saved_modules = path_and_modules import sys sys.path[:] = saved_path sys.modules.clear() @@ -571,6 +572,10 @@ else: assert False, 'should not work' + def test_cache_from_source(self): + import imp + assert imp.cache_from_source('a/b/c.py') == 'a/b/c.pyc' + class TestAbi: def test_abi_tag(self): space1 = gettestobjspace(soabi='TEST') From noreply at buildbot.pypy.org Sun Nov 6 20:59:22 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:22 +0100 (CET) Subject: [pypy-commit] pypy py3k: imp.get_magic() returns bytes Message-ID: <20111106195922.AB316820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48845:90b2ca3bd7d6 Date: 2011-10-26 23:15 +0200 http://bitbucket.org/pypy/pypy/changeset/90b2ca3bd7d6/ Log: imp.get_magic() returns bytes diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -29,7 +29,7 @@ c = x & 0xff x >>= 8 d = x & 0xff - return space.wrap(chr(a) + chr(b) + chr(c) + chr(d)) + return space.wrapbytes(chr(a) + chr(b) + chr(c) + chr(d)) def get_file(space, w_file, filename, filemode): if w_file is None or space.is_w(w_file, space.w_None): From noreply at buildbot.pypy.org Sun Nov 6 20:59:24 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:24 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Add PyUnicode_*Latin1 functions Message-ID: <20111106195924.073E7820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r48846:051fd7d46101 Date: 2011-10-27 01:06 +0200 http://bitbucket.org/pypy/pypy/changeset/051fd7d46101/ Log: cpyext: Add PyUnicode_*Latin1 functions diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -385,6 +385,24 @@ data, len(u), lltype.nullptr(rffi.CCHARP.TO)) rffi.free_wcharp(data) + def test_latin1(self, space, api): + s = 'abcdefg' + data = rffi.str2charp(s) + w_u = api.PyUnicode_DecodeLatin1(data, len(s), lltype.nullptr(rffi.CCHARP.TO)) + assert space.eq_w(w_u, space.wrap(u"abcdefg")) + rffi.free_charp(data) + + uni = u'abcdefg' + data = rffi.unicode2wcharp(uni) + w_s = api.PyUnicode_EncodeLatin1(data, len(uni), lltype.nullptr(rffi.CCHARP.TO)) + assert space.eq_w(space.wrap("abcdefg"), w_s) + rffi.free_wcharp(data) + + ustr = "abcdef" + w_ustr = space.wrap(ustr.decode("ascii")) + result = api.PyUnicode_AsLatin1String(w_ustr) + assert space.eq_w(space.wrap(ustr), result) + def test_format(self, space, api): w_format = space.wrap(u'hi %s') w_args = space.wrap((u'test',)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -498,16 +498,16 @@ """Encode a Unicode object using ASCII and return the result as Python string object. Error handling is "strict". Return NULL if an exception was raised by the codec.""" - return space.call_method(w_unicode, 'encode', space.wrap('ascii')) #space.w_None for errors? + return space.call_method(w_unicode, 'encode', space.wrap('ascii')) - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) + at cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) def PyUnicode_DecodeASCII(space, s, size, errors): """Create a Unicode object by decoding size bytes of the ASCII encoded string s. Return NULL if an exception was raised by the codec.""" w_s = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_s, 'decode', space.wrap('ascii')) - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) + at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) def PyUnicode_EncodeASCII(space, s, size, errors): """Encode the Py_UNICODE buffer of the given size using ASCII and return a Python string object. Return NULL if an exception was raised by the codec. @@ -516,6 +516,33 @@ w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) return space.call_method(w_s, 'encode', space.wrap('ascii')) + at cpython_api([PyObject], PyObject) +def PyUnicode_AsLatin1String(space, w_unicode): + """Encode a Unicode object using Latin-1 and return the result as Python string + object. Error handling is "strict". Return NULL if an exception was raised + by the codec.""" + return space.call_method(w_unicode, 'encode', space.wrap('latin-1')) + + at cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) +def PyUnicode_DecodeLatin1(space, s, size, errors): + """Create a Unicode object by decoding size bytes of the Latin-1 encoded string + s. Return NULL if an exception was raised by the codec. + + This function used an int type for size. This might require + changes in your code for properly supporting 64-bit systems.""" + w_s = space.wrap(rffi.charpsize2str(s, size)) + return space.call_method(w_s, 'decode', space.wrap('latin-1')) + + at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) +def PyUnicode_EncodeLatin1(space, s, size, errors): + """Encode the Py_UNICODE buffer of the given size using Latin-1 and return + a Python string object. Return NULL if an exception was raised by the codec. + + This function used an int type for size. This might require + changes in your code for properly supporting 64-bit systems.""" + w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) + return space.call_method(w_s, 'encode', space.wrap('latin-1')) + if sys.platform == 'win32': @cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) def PyUnicode_EncodeMBCS(space, wchar_p, length, errors): From noreply at buildbot.pypy.org Sun Nov 6 20:59:25 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:25 +0100 (CET) Subject: [pypy-commit] pypy default: Export PyDescr_NewMethod and the PyWrapperDescr_Type it returns Message-ID: <20111106195925.4433E820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r48847:8ee82fc7f024 Date: 2011-10-27 01:13 +0200 http://bitbucket.org/pypy/pypy/changeset/8ee82fc7f024/ Log: Export PyDescr_NewMethod and the PyWrapperDescr_Type it returns diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( " Author: Amaury Forgeot d'Arc Branch: Changeset: r48848:5bf4b00693af Date: 2011-10-27 01:14 +0200 http://bitbucket.org/pypy/pypy/changeset/5bf4b00693af/ Log: Remove implemented functions from stubs.py diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2481,31 +2469,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using From noreply at buildbot.pypy.org Sun Nov 6 20:59:27 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:27 +0100 (CET) Subject: [pypy-commit] pypy default: Implement PyUnicode_EncodeUTF8 Message-ID: <20111106195927.C9A14820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r48849:6cd69184b2f4 Date: 2011-11-06 20:20 +0100 http://bitbucket.org/pypy/pypy/changeset/6cd69184b2f4/ Log: Implement PyUnicode_EncodeUTF8 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2281,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -188,6 +188,12 @@ assert space.unwrap(w_u) == 'sp' rffi.free_charp(u) + def test_encode_utf8(self, space, api): + u = rffi.unicode2wcharp(u'sp�m') + w_s = api.PyUnicode_EncodeUTF8(u, 4, None) + assert space.unwrap(w_s) == u'sp�m'.encode('utf-8') + rffi.free_wcharp(u) + def test_IS(self, space, api): for char in [0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x85, 0xa0, 0x1680, 0x2000, 0x2001, 0x2002, diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -438,6 +438,16 @@ w_errors = space.w_None return space.call_method(w_str, 'decode', space.wrap("utf-8"), w_errors) + at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) +def PyUnicode_EncodeUTF8(space, s, size, errors): + """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a + Python string object. Return NULL if an exception was raised by the codec. + + This function used an int type for size. This might require + changes in your code for properly supporting 64-bit systems.""" + w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) + return space.call_method(w_s, 'encode', space.wrap('utf-8')) + @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF16(space, s, size, llerrors, pbyteorder): """Decode length bytes from a UTF-16 encoded buffer string and return the From noreply at buildbot.pypy.org Sun Nov 6 20:59:29 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:29 +0100 (CET) Subject: [pypy-commit] pypy default: All these functions PyUnicode_DecodeASCII &co are really similar, Message-ID: <20111106195929.13817820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r48850:ac95da4c2214 Date: 2011-11-06 20:54 +0100 http://bitbucket.org/pypy/pypy/changeset/ac95da4c2214/ Log: All these functions PyUnicode_DecodeASCII &co are really similar, use a single template to generate them. diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2518,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -14,6 +14,7 @@ from pypy.module.sys.interp_encoding import setdefaultencoding from pypy.objspace.std import unicodeobject, unicodetype from pypy.rlib import runicode +from pypy.tool.sourcetools import func_renamer import sys ## See comment in stringobject.py. @@ -417,36 +418,49 @@ ref[0] = rffi.cast(PyObject, py_newuni) return 0 - at cpython_api([PyObject], PyObject) -def PyUnicode_AsUTF8String(space, w_unicode): - """Encode a Unicode object using UTF-8 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - if not PyUnicode_Check(space, w_unicode): - PyErr_BadArgument(space) - return unicodetype.encode_object(space, w_unicode, "utf-8", "strict") +def make_conversion_functions(suffix, encoding): + @cpython_api([PyObject], PyObject) + @func_renamer('PyUnicode_As%sString' % suffix) + def PyUnicode_AsXXXString(space, w_unicode): + """Encode a Unicode object and return the result as Python + string object. Error handling is "strict". Return NULL if an + exception was raised by the codec.""" + if not PyUnicode_Check(space, w_unicode): + PyErr_BadArgument(space) + return unicodetype.encode_object(space, w_unicode, encoding, "strict") - at cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_DecodeUTF8(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the UTF-8 encoded string - s. Return NULL if an exception was raised by the codec. - """ - w_str = space.wrap(rffi.charpsize2str(s, size)) - if errors: - w_errors = space.wrap(rffi.charp2str(errors)) - else: - w_errors = space.w_None - return space.call_method(w_str, 'decode', space.wrap("utf-8"), w_errors) + @cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) + @func_renamer('PyUnicode_Decode%s' % suffix) + def PyUnicode_DecodeXXX(space, s, size, errors): + """Create a Unicode object by decoding size bytes of the + encoded string s. Return NULL if an exception was raised by + the codec. + """ + w_s = space.wrap(rffi.charpsize2str(s, size)) + if errors: + w_errors = space.wrap(rffi.charp2str(errors)) + else: + w_errors = space.w_None + return space.call_method(w_s, 'decode', space.wrap(encoding), w_errors) - at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. + @cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) + @func_renamer('PyUnicode_Encode%s' % suffix) + def PyUnicode_EncodeXXX(space, s, size, errors): + """Encode the Py_UNICODE buffer of the given size and return a + Python string object. Return NULL if an exception was raised + by the codec.""" + w_u = space.wrap(rffi.wcharpsize2unicode(s, size)) + if errors: + w_errors = space.wrap(rffi.charp2str(errors)) + else: + w_errors = space.w_None + return space.call_method(w_u, 'encode', space.wrap(encoding), w_errors) - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) - return space.call_method(w_s, 'encode', space.wrap('utf-8')) +make_conversion_functions('UTF8', 'utf-8') +make_conversion_functions('ASCII', 'ascii') +make_conversion_functions('Latin1', 'latin-1') +if sys.platform == 'win32': + make_conversion_functions('MBCS', 'mbcs') @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF16(space, s, size, llerrors, pbyteorder): @@ -503,83 +517,6 @@ return space.wrap(result) - at cpython_api([PyObject], PyObject) -def PyUnicode_AsASCIIString(space, w_unicode): - """Encode a Unicode object using ASCII and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - return space.call_method(w_unicode, 'encode', space.wrap('ascii')) - - at cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_DecodeASCII(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the ASCII encoded string - s. Return NULL if an exception was raised by the codec.""" - w_s = space.wrap(rffi.charpsize2str(s, size)) - return space.call_method(w_s, 'decode', space.wrap('ascii')) - - at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_EncodeASCII(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using ASCII and return a - Python string object. Return NULL if an exception was raised by the codec. - """ - - w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) - return space.call_method(w_s, 'encode', space.wrap('ascii')) - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, w_unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - return space.call_method(w_unicode, 'encode', space.wrap('latin-1')) - - at cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - w_s = space.wrap(rffi.charpsize2str(s, size)) - return space.call_method(w_s, 'decode', space.wrap('latin-1')) - - at cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - w_s = space.wrap(rffi.wcharpsize2unicode(s, size)) - return space.call_method(w_s, 'encode', space.wrap('latin-1')) - -if sys.platform == 'win32': - @cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) - def PyUnicode_EncodeMBCS(space, wchar_p, length, errors): - """Encode the Py_UNICODE buffer of the given size using MBCS and return a - Python string object. Return NULL if an exception was raised by the codec. - """ - w_unicode = space.wrap(rffi.wcharpsize2unicode(wchar_p, length)) - if errors: - w_errors = space.wrap(rffi.charp2str(errors)) - else: - w_errors = space.w_None - return space.call_method(w_unicode, "encode", - space.wrap("mbcs"), w_errors) - - @cpython_api([CONST_STRING, Py_ssize_t, CONST_STRING], PyObject) - def PyUnicode_DecodeMBCS(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the MBCS encoded string s. - Return NULL if an exception was raised by the codec. - """ - w_str = space.wrap(rffi.charpsize2str(s, size)) - w_encoding = space.wrap("mbcs") - if errors: - w_errors = space.wrap(rffi.charp2str(errors)) - else: - w_errors = space.w_None - return space.call_method(w_str, 'decode', w_encoding, w_errors) - @cpython_api([PyObject, PyObject], rffi.INT_real, error=-2) def PyUnicode_Compare(space, w_left, w_right): """Compare two strings and return -1, 0, 1 for less than, equal, and greater From noreply at buildbot.pypy.org Sun Nov 6 20:59:30 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 6 Nov 2011 20:59:30 +0100 (CET) Subject: [pypy-commit] pypy default: Implement PyWeakref_NewProxy, thanks dcolish! Message-ID: <20111106195930.63DFA820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r48851:6d0f05e9a3ac Date: 2011-11-06 20:55 +0100 http://bitbucket.org/pypy/pypy/changeset/6d0f05e9a3ac/ Log: Implement PyWeakref_NewProxy, thanks dcolish! diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2859,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_weakref.py b/pypy/module/cpyext/test/test_weakref.py --- a/pypy/module/cpyext/test/test_weakref.py +++ b/pypy/module/cpyext/test/test_weakref.py @@ -15,6 +15,12 @@ assert api.PyErr_Occurred() is space.w_TypeError api.PyErr_Clear() + def test_proxy(self, space, api): + w_obj = space.w_Warning # some weakrefable object + w_proxy = api.PyWeakref_NewProxy(w_obj, None) + assert space.unwrap(space.str(w_proxy)) == "" + assert space.unwrap(space.repr(w_proxy)).startswith(' Author: Alex Gaynor Branch: Changeset: r48852:9e7c5b33e755 Date: 2011-11-06 15:14 -0500 http://bitbucket.org/pypy/pypy/changeset/9e7c5b33e755/ Log: fix a crash and translation in micronumpy diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -202,7 +202,7 @@ return space.newtuple([self.descr_len(space)]) def descr_get_size(self, space): - return space.wrap(self.size) + return space.wrap(self.find_size()) def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -21,7 +21,9 @@ from numpy import array # XXX fixed on multidim branch #assert array(3).size == 1 - assert array([1, 2, 3]).size == 3 + a = array([1, 2, 3]) + assert a.size == 3 + assert (a + a).size == 3 def test_empty(self): """ From noreply at buildbot.pypy.org Mon Nov 7 00:24:23 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 00:24:23 +0100 (CET) Subject: [pypy-commit] pypy py3k: Update the list of test files with the content of 3.2/test/test_*.py. Message-ID: <20111106232423.823C1820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48853:997bcc8c1c1f Date: 2011-11-07 00:03 +0100 http://bitbucket.org/pypy/pypy/changeset/997bcc8c1c1f/ Log: Update the list of test files with the content of 3.2/test/test_*.py. Tried to keep the previous attributes when they make sense. diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -109,33 +109,23 @@ RegrTest('test__locale.py', skip=skip_win32), RegrTest('test_abc.py'), RegrTest('test_abstract_numbers.py'), - RegrTest('test_aepack.py', skip=True), RegrTest('test_aifc.py'), RegrTest('test_argparse.py'), - RegrTest('test_al.py', skip=True), + RegrTest('test_array.py', core=True, usemodules='struct array'), RegrTest('test_ast.py', core=True), - RegrTest('test_anydbm.py'), - RegrTest('test_applesingle.py', skip=True), - RegrTest('test_array.py', core=True, usemodules='struct array'), - RegrTest('test_ascii_formatd.py'), RegrTest('test_asynchat.py', usemodules='thread'), RegrTest('test_asyncore.py'), RegrTest('test_atexit.py', core=True), RegrTest('test_audioop.py', skip=True), RegrTest('test_augassign.py', core=True), RegrTest('test_base64.py'), - RegrTest('test_bastion.py'), + RegrTest('test_bigaddrspace.py'), + RegrTest('test_bigmem.py'), RegrTest('test_binascii.py', usemodules='binascii'), - RegrTest('test_binhex.py'), - RegrTest('test_binop.py', core=True), RegrTest('test_bisect.py', core=True, usemodules='_bisect'), RegrTest('test_bool.py', core=True), - RegrTest('test_bsddb.py', skip="unsupported extension module"), - RegrTest('test_bsddb185.py', skip="unsupported extension module"), - RegrTest('test_bsddb3.py', skip="unsupported extension module"), - RegrTest('test_buffer.py'), RegrTest('test_bufio.py', core=True), RegrTest('test_builtin.py', core=True), RegrTest('test_bytes.py'), @@ -143,23 +133,22 @@ RegrTest('test_calendar.py'), RegrTest('test_call.py', core=True), RegrTest('test_capi.py', skip="not applicable"), - RegrTest('test_cd.py', skip=True), RegrTest('test_cfgparser.py'), - RegrTest('test_cgi.py'), RegrTest('test_charmapcodec.py', core=True), - RegrTest('test_cl.py', skip=True), RegrTest('test_class.py', core=True), RegrTest('test_cmath.py', core=True), RegrTest('test_cmd.py'), + RegrTest('test_cmd_line.py'), RegrTest('test_cmd_line_script.py'), + RegrTest('test_code.py', core=True), RegrTest('test_codeccallbacks.py', core=True), RegrTest('test_codecencodings_cn.py', usemodules='_multibytecodec'), RegrTest('test_codecencodings_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_iso2022.py', usemodules='_multibytecodec'), RegrTest('test_codecencodings_jp.py', usemodules='_multibytecodec'), RegrTest('test_codecencodings_kr.py', usemodules='_multibytecodec'), RegrTest('test_codecencodings_tw.py', usemodules='_multibytecodec'), - RegrTest('test_codecmaps_cn.py', usemodules='_multibytecodec'), RegrTest('test_codecmaps_hk.py', usemodules='_multibytecodec'), RegrTest('test_codecmaps_jp.py', usemodules='_multibytecodec'), @@ -167,31 +156,31 @@ RegrTest('test_codecmaps_tw.py', usemodules='_multibytecodec'), RegrTest('test_codecs.py', core=True, usemodules='_multibytecodec'), RegrTest('test_codeop.py', core=True), - RegrTest('test_coercion.py', core=True), + RegrTest('test_coding.py', core=True), RegrTest('test_collections.py'), RegrTest('test_colorsys.py'), - RegrTest('test_commands.py'), RegrTest('test_compare.py', core=True), RegrTest('test_compile.py', core=True), RegrTest('test_compileall.py'), - RegrTest('test_compiler.py', core=False, skip="slowly deprecating compiler"), RegrTest('test_complex.py', core=True), - + RegrTest('test_concurrent_futures.py'), RegrTest('test_contains.py', core=True), - RegrTest('test_cookie.py'), - RegrTest('test_cookielib.py'), + RegrTest('test_contextlib.py', usemodules="thread"), RegrTest('test_copy.py', core=True), - RegrTest('test_copy_reg.py', core=True), - RegrTest('test_cpickle.py', core=True), - RegrTest('test_cprofile.py'), + RegrTest('test_copyreg.py', core=True), + RegrTest('test_cprofile.py'), RegrTest('test_crypt.py', usemodules='crypt', skip=skip_win32), RegrTest('test_csv.py'), - + RegrTest('test_ctypes.py', usemodules="_rawffi thread"), RegrTest('test_curses.py', skip="unsupported extension module"), RegrTest('test_datetime.py'), RegrTest('test_dbm.py'), + RegrTest('test_dbm_dumb.py'), + RegrTest('test_dbm_gnu.py'), + RegrTest('test_dbm_ndbm.py'), RegrTest('test_decimal.py'), RegrTest('test_decorators.py', core=True), + RegrTest('test_defaultdict.py', usemodules='_collections'), RegrTest('test_deque.py', core=True, usemodules='_collections'), RegrTest('test_descr.py', core=True, usemodules='_weakref'), RegrTest('test_descrtut.py', core=True), @@ -199,139 +188,123 @@ RegrTest('test_dictcomps.py', core=True), RegrTest('test_dictviews.py', core=True), RegrTest('test_difflib.py'), - RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), RegrTest('test_distutils.py'), - RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), RegrTest('test_docxmlrpc.py'), - RegrTest('test_dumbdbm.py'), RegrTest('test_dummy_thread.py', core=True), RegrTest('test_dummy_threading.py', core=True), + RegrTest('test_dynamic.py'), RegrTest('test_email.py'), - - RegrTest('test_email_codecs.py'), RegrTest('test_enumerate.py', core=True), RegrTest('test_eof.py', core=True), RegrTest('test_epoll.py'), RegrTest('test_errno.py', usemodules="errno"), + RegrTest('test_exception_variations.py'), RegrTest('test_exceptions.py', core=True), RegrTest('test_extcall.py', core=True), RegrTest('test_fcntl.py', usemodules='fcntl', skip=skip_win32), RegrTest('test_file.py', usemodules="posix", core=True), - RegrTest('test_file2k.py', usemodules="posix", core=True), RegrTest('test_filecmp.py', core=True), RegrTest('test_fileinput.py', core=True), RegrTest('test_fileio.py'), + RegrTest('test_float.py', core=True), + RegrTest('test_flufl.py'), RegrTest('test_fnmatch.py', core=True), RegrTest('test_fork1.py', usemodules="thread"), RegrTest('test_format.py', core=True), - RegrTest('test_fpformat.py', core=True), RegrTest('test_fractions.py'), RegrTest('test_frozen.py', skip="unsupported extension module"), RegrTest('test_ftplib.py'), RegrTest('test_funcattrs.py', core=True), + RegrTest('test_functools.py'), RegrTest('test_future.py', core=True), RegrTest('test_future1.py', core=True), RegrTest('test_future2.py', core=True), RegrTest('test_future3.py', core=True), RegrTest('test_future4.py', core=True), RegrTest('test_future5.py', core=True), - RegrTest('test_future_builtins.py'), RegrTest('test_gc.py', usemodules='_weakref', skip="implementation detail"), RegrTest('test_gdb.py', skip="not applicable"), - RegrTest('test_gdbm.py', skip="unsupported extension module"), RegrTest('test_generators.py', core=True, usemodules='thread _weakref'), RegrTest('test_genericpath.py'), RegrTest('test_genexps.py', core=True, usemodules='_weakref'), - RegrTest('test_getargs.py', skip="unsupported extension module"), RegrTest('test_getargs2.py', skip="unsupported extension module"), - RegrTest('test_getopt.py', core=True), RegrTest('test_gettext.py'), - - RegrTest('test_gl.py', skip=True), RegrTest('test_glob.py', core=True), RegrTest('test_global.py', core=True), RegrTest('test_grammar.py', core=True), RegrTest('test_grp.py', skip=skip_win32), - RegrTest('test_gzip.py'), RegrTest('test_hash.py', core=True), RegrTest('test_hashlib.py', core=True), - RegrTest('test_heapq.py', core=True), RegrTest('test_hmac.py'), - RegrTest('test_hotshot.py', skip="unsupported extension module"), - - RegrTest('test_htmllib.py'), + RegrTest('test_html.py'), RegrTest('test_htmlparser.py'), + RegrTest('test_http_cookiejar.py'), + RegrTest('test_http_cookies.py'), RegrTest('test_httplib.py'), RegrTest('test_httpservers.py'), - RegrTest('test_imageop.py', skip="unsupported extension module"), RegrTest('test_imaplib.py'), - RegrTest('test_imgfile.py', skip="unsupported extension module"), RegrTest('test_imp.py', core=True, usemodules='thread'), RegrTest('test_import.py', core=True), RegrTest('test_importhooks.py', core=True), RegrTest('test_importlib.py'), + RegrTest('test_index.py'), RegrTest('test_inspect.py'), RegrTest('test_int.py', core=True), RegrTest('test_int_literal.py', core=True), - RegrTest('test_io.py'), + RegrTest('test_io.py', core=True), RegrTest('test_ioctl.py'), RegrTest('test_isinstance.py', core=True), RegrTest('test_iter.py', core=True), - RegrTest('test_iterlen.py', skip="undocumented internal API behavior __length_hint__"), + RegrTest('test_iterlen.py'), RegrTest('test_itertools.py', core=True), RegrTest('test_json.py'), + RegrTest('test_keywordonlyarg.py'), RegrTest('test_kqueue.py'), RegrTest('test_largefile.py'), RegrTest('test_lib2to3.py'), RegrTest('test_linecache.py'), - RegrTest('test_linuxaudiodev.py', skip="unsupported extension module"), RegrTest('test_list.py', core=True), + RegrTest('test_listcomps.py', core=True), RegrTest('test_locale.py', usemodules="_locale"), RegrTest('test_logging.py', usemodules='thread'), RegrTest('test_long.py', core=True), - RegrTest('test_long_future.py', core=True), RegrTest('test_longexp.py', core=True), - RegrTest('test_macos.py'), - RegrTest('test_macostools.py', skip=True), RegrTest('test_macpath.py'), RegrTest('test_mailbox.py'), RegrTest('test_marshal.py', core=True), RegrTest('test_math.py', core=True, usemodules='math'), RegrTest('test_memoryio.py'), RegrTest('test_memoryview.py'), - RegrTest('test_md5.py'), - RegrTest('test_mhlib.py'), - RegrTest('test_mimetools.py'), + RegrTest('test_metaclass.py', core=True), RegrTest('test_mimetypes.py'), - RegrTest('test_MimeWriter.py', core=False), RegrTest('test_minidom.py'), RegrTest('test_mmap.py'), RegrTest('test_module.py', core=True), RegrTest('test_modulefinder.py'), + RegrTest('test_msilib.py'), RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), - RegrTest('test_multifile.py'), RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), - RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), - RegrTest('test_new.py', core=True), RegrTest('test_nis.py', skip="unsupported extension module"), + RegrTest('test_nntplib.py'), RegrTest('test_normalization.py'), RegrTest('test_ntpath.py'), + RegrTest('test_numeric_tower.py'), RegrTest('test_opcodes.py', core=True), RegrTest('test_openpty.py'), RegrTest('test_operator.py', core=True), RegrTest('test_optparse.py'), - RegrTest('test_os.py', core=True), RegrTest('test_ossaudiodev.py', skip="unsupported extension module"), + RegrTest('test_osx_env.py'), RegrTest('test_parser.py', skip="slowly deprecating compiler"), RegrTest('test_pdb.py'), RegrTest('test_peepholer.py'), @@ -339,16 +312,19 @@ RegrTest('test_pep263.py'), RegrTest('test_pep277.py'), RegrTest('test_pep292.py'), + RegrTest('test_pep3120.py'), + RegrTest('test_pep3131.py'), + RegrTest('test_pep352.py'), RegrTest('test_pickle.py', core=True), RegrTest('test_pickletools.py', core=False), RegrTest('test_pipes.py'), RegrTest('test_pkg.py', core=True), RegrTest('test_pkgimport.py', core=True), RegrTest('test_pkgutil.py'), + RegrTest('test_platform.py'), RegrTest('test_plistlib.py', skip="unsupported module"), RegrTest('test_poll.py', skip=skip_win32), RegrTest('test_popen.py'), - RegrTest('test_popen2.py'), RegrTest('test_poplib.py'), RegrTest('test_posix.py', usemodules="_rawffi"), RegrTest('test_posixpath.py'), @@ -360,165 +336,129 @@ RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), RegrTest('test_pwd.py', usemodules="pwd", skip=skip_win32), - RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), RegrTest('test_pyexpat.py'), RegrTest('test_queue.py', usemodules='thread'), RegrTest('test_quopri.py'), + RegrTest('test_raise.py', core=True), RegrTest('test_random.py'), + RegrTest('test_range.py', core=True), RegrTest('test_re.py', core=True), RegrTest('test_readline.py'), - RegrTest('test_repr.py', core=True), + RegrTest('test_reprlib.py', core=True), RegrTest('test_resource.py', skip=skip_win32), - RegrTest('test_rfc822.py'), RegrTest('test_richcmp.py', core=True), RegrTest('test_rlcompleter.py'), - RegrTest('test_robotparser.py'), + RegrTest('test_runpy.py'), RegrTest('test_sax.py'), + RegrTest('test_sched.py'), RegrTest('test_scope.py', core=True), - RegrTest('test_scriptpackages.py', skip="unsupported extension module"), RegrTest('test_select.py'), RegrTest('test_set.py', core=True), - RegrTest('test_sets.py'), RegrTest('test_setcomps.py', core=True), - RegrTest('test_sgmllib.py'), - RegrTest('test_sha.py'), RegrTest('test_shelve.py'), RegrTest('test_shlex.py'), RegrTest('test_shutil.py'), RegrTest('test_signal.py'), - RegrTest('test_SimpleHTTPServer.py'), RegrTest('test_site.py', core=False), RegrTest('test_slice.py', core=True), + RegrTest('test_smtpd.py'), RegrTest('test_smtplib.py'), RegrTest('test_smtpnet.py'), + RegrTest('test_sndhdr.py'), RegrTest('test_socket.py', usemodules='thread _weakref'), - RegrTest('test_socketserver.py', usemodules='thread'), - - RegrTest('test_softspace.py', core=True), RegrTest('test_sort.py', core=True), + RegrTest('test_sqlite.py', usemodules="thread _rawffi zlib"), RegrTest('test_ssl.py', usemodules='_ssl _socket select'), - RegrTest('test_str.py', core=True), - + RegrTest('test_startfile.py'), # skip="bogus test"? RegrTest('test_strftime.py'), RegrTest('test_string.py', core=True), - RegrTest('test_StringIO.py', core=True, usemodules='cStringIO'), RegrTest('test_stringprep.py'), - RegrTest('test_strop.py', skip="deprecated"), - + RegrTest('test_strlit.py', core=True), RegrTest('test_strptime.py'), RegrTest('test_strtod.py'), RegrTest('test_struct.py', usemodules='struct'), RegrTest('test_structmembers.py', skip="CPython specific"), RegrTest('test_structseq.py'), RegrTest('test_subprocess.py', usemodules='signal'), - RegrTest('test_sunaudiodev.py', skip=True), + RegrTest('test_sunau.py', skip=True), RegrTest('test_sundry.py'), + RegrTest('test_super.py', core=True), RegrTest('test_symtable.py', skip="implementation detail"), RegrTest('test_syntax.py', core=True), RegrTest('test_sys.py', core=True, usemodules='struct'), + RegrTest('test_sys_setprofile.py', core=True), RegrTest('test_sys_settrace.py', core=True), - RegrTest('test_sys_setprofile.py', core=True), RegrTest('test_sysconfig.py'), + RegrTest('test_syslog.py'), + RegrTest('test_tarfile.py'), RegrTest('test_tcl.py', skip="unsupported extension module"), - RegrTest('test_tarfile.py'), RegrTest('test_telnetlib.py'), RegrTest('test_tempfile.py'), - RegrTest('test_textwrap.py'), RegrTest('test_thread.py', usemodules="thread", core=True), RegrTest('test_threaded_import.py', usemodules="thread", core=True), RegrTest('test_threadedtempfile.py', usemodules="thread", core=False), - RegrTest('test_threading.py', usemodules="thread", core=True), RegrTest('test_threading_local.py', usemodules="thread", core=True), RegrTest('test_threadsignals.py', usemodules="thread"), - RegrTest('test_time.py', core=True), + RegrTest('test_timeit.py'), RegrTest('test_timeout.py'), RegrTest('test_tk.py'), - RegrTest('test_ttk_guionly.py'), - RegrTest('test_ttk_textonly.py'), RegrTest('test_tokenize.py'), RegrTest('test_trace.py'), RegrTest('test_traceback.py', core=True), - RegrTest('test_transformer.py', core=True), + RegrTest('test_ttk_guionly.py'), + RegrTest('test_ttk_textonly.py'), RegrTest('test_tuple.py', core=True), RegrTest('test_typechecks.py'), RegrTest('test_types.py', core=True), RegrTest('test_ucn.py'), RegrTest('test_unary.py', core=True), - RegrTest('test_undocumented_details.py'), RegrTest('test_unicode.py', core=True), RegrTest('test_unicode_file.py'), RegrTest('test_unicodedata.py'), RegrTest('test_unittest.py', core=True), RegrTest('test_univnewlines.py'), - RegrTest('test_univnewlines2k.py', core=True), RegrTest('test_unpack.py', core=True), + RegrTest('test_unpack_ex.py', core=True), RegrTest('test_urllib.py'), RegrTest('test_urllib2.py'), + RegrTest('test_urllib2_localnet.py', usemodules="thread"), RegrTest('test_urllib2net.py'), + RegrTest('test_urllib_response.py'), RegrTest('test_urllibnet.py'), RegrTest('test_urlparse.py'), RegrTest('test_userdict.py', core=True), RegrTest('test_userlist.py', core=True), RegrTest('test_userstring.py', core=True), RegrTest('test_uu.py'), - + RegrTest('test_uuid.py'), + RegrTest('test_wait3.py', usemodules="thread"), + RegrTest('test_wait4.py', usemodules="thread"), RegrTest('test_warnings.py', core=True), RegrTest('test_wave.py', skip="unsupported extension module"), RegrTest('test_weakref.py', core=True, usemodules='_weakref'), RegrTest('test_weakset.py'), - - RegrTest('test_whichdb.py'), RegrTest('test_winreg.py', skip=only_win32), RegrTest('test_winsound.py', skip="unsupported extension module"), - RegrTest('test_xmllib.py'), - RegrTest('test_xmlrpc.py'), - - RegrTest('test_xpickle.py'), - RegrTest('test_xrange.py', core=True), - RegrTest('test_zipfile.py'), - RegrTest('test_zipimport.py', usemodules='zlib zipimport'), - RegrTest('test_zipimport_support.py', usemodules='zlib zipimport'), - RegrTest('test_zlib.py', usemodules='zlib'), - - RegrTest('test_bigaddrspace.py'), - RegrTest('test_bigmem.py'), - RegrTest('test_cmd_line.py'), - RegrTest('test_code.py'), - RegrTest('test_coding.py'), - RegrTest('test_complex_args.py'), - RegrTest('test_contextlib.py', usemodules="thread"), - RegrTest('test_ctypes.py', usemodules="_rawffi thread"), - RegrTest('test_defaultdict.py', usemodules='_collections'), - RegrTest('test_email_renamed.py'), - RegrTest('test_exception_variations.py'), - RegrTest('test_float.py'), - RegrTest('test_functools.py'), - RegrTest('test_index.py'), - RegrTest('test_old_mailbox.py'), - RegrTest('test_pep352.py'), - RegrTest('test_platform.py'), - RegrTest('test_runpy.py'), - RegrTest('test_sqlite.py', usemodules="thread _rawffi zlib"), - RegrTest('test_startfile.py', skip="bogus test"), - RegrTest('test_structmembers.py', skip="depends on _testcapi"), - RegrTest('test_urllib2_localnet.py', usemodules="thread"), - RegrTest('test_uuid.py'), - RegrTest('test_wait3.py', usemodules="thread"), - RegrTest('test_wait4.py', usemodules="thread"), RegrTest('test_with.py'), RegrTest('test_wsgiref.py'), RegrTest('test_xdrlib.py'), RegrTest('test_xml_etree.py'), RegrTest('test_xml_etree_c.py'), + RegrTest('test_xmlrpc.py'), + RegrTest('test_xmlrpc_net.py'), + RegrTest('test_zipfile.py'), RegrTest('test_zipfile64.py'), + RegrTest('test_zipimport.py', usemodules='zlib zipimport'), + RegrTest('test_zipimport_support.py', usemodules='zlib zipimport'), + RegrTest('test_zlib.py', usemodules='zlib'), ] def check_testmap_complete(): From noreply at buildbot.pypy.org Mon Nov 7 00:24:24 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 00:24:24 +0100 (CET) Subject: [pypy-commit] pypy py3k: Make sure sys.stdout or sys.stderr are initialized when app_main prints Message-ID: <20111106232424.BAFD4820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48854:1bab2dd470bf Date: 2011-11-07 00:23 +0100 http://bitbucket.org/pypy/pypy/changeset/1bab2dd470bf/ Log: Make sure sys.stdout or sys.stderr are initialized when app_main prints --version or --help diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -114,6 +114,7 @@ # Option parsing def print_info(*args): + initstdio() try: options = sys.pypy_translation_info except AttributeError: @@ -126,6 +127,7 @@ raise SystemExit def print_help(*args): + initstdio() print('usage: %s [options] [-c cmd|-m mod|file.py|-] [arg...]' % ( sys.executable,)) print(__doc__.rstrip()) @@ -144,11 +146,13 @@ print(' --jit off turn off the JIT') def print_version(*args): + initstdio() print ("Python", sys.version, file=sys.stderr) raise SystemExit def set_jit_option(options, jitparam, *args): if 'pypyjit' not in sys.builtin_module_names: + initstdio() print("Warning: No jit support in %s" % (sys.executable,), file=sys.stderr) else: @@ -249,7 +253,9 @@ sys.path.append(dir) _seen[dir] = True -def initstdio(encoding, unbuffered): +def initstdio(encoding=None, unbuffered=False): + if not encoding: + encoding = sys.getfilesystemencoding() if ':' in encoding: encoding, errors = encoding.split(':', 1) else: @@ -510,8 +516,7 @@ sys.setrecursionlimit(5000) readenv = not ignore_environment - io_encoding = ((readenv and os.getenv("PYTHONIOENCODING")) - or sys.getfilesystemencoding()) + io_encoding = readenv and os.getenv("PYTHONIOENCODING") initstdio(io_encoding, unbuffered) mainmodule = type(sys)('__main__') @@ -694,6 +699,7 @@ try: cmdline = parse_command_line(argv) except CommandLineError as e: + initstdio() print_error(str(e)) return 2 except SystemExit as e: From noreply at buildbot.pypy.org Mon Nov 7 00:41:08 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 7 Nov 2011 00:41:08 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: a little bit of comments for me ; -) Message-ID: <20111106234108.51242820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48855:5a52e0062e49 Date: 2011-11-06 21:36 +0100 http://bitbucket.org/pypy/pypy/changeset/5a52e0062e49/ Log: a little bit of comments for me ;-) diff --git a/pypy/rpython/rmodel.py b/pypy/rpython/rmodel.py --- a/pypy/rpython/rmodel.py +++ b/pypy/rpython/rmodel.py @@ -339,11 +339,11 @@ def _get_opprefix(self): if self._opprefix is None: - raise TyperError("arithmetic not supported on %r" % + raise TyperError("arithmetic not supported on %r, it's size is too small" % self.lowleveltype) return self._opprefix - opprefix =property(_get_opprefix) + opprefix = property(_get_opprefix) class BoolRepr(IntegerRepr): lowleveltype = Bool diff --git a/pypy/translator/c/primitive.py b/pypy/translator/c/primitive.py --- a/pypy/translator/c/primitive.py +++ b/pypy/translator/c/primitive.py @@ -181,6 +181,7 @@ # On 64 bit machines, SignedLongLong and Signed are the same, so the # order matters, because we want the Signed implementation. +# (some entries collapse during dict creation) PrimitiveName = { SignedLongLong: name_signedlonglong, Signed: name_signed, From noreply at buildbot.pypy.org Mon Nov 7 00:41:09 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 7 Nov 2011 00:41:09 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: got one of four things in test_typed.py to run. Message-ID: <20111106234109.8145E820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48856:dc81624b6a84 Date: 2011-11-07 00:40 +0100 http://bitbucket.org/pypy/pypy/changeset/dc81624b6a84/ Log: got one of four things in test_typed.py to run. It is rffi related, the others probably as well. I want to add more structure to PyPy. Things like rffi should not be compatible with the target system at all. I would like to introduce explicit conversions, for every type. That would need some effort, but make an 'up-lifting' to a more OS independent build system much easier. Will elaborate on this, tomorrow. diff --git a/pypy/rlib/rdtoa.py b/pypy/rlib/rdtoa.py --- a/pypy/rlib/rdtoa.py +++ b/pypy/rlib/rdtoa.py @@ -58,8 +58,8 @@ try: result = dg_strtod(ll_input, end_ptr) - endpos = (rffi.cast(rffi.LONG, end_ptr[0]) - - rffi.cast(rffi.LONG, ll_input)) + endpos = (rffi.cast(lltype.Signed, end_ptr[0]) - + rffi.cast(lltype.Signed, ll_input)) if endpos == 0 or endpos < len(input): raise ValueError("invalid input at position %d" % (endpos,)) From noreply at buildbot.pypy.org Mon Nov 7 00:42:57 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 7 Nov 2011 00:42:57 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge default Message-ID: <20111106234257.9207A11B2E69@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48857:84921a708527 Date: 2011-11-07 00:42 +0100 http://bitbucket.org/pypy/pypy/changeset/84921a708527/ Log: merge default diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48858:ee437d31d9fe Date: 2011-11-07 01:05 +0100 http://bitbucket.org/pypy/pypy/changeset/ee437d31d9fe/ Log: Fix for py.test running CPython test suite diff --git a/pypy/tool/pytest/run-script/regrverbose.py b/pypy/tool/pytest/run-script/regrverbose.py --- a/pypy/tool/pytest/run-script/regrverbose.py +++ b/pypy/tool/pytest/run-script/regrverbose.py @@ -1,8 +1,8 @@ # refer to 2.4.1/test/regrtest.py's runtest() for comparison import sys import unittest -from test import test_support -test_support.verbose = 1 +from test import support +support.verbose = 1 sys.argv[:] = sys.argv[1:] modname = sys.argv[0] From noreply at buildbot.pypy.org Mon Nov 7 12:49:24 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 12:49:24 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: ensure loops are freed Message-ID: <20111107114924.189B7820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48859:712c04e8e94d Date: 2011-11-07 11:09 +0100 http://bitbucket.org/pypy/pypy/changeset/712c04e8e94d/ Log: ensure loops are freed diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -80,6 +80,7 @@ if descr.original_jitcell_token is not original_jitcell_token: assert descr.original_jitcell_token is not None original_jitcell_token.record_jump_to(descr.original_jitcell_token) + descr.exported_state = None op._descr = None # clear reference, mostly for tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: @@ -673,7 +674,7 @@ pass -def compile_trace(metainterp, resumekey, retraced=False): +def compile_trace(metainterp, resumekey, start_resumedescr=None): """Try to compile a new bridge leading from the beginning of the history to some existing place. """ @@ -689,6 +690,7 @@ # clone ops, as optimize_bridge can mutate the ops new_trace.operations = [op.clone() for op in metainterp.history.operations] + new_trace.start_resumedescr = start_resumedescr metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate if isinstance(resumekey, ResumeAtPositionDescr): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -115,13 +115,11 @@ original_jump_args = targetop.getarglist() jump_args = [self.getvalue(a).get_key_box() for a in original_jump_args] - # FIXME: I dont thnik we need this anymore - if self.optimizer.loop.start_resumedescr: - start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() - assert isinstance(start_resumedescr, ResumeGuardDescr) - start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) - else: - start_resumedescr = None + assert self.optimizer.loop.start_resumedescr + start_resumedescr = self.optimizer.loop.start_resumedescr.clone_if_mutable() + assert isinstance(start_resumedescr, ResumeGuardDescr) + start_resumedescr.rd_snapshot = self.fix_snapshot(jump_args, start_resumedescr.rd_snapshot) + # FIXME: I dont thnik we need fix_snapshot anymore modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(jump_args) @@ -180,8 +178,7 @@ self.imported_state = exported_state self.inputargs = targetop.getarglist() self.initial_virtual_state = target_token.virtual_state - #self.start_resumedescr = target_token.start_resumedescr - self.start_resumedescr = self.optimizer.loop.start_resumedescr + self.start_resumedescr = target_token.start_resumedescr seen = {} for box in self.inputargs: @@ -328,14 +325,7 @@ for i in range(len(short)): short[i] = inliner.inline_op(short[i]) - if target_token.start_resumedescr is None: # FIXME: Hack! - target_token.start_resumedescr = self.start_resumedescr.clone_if_mutable() - fix = Inliner(self.optimizer.loop.operations[-1].getarglist(), - self.optimizer.loop.inputargs) - - fix.inline_descr_inplace(target_token.start_resumedescr) - else: - target_token.start_resumedescr = self.start_resumedescr.clone_if_mutable() + target_token.start_resumedescr = self.start_resumedescr.clone_if_mutable() inliner.inline_descr_inplace(target_token.start_resumedescr) # Forget the values to allow them to be freed diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1936,7 +1936,7 @@ # from the interpreter. if not self.partial_trace: # FIXME: Support a retrace to be a bridge as well as a loop - self.compile_trace(live_arg_boxes) + self.compile_trace(live_arg_boxes, resumedescr) # raises in case it works -- which is the common case, hopefully, # at least for bridges starting from a guard. @@ -2042,7 +2042,7 @@ self.history.operations = None raise GenerateMergePoint(live_arg_boxes, target_token.cell_token) - def compile_trace(self, live_arg_boxes): + def compile_trace(self, live_arg_boxes, start_resumedescr): num_green_args = self.jitdriver_sd.num_green_args greenkey = live_arg_boxes[:num_green_args] target_jitcell_token = self.get_procedure_token(greenkey) @@ -2052,7 +2052,7 @@ self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None, descr=target_jitcell_token) try: - target_token = compile.compile_trace(self, self.resumekey) + target_token = compile.compile_trace(self, self.resumekey, start_resumedescr) finally: self.history.operations.pop() # remove the JUMP if target_token is not None: # raise if it *worked* correctly diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -288,9 +288,7 @@ assert res == f(6, 15) gc.collect() - #assert not [wr for wr in wr_loops if wr()] - for loop in [wr for wr in wr_loops if wr()]: - assert loop().name == 'short preamble' + assert not [wr for wr in wr_loops if wr()] def test_string(self): def f(n): From noreply at buildbot.pypy.org Mon Nov 7 12:49:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 12:49:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: kill optimizer.bridge Message-ID: <20111107114925.45323820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48860:27048a266352 Date: 2011-11-07 11:19 +0100 http://bitbucket.org/pypy/pypy/changeset/27048a266352/ Log: kill optimizer.bridge diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -89,5 +89,5 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -329,11 +329,10 @@ class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -497,7 +496,7 @@ return CVAL_ZERO def propagate_all_forward(self, clear=True): - self.exception_might_have_happened = self.bridge + self.exception_might_have_happened = True if clear: self.clear_newoperations() for op in self.loop.operations: From noreply at buildbot.pypy.org Mon Nov 7 12:49:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 12:49:26 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: rename TargetToken.cell_token into TargetToke.targeting_jitcell_token Message-ID: <20111107114926.75FBC820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48861:963c63e3066e Date: 2011-11-07 11:27 +0100 http://bitbucket.org/pypy/pypy/changeset/963c63e3066e/ Log: rename TargetToken.cell_token into TargetToke.targeting_jitcell_token diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -766,11 +766,15 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self, cell_token): - self.cell_token = cell_token + def __init__(self, targeting_jitcell_token): + # The jitcell to which jumps might result in a jump to this label + self.targeting_jitcell_token = targeting_jitcell_token + + # The jitcell where the trace containing the label with this TargetToken begins + self.original_jitcell_token = None + self.virtual_state = None self.exported_state = None - self.original_jitcell_token = None class TreeLoop(object): inputargs = None diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -104,7 +104,7 @@ self.export_state(stop_label) loop.operations.append(stop_label) else: - assert stop_label.getdescr().cell_token is start_label.getdescr().cell_token + assert stop_label.getdescr().targeting_jitcell_token is start_label.getdescr().targeting_jitcell_token jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=start_label.getdescr()) self.close_loop(jumpop) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2037,10 +2037,10 @@ live_arg_boxes[num_green_args:], start_resumedescr) if target_token is not None: # raise if it *worked* correctly - self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.cell_token) + self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.targeting_jitcell_token) self.history.inputargs = None self.history.operations = None - raise GenerateMergePoint(live_arg_boxes, target_token.cell_token) + raise GenerateMergePoint(live_arg_boxes, target_token.targeting_jitcell_token) def compile_trace(self, live_arg_boxes, start_resumedescr): num_green_args = self.jitdriver_sd.num_green_args @@ -2058,7 +2058,7 @@ if target_token is not None: # raise if it *worked* correctly self.history.inputargs = None self.history.operations = None - raise GenerateMergePoint(live_arg_boxes, target_token.cell_token) + raise GenerateMergePoint(live_arg_boxes, target_token.targeting_jitcell_token) def compile_bridge_and_loop(self, original_boxes, live_arg_boxes, start, bridge_arg_boxes, start_resumedescr): From noreply at buildbot.pypy.org Mon Nov 7 12:49:27 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 12:49:27 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: make OptSimplify follow and test_optimizebasic follow the new model Message-ID: <20111107114927.AAA3C820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48862:aa5ecf2901be Date: 2011-11-07 12:48 +0100 http://bitbucket.org/pypy/pypy/changeset/aa5ecf2901be/ Log: make OptSimplify follow and test_optimizebasic follow the new model diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -45,7 +45,7 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts): + or 'heap' not in enable_opts or 'unroll' not in enable_opts): optimizations.append(OptSimplify()) if inline_short_preamble: diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -1,9 +1,12 @@ from pypy.jit.metainterp.optimizeopt.optimizer import Optimization from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import ResOperation, rop - +from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): + def __init__(self): + self.last_label_descr = None + def optimize_CALL_PURE(self, op): args = op.getarglist() self.emit_operation(ResOperation(rop.CALL, args, op.result, @@ -28,6 +31,20 @@ def optimize_MARK_OPAQUE_PTR(self, op): pass + def optimize_LABEL(self, op): + self.last_label_descr = op.getdescr() + self.emit_operation(op) + + def optimize_JUMP(self, op): + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + assert self.last_label_descr.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + else: + import pdb; pdb.set_trace() + self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', default=OptSimplify.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -1,7 +1,8 @@ import py from pypy.rlib.objectmodel import instantiate from pypy.jit.metainterp.optimizeopt.test.test_util import ( - LLtypeMixin, BaseTest, FakeMetaInterpStaticData) + LLtypeMixin, BaseTest, FakeMetaInterpStaticData, convert_old_style_to_targets) +from pypy.jit.metainterp.history import TargetToken, JitCellToken from pypy.jit.metainterp.test.test_compile import FakeLogger import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize @@ -11,7 +12,6 @@ from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT - def test_store_final_boxes_in_guard(): from pypy.jit.metainterp.compile import ResumeGuardDescr from pypy.jit.metainterp.resume import tag, TAGBOX @@ -116,9 +116,13 @@ enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap" def optimize_loop(self, ops, optops, call_pure_results=None): - loop = self.parse(ops) - expected = self.parse(optops) + token = JitCellToken() + loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ + loop.operations + if loop.operations[-1].getopnum() == rop.JUMP: + loop.operations[-1].setdescr(token) + expected = convert_old_style_to_targets(self.parse(optops), jump=True) self._do_optimize_loop(loop, call_pure_results) print '\n'.join([str(o) for o in loop.operations]) self.assert_equal(loop, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -1,7 +1,7 @@ import py from pypy.rlib.objectmodel import instantiate from pypy.jit.metainterp.optimizeopt.test.test_util import ( - LLtypeMixin, BaseTest, Storage, _sortboxes) + LLtypeMixin, BaseTest, Storage, _sortboxes, convert_old_style_to_targets) import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimizeopt import optimize_loop_1, ALL_OPTS_DICT, build_opt_chain @@ -158,16 +158,6 @@ return loop -def convert_old_style_to_targets(loop, jump): - newloop = TreeLoop(loop.name) - newloop.inputargs = loop.inputargs - newloop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=FakeDescr())] + \ - loop.operations - if not jump: - assert newloop.operations[-1].getopnum() == rop.JUMP - newloop.operations[-1] = ResOperation(rop.LABEL, newloop.operations[-1].getarglist(), None, descr=FakeDescr()) - return newloop - class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -18,6 +18,7 @@ from pypy.jit.metainterp import compile, resume, history from pypy.jit.metainterp.jitprof import EmptyProfiler from pypy.config.pypyoption import get_pypy_config +from pypy.jit.metainterp.resoperation import rop, opname, ResOperation def test_sort_descrs(): class PseudoDescr(AbstractDescr): @@ -400,5 +401,21 @@ # optimize_trace(metainterp_sd, loop, self.enable_opts) +class FakeDescr(compile.ResumeGuardDescr): + def clone_if_mutable(self): + return FakeDescr() + def __eq__(self, other): + return isinstance(other, FakeDescr) + +def convert_old_style_to_targets(loop, jump): + newloop = TreeLoop(loop.name) + newloop.inputargs = loop.inputargs + newloop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=FakeDescr())] + \ + loop.operations + if not jump: + assert newloop.operations[-1].getopnum() == rop.JUMP + newloop.operations[-1] = ResOperation(rop.LABEL, newloop.operations[-1].getarglist(), None, descr=FakeDescr()) + return newloop + # ____________________________________________________________ diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -1,7 +1,7 @@ from pypy.config.pypyoption import get_pypy_config from pypy.jit.metainterp.history import TargetToken, ConstInt, History, Stats from pypy.jit.metainterp.history import BoxInt, INT -from pypy.jit.metainterp.compile import insert_loop_token, compile_procedure +from pypy.jit.metainterp.compile import insert_loop_token, compile_loop from pypy.jit.metainterp.compile import ResumeGuardDescr from pypy.jit.metainterp.compile import ResumeGuardCountersInt from pypy.jit.metainterp.compile import compile_tmp_callback From noreply at buildbot.pypy.org Mon Nov 7 15:32:18 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 7 Nov 2011 15:32:18 +0100 (CET) Subject: [pypy-commit] pypy stm: update Message-ID: <20111107143218.EDB2F820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r48863:bfddec59046a Date: 2011-11-06 17:13 +0100 http://bitbucket.org/pypy/pypy/changeset/bfddec59046a/ Log: update diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt --- a/pypy/doc/discussion/stm_todo.txt +++ b/pypy/doc/discussion/stm_todo.txt @@ -6,3 +6,5 @@ e23ab2c195c1 Added a number of "# XXX --- custom version for STM ---" 31f2ed861176 One more + + 0782958b144f Hard-coded the STM logic in rffi.aroundstate From noreply at buildbot.pypy.org Mon Nov 7 15:32:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 7 Nov 2011 15:32:20 +0100 (CET) Subject: [pypy-commit] pypy default: A skipped failing test for a case in which the optimizer gets confused Message-ID: <20111107143220.306DC82A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48864:5631da22e6ed Date: 2011-11-07 15:31 +0100 http://bitbucket.org/pypy/pypy/changeset/5631da22e6ed/ Log: A skipped failing test for a case in which the optimizer gets confused and doesn't remove guard_no_exception diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -958,6 +958,24 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + py.test.skip("missing optimization for this corner case") + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): From noreply at buildbot.pypy.org Mon Nov 7 16:08:11 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 7 Nov 2011 16:08:11 +0100 (CET) Subject: [pypy-commit] pypy default: Python 2.5 support. Message-ID: <20111107150811.BB907820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48865:85696018499d Date: 2011-11-07 16:07 +0100 http://bitbucket.org/pypy/pypy/changeset/85696018499d/ Log: Python 2.5 support. diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] From noreply at buildbot.pypy.org Mon Nov 7 16:15:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 7 Nov 2011 16:15:17 +0100 (CET) Subject: [pypy-commit] pypy default: (antocuni, hakan, arigo) Message-ID: <20111107151517.C5DD8820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48866:141e9d3bebff Date: 2011-11-07 16:15 +0100 http://bitbucket.org/pypy/pypy/changeset/141e9d3bebff/ Log: (antocuni, hakan, arigo) Kill 'exception_might_have_happened' and the 'bridge' boolean field of Optimizer. Simplify some of the 'posponedop' mess, by explicitly removing 'guard_no_exception' only if they immediately follow a removed 'call'. This is detected with a new field 'last_emitted_operation', recording the last operation seen by a 'emit_operation'. Use this too to clean up intbounds overflow checking. diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -249,6 +249,8 @@ CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -260,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -327,13 +330,13 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -341,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -363,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -497,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -681,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -959,7 +957,6 @@ self.optimize_loop(ops, expected, preamble) def test_bug_guard_no_exception(self): - py.test.skip("missing optimization for this corner case") ops = """ [] i0 = call(123, descr=nonwritedescr) @@ -6299,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6314,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -529,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -543,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version From noreply at buildbot.pypy.org Mon Nov 7 16:49:41 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 16:49:41 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: dont replace the the JitCellToken when retracing Message-ID: <20111107154941.A5C62820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48867:aa8ad4543ac2 Date: 2011-11-07 13:49 +0100 http://bitbucket.org/pypy/pypy/changeset/aa8ad4543ac2/ Log: dont replace the the JitCellToken when retracing diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -43,7 +43,8 @@ assert self.last_label_descr.targeting_jitcell_token is descr op.setdescr(self.last_label_descr) else: - import pdb; pdb.set_trace() + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2036,8 +2036,10 @@ original_boxes[num_green_args:], live_arg_boxes[num_green_args:], start_resumedescr) + if target_token is not None: + self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.targeting_jitcell_token) + if target_token is not None: # raise if it *worked* correctly - self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.targeting_jitcell_token) self.history.inputargs = None self.history.operations = None raise GenerateMergePoint(live_arg_boxes, target_token.targeting_jitcell_token) From noreply at buildbot.pypy.org Mon Nov 7 16:49:42 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 16:49:42 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: support CALL_ASSEMBLER Message-ID: <20111107154942.D15DD820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48868:c728e120eed9 Date: 2011-11-07 14:05 +0100 http://bitbucket.org/pypy/pypy/changeset/c728e120eed9/ Log: support CALL_ASSEMBLER diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -181,9 +181,8 @@ llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, descr.arg_types) if isinstance(descr, history.JitCellToken): - assert False - if op.getopnum() != rop.JUMP: - llimpl.compile_add_loop_token(c, descr) + assert op.getopnum() != rop.JUMP + llimpl.compile_add_loop_token(c, descr) if isinstance(descr, history.TargetToken) and op.getopnum() == rop.LABEL: llimpl.compile_add_target_token(c, descr) if self.is_oo and isinstance(descr, (OODescr, MethDescr)): diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -70,8 +70,11 @@ if n >= 0: # we also record the resumedescr number original_jitcell_token.compiled_loop_token.record_faildescr_index(n) elif isinstance(descr, JitCellToken): - # for a CALL_ASSEMBLER ... - assert False, "FIXME" + # for a CALL_ASSEMBLER: record it as a potential jump. + if descr is not original_jitcell_token: + original_jitcell_token.record_jump_to(descr) + descr.exported_state = None + op._descr = None # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated From noreply at buildbot.pypy.org Mon Nov 7 16:49:44 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 16:49:44 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: retraces ending with a virtual state matching a previously compiled trace Message-ID: <20111107154944.068E7820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48869:8e75cc4b5fbc Date: 2011-11-07 16:40 +0100 http://bitbucket.org/pypy/pypy/changeset/8e75cc4b5fbc/ Log: retraces ending with a virtual state matching a previously compiled trace diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -83,7 +83,10 @@ if descr.original_jitcell_token is not original_jitcell_token: assert descr.original_jitcell_token is not None original_jitcell_token.record_jump_to(descr.original_jitcell_token) - descr.exported_state = None + # exported_state is clear by optimizeopt when the short preamble is + # constrcucted. if that did not happen the label should not show up + # in a trace that will be used + assert descr.exported_state is None op._descr = None # clear reference, mostly for tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: @@ -203,7 +206,8 @@ part = create_empty_loop(metainterp) part.inputargs = inputargs[:] part.start_resumedescr = start_resumedescr - h_ops = history.operations + h_ops = history.operations + part.operations = [partial_trace.operations[-1]] + \ [h_ops[i].clone() for i in range(start, len(h_ops))] + \ [ResOperation(rop.JUMP, jumpargs, None, descr=loop_jitcell_token)] diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -89,14 +89,18 @@ if not jumpop: return if self.jump_to_already_compiled_trace(jumpop): + # Found a compiled trace to jump to + if self.did_import: + + self.close_bridge(start_label) + self.finilize_short_preamble(start_label) return - # Failed to find a compiled trace to jump to, produce a label instead cell_token = jumpop.getdescr() assert isinstance(cell_token, JitCellToken) stop_label = ResOperation(rop.LABEL, jumpop.getarglist(), None, TargetToken(cell_token)) - - if not self.did_peel_one: # Enforce the previous behaviour of always peeling exactly one iteration (for now) + + if not self.did_import: # Enforce the previous behaviour of always peeling exactly one iteration (for now) self.optimizer.flush() KillHugeIntBounds(self.optimizer).apply() @@ -109,7 +113,6 @@ self.close_loop(jumpop) self.finilize_short_preamble(start_label) - start_label.getdescr().short_preamble = self.short def export_state(self, targetop): original_jump_args = targetop.getarglist() @@ -156,7 +159,7 @@ inputarg_setup_ops, self.optimizer) def import_state(self, targetop): - self.did_peel_one = False + self.did_import = False if not targetop: # FIXME: Set up some sort of empty state with no virtuals? return @@ -168,7 +171,7 @@ if not exported_state: # FIXME: Set up some sort of empty state with no virtuals return - self.did_peel_one = True + self.did_import = True self.short = target_token.short_preamble self.short_seen = {} @@ -216,7 +219,28 @@ self.optimizer.flush() self.optimizer.emitting_dissabled = False - def close_loop(self, jumpop): + def close_bridge(self, start_label): + inputargs = self.inputargs + short_jumpargs = inputargs[:] + + newoperations = self.optimizer.get_newoperations() + self.boxes_created_this_iteration = {} + i = 0 + while newoperations[i].getopnum() != rop.LABEL: + i += 1 + while i < len(newoperations): + op = newoperations[i] + self.boxes_created_this_iteration[op.result] = True + args = op.getarglist() + if op.is_guard(): + args = args + op.getfailargs() + for a in args: + self.import_box(a, inputargs, short_jumpargs, []) + i += 1 + newoperations = self.optimizer.get_newoperations() + self.short.append(ResOperation(rop.JUMP, short_jumpargs, None, descr=start_label.getdescr())) + + def close_loop(self, jumpop): virtual_state = self.initial_virtual_state short_inputargs = self.short[0].getarglist() constant_inputargs = self.imported_state.constant_inputargs @@ -334,6 +358,9 @@ for op in short: if op.result: op.result.forget_value() + target_token.short_preamble = self.short + target_token.exported_state = None + def FIXME_old_stuff(): preamble_optimizer = self.optimizer From noreply at buildbot.pypy.org Mon Nov 7 16:49:45 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 7 Nov 2011 16:49:45 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge default Message-ID: <20111107154945.99BF7820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48870:fd948f0bae66 Date: 2011-11-07 16:49 +0100 http://bitbucket.org/pypy/pypy/changeset/fd948f0bae66/ Log: hg merge default diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -249,6 +249,8 @@ CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -260,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -327,6 +330,7 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): def __init__(self, metainterp_sd, loop, optimizations=None): @@ -340,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -362,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -496,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self, clear=True): - self.exception_might_have_happened = True if clear: self.clear_newoperations() for op in self.loop.operations: diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -685,25 +685,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -975,17 +975,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -994,6 +991,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -1002,6 +1000,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -6360,12 +6375,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6375,6 +6393,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -529,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -543,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' Author: Hakan Ardo Branch: jit-targets Changeset: r48871:7b0d8d8b3d9b Date: 2011-11-07 16:57 +0100 http://bitbucket.org/pypy/pypy/changeset/7b0d8d8b3d9b/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -1733,7 +1733,7 @@ self.check_loop_count(5) self.check_resops({'guard_class': 2, 'int_gt': 4, 'getfield_gc': 4, 'guard_true': 4, - 'int_sub': 4, 'jump': 4, 'int_mul': 2, + 'int_sub': 4, 'jump': 2, 'int_mul': 2, 'int_add': 2}) def test_multiple_specialied_versions_array(self): From noreply at buildbot.pypy.org Mon Nov 7 17:53:42 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 7 Nov 2011 17:53:42 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Bah. Fix. Message-ID: <20111107165342.EEFE3820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48872:50c584a30bb4 Date: 2011-11-07 17:53 +0100 http://bitbucket.org/pypy/pypy/changeset/50c584a30bb4/ Log: Bah. Fix. diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -87,6 +87,8 @@ # constrcucted. if that did not happen the label should not show up # in a trace that will be used assert descr.exported_state is None + if not we_are_translated(): + op._descr_wref = weakref.ref(op._descr) op._descr = None # clear reference, mostly for tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -26,6 +26,13 @@ def is_interesting_guard(op): return hasattr(op.getdescr(), '_debug_suboperations') +def getdescr(op): + if op._descr is not None: + return op._descr + if hasattr(op, '_descr_wref'): + return op._descr_wref() + return None + class ResOpGraphPage(GraphPage): @@ -77,7 +84,7 @@ mergepointblock = i elif op.getopnum() == rop.LABEL: self.mark_starter(graphindex, i) - self.target_tokens[op.getdescr()] = (graphindex, i) + self.target_tokens[getdescr(op)] = (graphindex, i) mergepointblock = i else: if mergepointblock is not None: @@ -172,8 +179,8 @@ (graphindex, opindex)) break if op.getopnum() == rop.JUMP: - tgt_descr = op.getdescr() - if tgt_descr in self.target_tokens: + tgt_descr = getdescr(op) + if tgt_descr is not None and tgt_descr in self.target_tokens: self.genedge((graphindex, opstartindex), self.target_tokens[tgt_descr], weight="0") From noreply at buildbot.pypy.org Mon Nov 7 21:01:02 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 7 Nov 2011 21:01:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: Remove dict.has_key, and switch dict.{keys, values, items} to return views and remove the view* methods. Message-ID: <20111107200102.60101820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48873:866494936d4f Date: 2011-11-07 15:00 -0500 http://bitbucket.org/pypy/pypy/changeset/866494936d4f/ Log: Remove dict.has_key, and switch dict.{keys,values,items} to return views and remove the view* methods. diff --git a/lib_pypy/_structseq.py b/lib_pypy/_structseq.py --- a/lib_pypy/_structseq.py +++ b/lib_pypy/_structseq.py @@ -43,7 +43,7 @@ field.__name__ = name dict['n_fields'] = len(fields_by_index) - extra_fields = fields_by_index.items() + extra_fields = list(fields_by_index.items()) extra_fields.sort() n_sequence_fields = 0 while extra_fields and extra_fields[0][0] == n_sequence_fields: diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -199,11 +199,11 @@ def int_w(self, space): raise OperationError(space.w_TypeError, typed_unwrap_error_msg(space, "integer", self)) - + def uint_w(self, space): raise OperationError(space.w_TypeError, typed_unwrap_error_msg(space, "integer", self)) - + def bigint_w(self, space): raise OperationError(space.w_TypeError, typed_unwrap_error_msg(space, "integer", self)) @@ -543,9 +543,15 @@ def export_builtin_exceptions(self): """NOT_RPYTHON""" w_dic = self.exceptions_module.getdict(self) - w_keys = self.call_method(w_dic, "keys") exc_types_w = {} - for w_name in self.unpackiterable(w_keys): + w_iter = self.iter(w_dic) + while True: + try: + w_name = self.next(w_iter) + except OperationError, e: + if not e.match(self, self.w_StopIteration): + raise + break name = self.str_w(w_name) if not name.startswith('__'): excname = name diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -57,7 +57,7 @@ if self.w_initialdict is None: Module.init(self, space) if not self.lazy and self.w_initialdict is None: - self.w_initialdict = space.call_method(self.w_dict, 'items') + self.w_initialdict = space.call_method(self.w_dict, 'copy') def get_applevel_name(cls): @@ -121,7 +121,7 @@ w_value = self.get(name) space.setitem(self.w_dict, space.new_interned_str(name), w_value) self.lazy = False - self.w_initialdict = space.call_method(self.w_dict, 'items') + self.w_initialdict = space.call_method(self.w_dict, 'copy') return self.w_dict def _freeze_(self): diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -477,7 +477,7 @@ class _UnwrappedIteratorMixin: _mixin_ = True - + def __init__(self, space, strategy, dictimplementation): IteratorImplementation.__init__(self, space, dictimplementation) self.iterator = strategy.unerase(dictimplementation.dstorage).iteritems() @@ -601,8 +601,6 @@ def contains__DictMulti_ANY(space, w_dict, w_key): return space.newbool(w_dict.getitem(w_key) is not None) -dict_has_key__DictMulti_ANY = contains__DictMulti_ANY - def iter__DictMulti(space, w_dict): return W_DictMultiIterObject(space, w_dict.iter(), KEYSITER) @@ -672,14 +670,15 @@ update1_dict_dict(space, w_new, w_self) return w_new + def dict_items__DictMulti(space, w_self): - return space.newlist(w_self.items()) + return W_DictViewItemsObject(space, w_self) def dict_keys__DictMulti(space, w_self): - return space.newlist(w_self.keys()) + return W_DictViewKeysObject(space, w_self) def dict_values__DictMulti(space, w_self): - return space.newlist(w_self.values()) + return W_DictViewValuesObject(space, w_self) def dict_iteritems__DictMulti(space, w_self): return W_DictMultiIterObject(space, w_self.iter(), ITEMSITER) @@ -690,15 +689,6 @@ def dict_itervalues__DictMulti(space, w_self): return W_DictMultiIterObject(space, w_self.iter(), VALUESITER) -def dict_viewitems__DictMulti(space, w_self): - return W_DictViewItemsObject(space, w_self) - -def dict_viewkeys__DictMulti(space, w_self): - return W_DictViewKeysObject(space, w_self) - -def dict_viewvalues__DictMulti(space, w_self): - return W_DictViewValuesObject(space, w_self) - def dict_clear__DictMulti(space, w_self): w_self.clear() diff --git a/pypy/objspace/std/dicttype.py b/pypy/objspace/std/dicttype.py --- a/pypy/objspace/std/dicttype.py +++ b/pypy/objspace/std/dicttype.py @@ -7,14 +7,11 @@ dict_copy = SMM('copy', 1, doc='D.copy() -> a shallow copy of D') dict_items = SMM('items', 1, - doc="D.items() -> list of D's (key, value) pairs, as" - ' 2-tuples') + doc="D.items() -> a set-like object providing a view on D's item") dict_keys = SMM('keys', 1, - doc="D.keys() -> list of D's keys") + doc="D.keys() -> a set-like object providing a view on D's keys") dict_values = SMM('values', 1, - doc="D.values() -> list of D's values") -dict_has_key = SMM('has_key', 2, - doc='D.has_key(k) -> True if D has a key k, else False') + doc="D.values() -> an object providing a view on D's values") dict_clear = SMM('clear', 1, doc='D.clear() -> None. Remove all items from D.') dict_get = SMM('get', 3, defaults=(None,), @@ -43,12 +40,6 @@ doc='D.iterkeys() -> an iterator over the keys of D') dict_itervalues = SMM('itervalues', 1, doc='D.itervalues() -> an iterator over the values of D') -dict_viewkeys = SMM('viewkeys', 1, - doc="D.viewkeys() -> a set-like object providing a view on D's keys") -dict_viewitems = SMM('viewitems', 1, - doc="D.viewitems() -> a set-like object providing a view on D's items") -dict_viewvalues = SMM('viewvalues', 1, - doc="D.viewvalues() -> an object providing a view on D's values") dict_reversed = SMM('__reversed__', 1) def dict_reversed__ANY(space, w_dict): @@ -81,10 +72,10 @@ currently_in_repr[dict_id] = 1 try: items = [] - # XXX for now, we cannot use iteritems() at app-level because - # we want a reasonable result instead of a RuntimeError + # XXX for now, we cannot use items() withut list at app-level + # because we want a reasonable result instead of a RuntimeError # even if the dict is mutated by the repr() in the loop. - for k, v in dict.items(d): + for k, v in list(dict.items(d)): items.append(repr(k) + ": " + repr(v)) return "{" + ', '.join(items) + "}" finally: diff --git a/pypy/objspace/std/test/test_dictmultiobject.py b/pypy/objspace/std/test/test_dictmultiobject.py --- a/pypy/objspace/std/test/test_dictmultiobject.py +++ b/pypy/objspace/std/test/test_dictmultiobject.py @@ -175,14 +175,9 @@ assert len(dd) == 1 raises(KeyError, dd.pop, 33) - def test_has_key(self): - d = {1: 2, 3: 4} - assert d.has_key(1) - assert not d.has_key(33) - def test_items(self): d = {1: 2, 3: 4} - its = d.items() + its = list(d.items()) its.sort() assert its == [(1, 2), (3, 4)] @@ -206,11 +201,11 @@ values = [] for k in d.itervalues(): values.append(k) - assert values == d.values() + assert values == list(d.values()) def test_keys(self): d = {1: 2, 3: 4} - kys = d.keys() + kys = list(d.keys()) kys.sort() assert kys == [1, 3] @@ -323,9 +318,9 @@ def test_values(self): d = {1: 2, 3: 4} - vals = d.values() + vals = list(d.values()) vals.sort() - assert vals == [2,4] + assert vals == [2, 4] def test_eq(self): d1 = {1: 2, 3: 4} @@ -551,9 +546,9 @@ def test_empty_dict(self): d = {} raises(KeyError, d.popitem) - assert d.items() == [] - assert d.values() == [] - assert d.keys() == [] + assert list(d.items()) == [] + assert list(d.values()) == [] + assert list(d.keys()) == [] class AppTest_DictMultiObject(AppTest_DictObject): @@ -570,7 +565,7 @@ a = A() s = S("abc") setattr(a, s, 42) - key = a.__dict__.keys()[0] + key = next(iter(a.__dict__.keys())) assert key == s assert key is not s assert type(key) is str @@ -590,24 +585,24 @@ class AppTestDictViews: def test_dictview(self): d = {1: 2, 3: 4} - assert len(d.viewkeys()) == 2 - assert len(d.viewitems()) == 2 - assert len(d.viewvalues()) == 2 + assert len(d.keys()) == 2 + assert len(d.items()) == 2 + assert len(d.values()) == 2 def test_constructors_not_callable(self): - kt = type({}.viewkeys()) + kt = type({}.keys()) raises(TypeError, kt, {}) raises(TypeError, kt) - it = type({}.viewitems()) + it = type({}.items()) raises(TypeError, it, {}) raises(TypeError, it) - vt = type({}.viewvalues()) + vt = type({}.values()) raises(TypeError, vt, {}) raises(TypeError, vt) def test_dict_keys(self): d = {1: 10, "a": "ABC"} - keys = d.viewkeys() + keys = d.keys() assert len(keys) == 2 assert set(keys) == set([1, "a"]) assert keys == set([1, "a"]) @@ -619,15 +614,15 @@ assert "a" in keys assert 10 not in keys assert "Z" not in keys - assert d.viewkeys() == d.viewkeys() + assert d.keys() == d.keys() e = {1: 11, "a": "def"} - assert d.viewkeys() == e.viewkeys() + assert d.keys() == e.keys() del e["a"] - assert d.viewkeys() != e.viewkeys() + assert d.keys() != e.keys() def test_dict_items(self): d = {1: 10, "a": "ABC"} - items = d.viewitems() + items = d.items() assert len(items) == 2 assert set(items) == set([(1, 10), ("a", "ABC")]) assert items == set([(1, 10), ("a", "ABC")]) @@ -642,36 +637,36 @@ assert () not in items assert (1,) not in items assert (1, 2, 3) not in items - assert d.viewitems() == d.viewitems() + assert d.items() == d.items() e = d.copy() - assert d.viewitems() == e.viewitems() + assert d.items() == e.items() e["a"] = "def" - assert d.viewitems() != e.viewitems() + assert d.items() != e.items() def test_dict_mixed_keys_items(self): d = {(1, 1): 11, (2, 2): 22} e = {1: 1, 2: 2} - assert d.viewkeys() == e.viewitems() - assert d.viewitems() != e.viewkeys() + assert d.keys() == e.items() + assert d.items() != e.keys() def test_dict_values(self): d = {1: 10, "a": "ABC"} - values = d.viewvalues() + values = d.values() assert set(values) == set([10, "ABC"]) assert len(values) == 2 def test_dict_repr(self): d = {1: 10, "a": "ABC"} assert isinstance(repr(d), str) - r = repr(d.viewitems()) + r = repr(d.items()) assert isinstance(r, str) assert (r == "dict_items([('a', 'ABC'), (1, 10)])" or r == "dict_items([(1, 10), ('a', 'ABC')])") - r = repr(d.viewkeys()) + r = repr(d.keys()) assert isinstance(r, str) assert (r == "dict_keys(['a', 1])" or r == "dict_keys([1, 'a'])") - r = repr(d.viewvalues()) + r = repr(d.values()) assert isinstance(r, str) assert (r == "dict_values(['ABC', 10])" or r == "dict_values([10, 'ABC'])") @@ -680,53 +675,53 @@ d1 = {'a': 1, 'b': 2} d2 = {'b': 3, 'c': 2} d3 = {'d': 4, 'e': 5} - assert d1.viewkeys() & d1.viewkeys() == set('ab') - assert d1.viewkeys() & d2.viewkeys() == set('b') - assert d1.viewkeys() & d3.viewkeys() == set() - assert d1.viewkeys() & set(d1.viewkeys()) == set('ab') - assert d1.viewkeys() & set(d2.viewkeys()) == set('b') - assert d1.viewkeys() & set(d3.viewkeys()) == set() + assert d1.keys() & d1.keys() == set('ab') + assert d1.keys() & d2.keys() == set('b') + assert d1.keys() & d3.keys() == set() + assert d1.keys() & set(d1.keys()) == set('ab') + assert d1.keys() & set(d2.keys()) == set('b') + assert d1.keys() & set(d3.keys()) == set() - assert d1.viewkeys() | d1.viewkeys() == set('ab') - assert d1.viewkeys() | d2.viewkeys() == set('abc') - assert d1.viewkeys() | d3.viewkeys() == set('abde') - assert d1.viewkeys() | set(d1.viewkeys()) == set('ab') - assert d1.viewkeys() | set(d2.viewkeys()) == set('abc') - assert d1.viewkeys() | set(d3.viewkeys()) == set('abde') + assert d1.keys() | d1.keys() == set('ab') + assert d1.keys() | d2.keys() == set('abc') + assert d1.keys() | d3.keys() == set('abde') + assert d1.keys() | set(d1.keys()) == set('ab') + assert d1.keys() | set(d2.keys()) == set('abc') + assert d1.keys() | set(d3.keys()) == set('abde') - assert d1.viewkeys() ^ d1.viewkeys() == set() - assert d1.viewkeys() ^ d2.viewkeys() == set('ac') - assert d1.viewkeys() ^ d3.viewkeys() == set('abde') - assert d1.viewkeys() ^ set(d1.viewkeys()) == set() - assert d1.viewkeys() ^ set(d2.viewkeys()) == set('ac') - assert d1.viewkeys() ^ set(d3.viewkeys()) == set('abde') + assert d1.keys() ^ d1.keys() == set() + assert d1.keys() ^ d2.keys() == set('ac') + assert d1.keys() ^ d3.keys() == set('abde') + assert d1.keys() ^ set(d1.keys()) == set() + assert d1.keys() ^ set(d2.keys()) == set('ac') + assert d1.keys() ^ set(d3.keys()) == set('abde') def test_items_set_operations(self): d1 = {'a': 1, 'b': 2} d2 = {'a': 2, 'b': 2} d3 = {'d': 4, 'e': 5} - assert d1.viewitems() & d1.viewitems() == set([('a', 1), ('b', 2)]) - assert d1.viewitems() & d2.viewitems() == set([('b', 2)]) - assert d1.viewitems() & d3.viewitems() == set() - assert d1.viewitems() & set(d1.viewitems()) == set([('a', 1), ('b', 2)]) - assert d1.viewitems() & set(d2.viewitems()) == set([('b', 2)]) - assert d1.viewitems() & set(d3.viewitems()) == set() + assert d1.items() & d1.items() == set([('a', 1), ('b', 2)]) + assert d1.items() & d2.items() == set([('b', 2)]) + assert d1.items() & d3.items() == set() + assert d1.items() & set(d1.items()) == set([('a', 1), ('b', 2)]) + assert d1.items() & set(d2.items()) == set([('b', 2)]) + assert d1.items() & set(d3.items()) == set() - assert d1.viewitems() | d1.viewitems() == set([('a', 1), ('b', 2)]) - assert (d1.viewitems() | d2.viewitems() == + assert d1.items() | d1.items() == set([('a', 1), ('b', 2)]) + assert (d1.items() | d2.items() == set([('a', 1), ('a', 2), ('b', 2)])) - assert (d1.viewitems() | d3.viewitems() == + assert (d1.items() | d3.items() == set([('a', 1), ('b', 2), ('d', 4), ('e', 5)])) - assert (d1.viewitems() | set(d1.viewitems()) == + assert (d1.items() | set(d1.items()) == set([('a', 1), ('b', 2)])) - assert (d1.viewitems() | set(d2.viewitems()) == + assert (d1.items() | set(d2.items()) == set([('a', 1), ('a', 2), ('b', 2)])) - assert (d1.viewitems() | set(d3.viewitems()) == + assert (d1.items() | set(d3.items()) == set([('a', 1), ('b', 2), ('d', 4), ('e', 5)])) - assert d1.viewitems() ^ d1.viewitems() == set() - assert d1.viewitems() ^ d2.viewitems() == set([('a', 1), ('a', 2)]) - assert (d1.viewitems() ^ d3.viewitems() == + assert d1.items() ^ d1.items() == set() + assert d1.items() ^ d2.items() == set([('a', 1), ('a', 2)]) + assert (d1.items() ^ d3.items() == set([('a', 1), ('b', 2), ('d', 4), ('e', 5)])) @@ -841,6 +836,7 @@ w_float = float StringObjectCls = FakeString w_dict = W_DictMultiObject + w_text = str iter = iter fixedview = list listview = list From noreply at buildbot.pypy.org Mon Nov 7 21:22:33 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix tests in module/rctime Message-ID: <20111107202233.D201C820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48874:8fe40c886021 Date: 2011-11-07 21:01 +0100 http://bitbucket.org/pypy/pypy/changeset/8fe40c886021/ Log: Fix tests in module/rctime diff --git a/pypy/module/_sre/interp_sre.py b/pypy/module/_sre/interp_sre.py --- a/pypy/module/_sre/interp_sre.py +++ b/pypy/module/_sre/interp_sre.py @@ -78,8 +78,7 @@ return space.newtuple(grps) def import_re(space): - w_builtin = space.getbuiltinmodule('__builtin__') - w_import = space.getattr(w_builtin, space.wrap("__import__")) + w_import = space.getattr(space.builtin, space.wrap("__import__")) return space.call_function(w_import, space.wrap("re")) def matchcontext(space, ctx): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -56,7 +56,7 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.bytes_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.bytes_w(w_fname) return func(fname, *args) return dispatch @@ -512,19 +512,17 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrapbytes(key), space.wrapbytes(value)) - at unwrap_spec(name=str, value=str) -def putenv(space, name, value): +def putenv(space, w_name, w_value): """Change or add an environment variable.""" try: - os.environ[name] = value + dispatch_filename_2(rposix.putenv)(space, w_name, w_value) except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) -def unsetenv(space, name): +def unsetenv(space, w_name): """Delete an environment variable.""" try: - del os.environ[name] + dispatch_filename(rposix.unsetenv)(space, w_name) except KeyError: pass except OSError, e: diff --git a/pypy/module/rctime/app_time.py b/pypy/module/rctime/app_time.py --- a/pypy/module/rctime/app_time.py +++ b/pypy/module/rctime/app_time.py @@ -24,7 +24,7 @@ (same as strftime()).""" import _strptime # from the CPython standard library - return _strptime._strptime(string, format)[0] + return _strptime._strptime_time(string, format) __doc__ = """This module provides various functions to manipulate time values. diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -199,7 +199,7 @@ # rely on it. if org_TZ is not None: os.environ['TZ'] = org_TZ - elif os.environ.has_key('TZ'): + elif 'TZ' in os.environ: del os.environ['TZ'] rctime.tzset() @@ -279,7 +279,7 @@ 'j', 'm', 'M', 'p', 'S', 'U', 'w', 'W', 'x', 'X', 'y', 'Y', 'Z', '%'): format = ' %' + directive - print format + print(format) rctime.strptime(rctime.strftime(format, tt), format) def test_pickle(self): diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -163,3 +163,18 @@ return nt._getfullpathname(path) else: return nt._getfullpathname(path.as_bytes()) + + at specialize.argtype(0, 1) +def putenv(name, value): + if isinstance(name, str): + os.environ[name] = value + else: + os.environ[name.as_bytes()] = value.as_bytes() + + at specialize.argtype(0) +def unsetenv(name): + if isinstance(name, str): + del os.environ[name] + else: + del os.environ[name.as_bytes()] + From noreply at buildbot.pypy.org Mon Nov 7 21:22:35 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix most tests in __builtin__ module, Message-ID: <20111107202235.1647E820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48875:22fa14064102 Date: 2011-11-07 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/22fa14064102/ Log: Fix most tests in __builtin__ module, including a nasty error in type dictonaries: After class C: x=42 C.x correctly succeeds, but C.__dict__['x'] was not found! diff --git a/pypy/module/__builtin__/interp_memoryview.py b/pypy/module/__builtin__/interp_memoryview.py --- a/pypy/module/__builtin__/interp_memoryview.py +++ b/pypy/module/__builtin__/interp_memoryview.py @@ -72,7 +72,7 @@ return space.wrap(self.buf) def descr_tobytes(self, space): - return space.wrap(self.as_str()) + return space.wrapbytes(self.as_str()) def descr_tolist(self, space): buf = self.buf diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -215,7 +215,7 @@ table of interned strings whose purpose is to speed up dictionary lookups. Return the string itself or the previously interned string object with the same value.""" - if space.is_w(space.type(w_str), space.w_str): + if space.is_w(space.type(w_str), space.w_unicode): return space.new_interned_w_str(w_str) raise OperationError(space.w_TypeError, space.wrap("intern() argument must be string.")) diff --git a/pypy/module/__builtin__/test/test_abstractinst.py b/pypy/module/__builtin__/test/test_abstractinst.py --- a/pypy/module/__builtin__/test/test_abstractinst.py +++ b/pypy/module/__builtin__/test/test_abstractinst.py @@ -195,10 +195,12 @@ """Implement issubclass(sub, cls).""" candidates = cls.__dict__.get("__subclass__", set()) | set([cls]) return any(c in candidates for c in sub.mro()) - class Integer: - __metaclass__ = ABC + # Equivalent to:: + # class Integer(metaclass=ABC): + # __subclass__ = set([int]) + # But with a syntax compatible with 2.x + Integer = ABC('Integer', (), dict(__subclass__=set([int]))) - __subclass__ = set([int]) assert issubclass(int, Integer) assert issubclass(int, (Integer,)) diff --git a/pypy/module/__builtin__/test/test_buffer.py b/pypy/module/__builtin__/test/test_buffer.py --- a/pypy/module/__builtin__/test/test_buffer.py +++ b/pypy/module/__builtin__/test/test_buffer.py @@ -12,21 +12,21 @@ if sys.maxunicode == 65535: # UCS2 build assert len(b) == 4 if sys.byteorder == "big": - assert b[0:4] == "\x00a\x00b" + assert b[0:4] == b"\x00a\x00b" else: - assert b[0:4] == "a\x00b\x00" + assert b[0:4] == b"a\x00b\x00" else: # UCS4 build assert len(b) == 8 if sys.byteorder == "big": - assert b[0:8] == "\x00\x00\x00a\x00\x00\x00b" + assert b[0:8] == b"\x00\x00\x00a\x00\x00\x00b" else: - assert b[0:8] == "a\x00\x00\x00b\x00\x00\x00" + assert b[0:8] == b"a\x00\x00\x00b\x00\x00\x00" def test_array_buffer(self): import array b = buffer(array.array("B", [1, 2, 3])) assert len(b) == 3 - assert b[0:3] == "\x01\x02\x03" + assert b[0:3] == b"\x01\x02\x03" def test_nonzero(self): assert buffer('\x00') @@ -36,68 +36,68 @@ assert not buffer(array.array("B", [])) def test_str(self): - assert str(buffer('hello')) == 'hello' + assert str(buffer(b'hello')) == 'hello' def test_repr(self): # from 2.5.2 lib tests - assert repr(buffer('hello')).startswith(' buffer('ab')) - assert buffer('ab') >= buffer('ab') - assert buffer('ab') != buffer('abc') - assert buffer('ab') < buffer('abc') - assert buffer('ab') <= buffer('ab') - assert buffer('ab') > buffer('aa') - assert buffer('ab') >= buffer('ab') + assert buffer(b'ab') != b'ab' + assert not (b'ab' == buffer(b'ab')) + assert buffer(b'ab') == buffer(b'ab') + assert not (buffer(b'ab') != buffer(b'ab')) + assert not (buffer(b'ab') < buffer(b'ab')) + assert buffer(b'ab') <= buffer(b'ab') + assert not (buffer(b'ab') > buffer(b'ab')) + assert buffer(b'ab') >= buffer(b'ab') + assert buffer(b'ab') != buffer(b'abc') + assert buffer(b'ab') < buffer(b'abc') + assert buffer(b'ab') <= buffer(b'ab') + assert buffer(b'ab') > buffer(b'aa') + assert buffer(b'ab') >= buffer(b'ab') def test_hash(self): - assert hash(buffer('hello')) == hash('hello') + assert hash(buffer(b'hello')) == hash(b'hello') def test_mul(self): - assert buffer('ab') * 5 == 'ababababab' - assert buffer('ab') * (-2) == '' - assert 5 * buffer('ab') == 'ababababab' - assert (-2) * buffer('ab') == '' + assert buffer(b'ab') * 5 == b'ababababab' + assert buffer(b'ab') * (-2) == b'' + assert 5 * buffer(b'ab') == b'ababababab' + assert (-2) * buffer(b'ab') == b'' def test_offset_size(self): - b = buffer('hello world', 6) + b = buffer(b'hello world', 6) assert len(b) == 5 - assert b[0] == 'w' - assert b[:] == 'world' + assert b[0] == b'w' + assert b[:] == b'world' raises(IndexError, 'b[5]') b = buffer(b, 2) assert len(b) == 3 - assert b[0] == 'r' - assert b[:] == 'rld' + assert b[0] == b'r' + assert b[:] == b'rld' raises(IndexError, 'b[3]') - b = buffer('hello world', 1, 8) + b = buffer(b'hello world', 1, 8) assert len(b) == 8 - assert b[0] == 'e' - assert b[:] == 'ello wor' + assert b[0] == b'e' + assert b[:] == b'ello wor' raises(IndexError, 'b[8]') b = buffer(b, 2, 3) assert len(b) == 3 - assert b[2] == ' ' - assert b[:] == 'lo ' + assert b[2] == b' ' + assert b[:] == b'lo ' raises(IndexError, 'b[3]') b = buffer('hello world', 55) assert len(b) == 0 - assert b[:] == '' - b = buffer('hello world', 6, 999) + assert b[:] == b'' + b = buffer(b'hello world', 6, 999) assert len(b) == 5 - assert b[:] == 'world' + assert b[:] == b'world' raises(ValueError, buffer, "abc", -1) raises(ValueError, buffer, "abc", 0, -2) @@ -105,17 +105,17 @@ def test_rw_offset_size(self): import array - a = array.array("c", 'hello world') + a = array.array("b", b'hello world') b = buffer(a, 6) assert len(b) == 5 - assert b[0] == 'w' - assert b[:] == 'world' + assert b[0] == b'w' + assert b[:] == b'world' raises(IndexError, 'b[5]') - b[0] = 'W' - assert str(b) == 'World' - assert a.tostring() == 'hello World' - b[:] = '12345' - assert a.tostring() == 'hello 12345' + b[0] = b'W' + assert str(b) == b'World' + assert a.tostring() == b'hello World' + b[:] = b'12345' + assert a.tostring() == b'hello 12345' raises(IndexError, 'b[5] = "."') b = buffer(b, 2) @@ -161,7 +161,7 @@ def test_slice(self): # Test extended slicing by comparing with list slicing. - s = "".join(chr(c) for c in list(range(255, -1, -1))) + s = bytes(c for c in list(range(255, -1, -1))) b = buffer(s) indices = (0, None, 1, 3, 19, 300, -1, -2, -31, -300) for start in indices: @@ -172,8 +172,8 @@ class AppTestMemoryView: def test_basic(self): - v = memoryview("abc") - assert v.tobytes() == "abc" + v = memoryview(b"abc") + assert v.tobytes() == b"abc" assert len(v) == 3 assert list(v) == ['a', 'b', 'c'] assert v.tolist() == [97, 98, 99] @@ -186,17 +186,17 @@ assert len(w) == 2 def test_rw(self): - data = bytearray('abcefg') + data = bytearray(b'abcefg') v = memoryview(data) assert v.readonly is False - v[0] = 'z' + v[0] = b'z' assert data == bytearray(eval("b'zbcefg'")) - v[1:4] = '123' + v[1:4] = b'123' assert data == bytearray(eval("b'z123fg'")) raises((ValueError, TypeError), "v[2] = 'spam'") def test_memoryview_attrs(self): - v = memoryview("a"*100) + v = memoryview(b"a"*100) assert v.format == "B" assert v.itemsize == 1 assert v.shape == (100,) @@ -204,13 +204,13 @@ assert v.strides == (1,) def test_suboffsets(self): - v = memoryview("a"*100) + v = memoryview(b"a"*100) assert v.suboffsets == None - v = memoryview(buffer("a"*100, 2)) + v = memoryview(buffer(b"a"*100, 2)) assert v.shape == (98,) assert v.suboffsets == None def test_compare(self): - assert memoryview("abc") == "abc" - assert memoryview("abc") == bytearray("abc") - assert memoryview("abc") != 3 + assert memoryview(b"abc") == b"abc" + assert memoryview(b"abc") == bytearray(b"abc") + assert memoryview(b"abc") != 3 diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -27,8 +27,8 @@ cls.w_safe_runtimerror = cls.space.wrap(sys.version_info < (2, 6)) def test_bytes_alias(self): - assert bytes is str - assert isinstance(eval("b'hi'"), str) + assert bytes is not str + assert isinstance(eval("b'hi'"), bytes) def test_import(self): m = __import__('pprint') @@ -73,7 +73,7 @@ def test_globals(self): d = {"foo":"bar"} - exec "def f(): return globals()" in d + exec("def f(): return globals()", d) d2 = d["f"]() assert d2 is d @@ -157,7 +157,7 @@ assert format(10, "o") == "12" assert format(10, "#o") == "0o12" assert format("hi") == "hi" - assert isinstance(format(4, u""), unicode) + assert isinstance(format(4, u""), str) def test_vars(self): def f(): @@ -208,10 +208,10 @@ def test_iter_sequence(self): raises(TypeError,iter,3) x = iter(['a','b','c']) - assert x.next() =='a' - assert x.next() =='b' - assert x.next() =='c' - raises(StopIteration,x.next) + assert next(x) =='a' + assert next(x) =='b' + assert next(x) =='c' + raises(StopIteration, next, x) def test_iter___iter__(self): # This test assumes that dict.keys() method returns keys in @@ -235,16 +235,16 @@ #self.assertRaises(TypeError,iter,[],5) #self.assertRaises(TypeError,iter,{},5) x = iter(count(),3) - assert x.next() ==1 - assert x.next() ==2 - raises(StopIteration,x.next) + assert next(x) ==1 + assert next(x) ==2 + raises(StopIteration, next, x) def test_enumerate(self): seq = range(2,4) enum = enumerate(seq) - assert enum.next() == (0, 2) - assert enum.next() == (1, 3) - raises(StopIteration, enum.next) + assert next(enum) == (0, 2) + assert next(enum) == (1, 3) + raises(StopIteration, next, enum) raises(TypeError, enumerate, 1) raises(TypeError, enumerate, None) enum = enumerate(range(5), 2) @@ -262,7 +262,7 @@ class Counter: def __init__(self): self.count = 0 - def next(self): + def __next__(self): self.count += 1 return self.count x = Counter() @@ -297,17 +297,17 @@ def test_range_up(self): x = range(2) iter_x = iter(x) - assert iter_x.next() == 0 - assert iter_x.next() == 1 - raises(StopIteration, iter_x.next) + assert next(iter_x) == 0 + assert next(iter_x) == 1 + raises(StopIteration, next, iter_x) def test_range_down(self): x = range(4,2,-1) iter_x = iter(x) - assert iter_x.next() == 4 - assert iter_x.next() == 3 - raises(StopIteration, iter_x.next) + assert next(iter_x) == 4 + assert next(iter_x) == 3 + raises(StopIteration, next, iter_x) def test_range_has_type_identity(self): assert type(range(1)) == type(range(1)) @@ -315,13 +315,12 @@ def test_range_len(self): x = range(33) assert len(x) == 33 - x = range(33.2) - assert len(x) == 33 + raises(TypeError, range, 33.2) x = range(33,0,-1) assert len(x) == 33 x = range(33,0) assert len(x) == 0 - x = range(33,0.2) + raises(TypeError, range, 33, 0.2) assert len(x) == 0 x = range(0,33) assert len(x) == 33 @@ -495,7 +494,7 @@ assert eval(co) == 3 compile("from __future__ import with_statement", "", "exec") raises(SyntaxError, compile, '-', '?', 'eval') - raises(ValueError, compile, '"\\xt"', '?', 'eval') + raises(SyntaxError, compile, '"\\xt"', '?', 'eval') raises(ValueError, compile, '1+2', '?', 'maybenot') raises(ValueError, compile, "\n", "", "exec", 0xff) raises(TypeError, compile, '1+2', 12, 34) @@ -513,7 +512,7 @@ def test_recompile_ast(self): import _ast # raise exception when node type doesn't match with compile mode - co1 = compile('print 1', '', 'exec', _ast.PyCF_ONLY_AST) + co1 = compile('print(1)', '', 'exec', _ast.PyCF_ONLY_AST) raises(TypeError, compile, co1, '', 'eval') co2 = compile('1+1', '', 'eval', _ast.PyCF_ONLY_AST) compile(co2, '', 'eval') @@ -589,39 +588,39 @@ assert firstlineno == 2 def test_print_function(self): - import __builtin__ + import builtins import sys - import StringIO - pr = getattr(__builtin__, "print") + import io + pr = getattr(builtins, "print") save = sys.stdout - out = sys.stdout = StringIO.StringIO() + out = sys.stdout = io.StringIO() try: pr("Hello,", "person!") finally: sys.stdout = save assert out.getvalue() == "Hello, person!\n" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out) assert out.getvalue() == "Hello, person!\n" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out, end="") assert out.getvalue() == "Hello, person!" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out, sep="X") assert out.getvalue() == "Hello,Xperson!\n" - out = StringIO.StringIO() + out = io.StringIO() pr(u"Hello,", u"person!", file=out) result = out.getvalue() - assert isinstance(result, unicode) + assert isinstance(result, str) assert result == u"Hello, person!\n" pr("Hello", file=None) # This works. - out = StringIO.StringIO() + out = io.StringIO() pr(None, file=out) assert out.getvalue() == "None\n" def test_print_exceptions(self): - import __builtin__ - pr = getattr(__builtin__, "print") + import builtins + pr = getattr(builtins, "print") raises(TypeError, pr, x=3) raises(TypeError, pr, end=3) raises(TypeError, pr, sep=42) diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -342,7 +342,7 @@ except ZeroDivisionError: pass else: - raise Exception, "expected ZeroDivisionError from bad property" + raise Exception("expected ZeroDivisionError from bad property") def test_property_subclass(self): class P(property): diff --git a/pypy/module/__builtin__/test/test_filter.py b/pypy/module/__builtin__/test/test_filter.py --- a/pypy/module/__builtin__/test/test_filter.py +++ b/pypy/module/__builtin__/test/test_filter.py @@ -16,22 +16,10 @@ raises(TypeError, filter, lambda x: x>3, [1], [2]) def test_filter_no_function_list(self): - assert filter(None, [1, 2, 3]) == [1, 2, 3] - - def test_filter_no_function_tuple(self): - assert filter(None, (1, 2, 3)) == (1, 2, 3) - - def test_filter_no_function_string(self): - assert filter(None, 'mystring') == 'mystring' + assert list(filter(None, [1, 2, 3])) == [1, 2, 3] def test_filter_no_function_with_bools(self): - assert filter(None, (True, False, True)) == (True, True) + assert tuple(filter(None, (True, False, True))) == (True, True) def test_filter_list(self): - assert filter(lambda x: x>3, [1, 2, 3, 4, 5]) == [4, 5] - - def test_filter_tuple(self): - assert filter(lambda x: x>3, (1, 2, 3, 4, 5)) == (4, 5) - - def test_filter_string(self): - assert filter(lambda x: x>'a', 'xyzabcd') == 'xyzbcd' + assert list(filter(lambda x: x>3, [1, 2, 3, 4, 5])) == [4, 5] diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -70,7 +70,7 @@ class B(object): def __init__(self, n): self.n = n - def next(self): + def __next__(self): self.n -= 1 if self.n == 0: raise StopIteration return self.n @@ -126,12 +126,12 @@ x = range(2, 9, 3) it = iter(x) assert iter(it) is it - assert it.next() == 2 - assert it.next() == 5 - assert it.next() == 8 - raises(StopIteration, it.next) + assert it.__next__() == 2 + assert it.__next__() == 5 + assert it.__next__() == 8 + raises(StopIteration, it.__next__) # test again, to make sure that range() is not its own iterator - assert iter(x).next() == 2 + assert iter(x).__next__() == 2 def test_range_object_with___int__(self): class A(object): @@ -143,7 +143,7 @@ assert list(range(0, 10, A())) == [0, 5] def test_range_float(self): - assert list(range(0.1, 2.0, 1.1)) == [0, 1] + raises(TypeError, range(0.1, 2.0, 1.1)) def test_range_long(self): import sys @@ -162,12 +162,12 @@ def test_reversed(self): r = reversed("hello") assert iter(r) is r - assert r.next() == "o" - assert r.next() == "l" - assert r.next() == "l" - assert r.next() == "e" - assert r.next() == "h" - raises(StopIteration, r.next) + assert r.__next__() == "o" + assert r.__next__() == "l" + assert r.__next__() == "l" + assert r.__next__() == "e" + assert r.__next__() == "h" + raises(StopIteration, r.__next__) assert list(reversed(list(reversed("hello")))) == ['h','e','l','l','o'] raises(TypeError, reversed, reversed("hello")) diff --git a/pypy/module/__builtin__/test/test_rawinput.py b/pypy/module/__builtin__/test/test_rawinput.py --- a/pypy/module/__builtin__/test/test_rawinput.py +++ b/pypy/module/__builtin__/test/test_rawinput.py @@ -1,30 +1,30 @@ +from __future__ import print_function import autopath class AppTestRawInput(): def test_input_and_raw_input(self): - import sys, StringIO - for prompt, expected in [("def:", "abc/ def:/ghi\n"), - ("", "abc/ /ghi\n"), - (42, "abc/ 42/ghi\n"), - (None, "abc/ None/ghi\n"), - (Ellipsis, "abc/ /ghi\n")]: + import sys, io + for prompt, expected in [("def:", "abc/def:/ghi\n"), + ("", "abc//ghi\n"), + (42, "abc/42/ghi\n"), + (None, "abc/None/ghi\n"), + (Ellipsis, "abc//ghi\n")]: for inputfn, inputtext, gottext in [ - (raw_input, "foo\nbar\n", "foo"), - (input, "40+2\n", 42)]: + (input, "foo\nbar\n", "foo")]: save = sys.stdin, sys.stdout try: - sys.stdin = StringIO.StringIO(inputtext) - out = sys.stdout = StringIO.StringIO() - print "abc", # softspace = 1 + sys.stdin = io.StringIO(inputtext) + out = sys.stdout = io.StringIO() + print("abc", end='') out.write('/') if prompt is Ellipsis: got = inputfn() else: got = inputfn(prompt) out.write('/') - print "ghi" + print("ghi") finally: sys.stdin, sys.stdout = save assert out.getvalue() == expected @@ -32,9 +32,9 @@ def test_softspace(self): import sys - import StringIO - fin = StringIO.StringIO() - fout = StringIO.StringIO() + import io + fin = io.StringIO() + fout = io.StringIO() fin.write("Coconuts\n") fin.seek(0) @@ -45,20 +45,20 @@ sys.stdin = fin sys.stdout = fout - print "test", - raw_input("test") + print("test", end='') + input("test") sys.stdin = sys_stdin_orig sys.stdout = sys_stdout_orig fout.seek(0) - assert fout.read() == "test test" + assert fout.read() == "testtest" def test_softspace_carryover(self): import sys - import StringIO - fin = StringIO.StringIO() - fout = StringIO.StringIO() + import io + fin = io.StringIO() + fout = io.StringIO() fin.write("Coconuts\n") fin.seek(0) @@ -69,12 +69,12 @@ sys.stdin = fin sys.stdout = fout - print "test", - raw_input("test") - print "test", + print("test", end='') + input("test") + print("test", end='') sys.stdin = sys_stdin_orig sys.stdout = sys_stdout_orig fout.seek(0) - assert fout.read() == "test testtest" + assert fout.read() == "testtesttest" diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -20,7 +20,7 @@ def getitem(self, w_dict, w_key): space = self.space w_lookup_type = space.type(w_key) - if space.is_w(w_lookup_type, space.w_str): + if space.is_w(w_lookup_type, space.w_unicode): return self.getitem_str(w_dict, space.str_w(w_key)) else: return None @@ -30,7 +30,7 @@ def setitem(self, w_dict, w_key, w_value): space = self.space - if space.is_w(space.type(w_key), space.w_str): + if space.is_w(space.type(w_key), space.w_unicode): self.setitem_str(w_dict, self.space.str_w(w_key), w_value) else: raise OperationError(space.w_TypeError, space.wrap("cannot add non-string keys to dict of a type")) @@ -60,7 +60,7 @@ def delitem(self, w_dict, w_key): space = self.space w_key_type = space.type(w_key) - if space.is_w(w_key_type, space.w_str): + if space.is_w(w_key_type, space.w_unicode): key = self.space.str_w(w_key) if not self.unerase(w_dict.dstorage).deldictvalue(space, key): raise KeyError From noreply at buildbot.pypy.org Mon Nov 7 21:22:36 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: Allow bytes source code in compile() Message-ID: <20111107202236.46803820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48876:1afdb2a14313 Date: 2011-11-07 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/1afdb2a14313/ Log: Allow bytes source code in compile() diff --git a/pypy/module/__builtin__/compiling.py b/pypy/module/__builtin__/compiling.py --- a/pypy/module/__builtin__/compiling.py +++ b/pypy/module/__builtin__/compiling.py @@ -30,6 +30,8 @@ if space.is_true(space.isinstance(w_source, w_ast_type)): ast_node = space.interp_w(ast.mod, w_source) ast_node.sync_app_attrs(space) + elif space.isinstance_w(w_source, space.w_bytes): + source_str = space.bytes_w(w_source) else: source_str = space.str_w(w_source) # This flag tells the parser to reject any coding cookies it sees. diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -509,6 +509,10 @@ code = u"# -*- coding: utf-8 -*-\npass\n" raises(SyntaxError, compile, code, "tmp", "exec") + def test_bytes_compile(self): + code = b"# -*- coding: utf-8 -*-\npass\n" + compile(code, "tmp", "exec") + def test_recompile_ast(self): import _ast # raise exception when node type doesn't match with compile mode From noreply at buildbot.pypy.org Mon Nov 7 21:22:37 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: _warnings is always required by test.regrtest Message-ID: <20111107202237.76DEF820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48877:cc4562844148 Date: 2011-11-07 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/cc4562844148/ Log: _warnings is always required by test.regrtest diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -61,7 +61,7 @@ usemodules = '', skip=None): self.basename = basename - self._usemodules = usemodules.split() + ['signal'] + self._usemodules = usemodules.split() + ['signal', '_warnings'] self._compiler = compiler self.core = core self.skip = skip From noreply at buildbot.pypy.org Mon Nov 7 21:22:38 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:38 +0100 (CET) Subject: [pypy-commit] pypy py3k: gzip needs zlib of course Message-ID: <20111107202238.A92BA820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48878:b980324f6ad7 Date: 2011-11-07 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/b980324f6ad7/ Log: gzip needs zlib of course diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -237,7 +237,7 @@ RegrTest('test_global.py', core=True), RegrTest('test_grammar.py', core=True), RegrTest('test_grp.py', skip=skip_win32), - RegrTest('test_gzip.py'), + RegrTest('test_gzip.py', usemodules='zlib'), RegrTest('test_hash.py', core=True), RegrTest('test_hashlib.py', core=True), RegrTest('test_heapq.py', core=True), From noreply at buildbot.pypy.org Mon Nov 7 21:22:39 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: zlib only deals with bytes, not str Message-ID: <20111107202239.DDE5A820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48879:0aaed69faf04 Date: 2011-11-07 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/0aaed69faf04/ Log: zlib only deals with bytes, not str diff --git a/pypy/module/zlib/interp_zlib.py b/pypy/module/zlib/interp_zlib.py --- a/pypy/module/zlib/interp_zlib.py +++ b/pypy/module/zlib/interp_zlib.py @@ -1,7 +1,7 @@ import sys from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, interp_attrproperty_bytes from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import keepalive_until_here @@ -84,7 +84,7 @@ rzlib.deflateEnd(stream) except rzlib.RZlibError, e: raise zlib_error(space, e.msg) - return space.wrap(result) + return space.wrapbytes(result) @unwrap_spec(string='bufferstr', wbits=int, bufsize=int) @@ -106,7 +106,7 @@ rzlib.inflateEnd(stream) except rzlib.RZlibError, e: raise zlib_error(space, e.msg) - return space.wrap(result) + return space.wrapbytes(result) class ZLibObject(Wrappable): @@ -179,7 +179,7 @@ self.unlock() except rzlib.RZlibError, e: raise zlib_error(self.space, e.msg) - return self.space.wrap(result) + return self.space.wrapbytes(result) @unwrap_spec(mode=int) @@ -209,7 +209,7 @@ self.unlock() except rzlib.RZlibError, e: raise zlib_error(self.space, e.msg) - return self.space.wrap(result) + return self.space.wrapbytes(result) @unwrap_spec(level=int, method=int, wbits=int, memLevel=int, strategy=int) @@ -302,11 +302,11 @@ assert unused_start >= 0 tail = data[unused_start:] if finished: - self.unconsumed_tail = '' + self.unconsumed_tail = b'' self.unused_data = tail else: self.unconsumed_tail = tail - return self.space.wrap(string) + return self.space.wrapbytes(string) @unwrap_spec(length=int) @@ -324,7 +324,7 @@ # however CPython's zlib module does not behave like that. # I could not figure out a case in which flush() in CPython # doesn't simply return an empty string without complaining. - return self.space.wrap("") + return self.space.wrapbytes("") @unwrap_spec(wbits=int) @@ -343,8 +343,8 @@ __new__ = interp2app(Decompress___new__), decompress = interp2app(Decompress.decompress), flush = interp2app(Decompress.flush), - unused_data = interp_attrproperty('unused_data', Decompress), - unconsumed_tail = interp_attrproperty('unconsumed_tail', Decompress), + unused_data = interp_attrproperty_bytes('unused_data', Decompress), + unconsumed_tail = interp_attrproperty_bytes('unconsumed_tail', Decompress), __doc__ = """decompressobj([wbits]) -- Return a decompressor object. Optional arg wbits is the window buffer size. diff --git a/pypy/module/zlib/test/test_zlib.py b/pypy/module/zlib/test/test_zlib.py --- a/pypy/module/zlib/test/test_zlib.py +++ b/pypy/module/zlib/test/test_zlib.py @@ -34,9 +34,9 @@ import zlib return zlib """) - expanded = 'some bytes which will be compressed' - cls.w_expanded = cls.space.wrap(expanded) - cls.w_compressed = cls.space.wrap(zlib.compress(expanded)) + expanded = b'some bytes which will be compressed' + cls.w_expanded = cls.space.wrapbytes(expanded) + cls.w_compressed = cls.space.wrapbytes(zlib.compress(expanded)) def test_error(self): @@ -52,9 +52,9 @@ return it as a signed 32 bit integer. On 64-bit machines too (it is a bug in CPython < 2.6 to return unsigned values in this case). """ - assert self.zlib.crc32('') == 0 - assert self.zlib.crc32('\0') == -771559539 - assert self.zlib.crc32('hello, world.') == -936931198 + assert self.zlib.crc32(b'') == 0 + assert self.zlib.crc32(b'\0') == -771559539 + assert self.zlib.crc32(b'hello, world.') == -936931198 def test_crc32_start_value(self): @@ -62,29 +62,29 @@ When called with a string and an integer, zlib.crc32 should compute the CRC32 of the string using the integer as the starting value. """ - assert self.zlib.crc32('', 42) == 42 - assert self.zlib.crc32('\0', 42) == 163128923 - assert self.zlib.crc32('hello, world.', 42) == 1090960721 - hello = 'hello, ' + assert self.zlib.crc32(b'', 42) == 42 + assert self.zlib.crc32(b'\0', 42) == 163128923 + assert self.zlib.crc32(b'hello, world.', 42) == 1090960721 + hello = b'hello, ' hellocrc = self.zlib.crc32(hello) - world = 'world.' + world = b'world.' helloworldcrc = self.zlib.crc32(world, hellocrc) assert helloworldcrc == self.zlib.crc32(hello + world) def test_crc32_negative_start(self): - v = self.zlib.crc32('', -1) + v = self.zlib.crc32(b'', -1) assert v == -1 def test_crc32_negative_long_start(self): - v = self.zlib.crc32('', -1L) + v = self.zlib.crc32(b'', -1L) assert v == -1 - assert self.zlib.crc32('foo', -99999999999999999999999) == 1611238463 + assert self.zlib.crc32(b'foo', -99999999999999999999999) == 1611238463 def test_crc32_long_start(self): import sys - v = self.zlib.crc32('', sys.maxint*2) + v = self.zlib.crc32(b'', sys.maxint*2) assert v == -2 - assert self.zlib.crc32('foo', 99999999999999999999999) == 1635107045 + assert self.zlib.crc32(b'foo', 99999999999999999999999) == 1635107045 def test_adler32(self): """ @@ -93,10 +93,10 @@ On 64-bit machines too (it is a bug in CPython < 2.6 to return unsigned values in this case). """ - assert self.zlib.adler32('') == 1 - assert self.zlib.adler32('\0') == 65537 - assert self.zlib.adler32('hello, world.') == 571147447 - assert self.zlib.adler32('x' * 23) == -2122904887 + assert self.zlib.adler32(b'') == 1 + assert self.zlib.adler32(b'\0') == 65537 + assert self.zlib.adler32(b'hello, world.') == 571147447 + assert self.zlib.adler32(b'x' * 23) == -2122904887 def test_adler32_start_value(self): @@ -105,18 +105,18 @@ the adler 32 checksum of the string using the integer as the starting value. """ - assert self.zlib.adler32('', 42) == 42 - assert self.zlib.adler32('\0', 42) == 2752554 - assert self.zlib.adler32('hello, world.', 42) == 606078176 - assert self.zlib.adler32('x' * 23, 42) == -2061104398 - hello = 'hello, ' + assert self.zlib.adler32(b'', 42) == 42 + assert self.zlib.adler32(b'\0', 42) == 2752554 + assert self.zlib.adler32(b'hello, world.', 42) == 606078176 + assert self.zlib.adler32(b'x' * 23, 42) == -2061104398 + hello = b'hello, ' hellosum = self.zlib.adler32(hello) - world = 'world.' + world = b'world.' helloworldsum = self.zlib.adler32(world, hellosum) assert helloworldsum == self.zlib.adler32(hello + world) - assert self.zlib.adler32('foo', -1) == 45547858 - assert self.zlib.adler32('foo', 99999999999999999999999) == -114818734 + assert self.zlib.adler32(b'foo', -1) == 45547858 + assert self.zlib.adler32(b'foo', 99999999999999999999999) == -114818734 def test_invalidLevel(self): @@ -171,7 +171,7 @@ Try to feed garbage to zlib.decompress(). """ raises(self.zlib.error, self.zlib.decompress, self.compressed[:-2]) - raises(self.zlib.error, self.zlib.decompress, 'foobar') + raises(self.zlib.error, self.zlib.decompress, b'foobar') def test_unused_data(self): @@ -180,21 +180,21 @@ It should show up in the unused_data attribute. """ d = self.zlib.decompressobj() - s = d.decompress(self.compressed + 'extrastuff') + s = d.decompress(self.compressed + b'extrastuff') assert s == self.expanded - assert d.unused_data == 'extrastuff' + assert d.unused_data == b'extrastuff' # try again with several decompression steps d = self.zlib.decompressobj() s1 = d.decompress(self.compressed[:10]) - assert d.unused_data == '' + assert d.unused_data == b'' s2 = d.decompress(self.compressed[10:-3]) - assert d.unused_data == '' - s3 = d.decompress(self.compressed[-3:] + 'spam' * 100) - assert d.unused_data == 'spam' * 100 + assert d.unused_data == b'' + s3 = d.decompress(self.compressed[-3:] + b'spam' * 100) + assert d.unused_data == b'spam' * 100 assert s1 + s2 + s3 == self.expanded - s4 = d.decompress('egg' * 50) - assert d.unused_data == 'egg' * 50 - assert s4 == '' + s4 = d.decompress(b'egg' * 50) + assert d.unused_data == b'egg' * 50 + assert s4 == b'' def test_max_length(self): @@ -215,8 +215,8 @@ """ We should be able to pass buffer objects instead of strings. """ - assert self.zlib.crc32(buffer('hello, world.')) == -936931198 - assert self.zlib.adler32(buffer('hello, world.')) == 571147447 + assert self.zlib.crc32(buffer(b'hello, world.')) == -936931198 + assert self.zlib.adler32(buffer(b'hello, world.')) == 571147447 compressor = self.zlib.compressobj() bytes = compressor.compress(buffer(self.expanded)) From noreply at buildbot.pypy.org Mon Nov 7 21:22:41 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: (chronitis) Implement bytes.fromhex(). Message-ID: <20111107202241.1C845820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48880:02d89bbd9a31 Date: 2011-11-07 21:17 +0100 http://bitbucket.org/pypy/pypy/changeset/02d89bbd9a31/ Log: (chronitis) Implement bytes.fromhex(). Thanks! diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -93,18 +93,13 @@ return val - 87 return -1 -def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." - hexstring = space.unicode_w(w_hexstring) +def _hexstring_to_array(space, s): data = [] - length = len(hexstring) + length = len(s) i = -2 while True: i += 2 - while i < length and hexstring[i] == ' ': + while i < length and s[i] == ' ': i += 1 if i >= length: break @@ -112,16 +107,28 @@ raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % i)) - top = _hex_digit_to_int(hexstring[i]) + top = _hex_digit_to_int(s[i]) if top == -1: raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % i)) - bot = _hex_digit_to_int(hexstring[i+1]) + bot = _hex_digit_to_int(s[i+1]) if bot == -1: raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % (i+1,))) data.append(chr(top*16 + bot)) + return data +def descr_fromhex(space, w_type, w_hexstring): + "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " + "from a string of hexadecimal numbers.\nSpaces between two numbers are " + "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " + "bytearray(b'\\xb9\\x01\\xef')." + if not space.is_w(space.type(w_hexstring), space.w_unicode): + raise OperationError(space.w_TypeError, space.wrap( + "must be str, not %s" % space.type(w_hexstring).name)) + hexstring = space.unicode_w(w_hexstring) + + data = _hexstring_to_array(space, hexstring) # in CPython bytearray.fromhex is a staticmethod, so # we ignore w_type and always return a bytearray return new_bytearray(space, space.w_bytearray, data) diff --git a/pypy/objspace/std/stringtype.py b/pypy/objspace/std/stringtype.py --- a/pypy/objspace/std/stringtype.py +++ b/pypy/objspace/std/stringtype.py @@ -342,6 +342,29 @@ W_StringObject.__init__(w_obj, value) return w_obj +def descr_fromhex(space, w_type, w_hexstring): + "bytes.fromhex(string) -> bytes\n" + "\n" + "Create a bytes object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytes.fromhex('B9 01EF') -> bytes(b'\\xb9\\x01\\xef')." + from pypy.objspace.std.bytearraytype import _hexstring_to_array + if not space.is_w(space.type(w_hexstring), space.w_unicode): + raise OperationError(space.w_TypeError, space.wrap( + "must be str, not %s" % space.type(w_hexstring).name)) + hexstring = space.unicode_w(w_hexstring) + chars = ''.join(_hexstring_to_array(space, hexstring)) + if space.config.objspace.std.withrope: + from pypy.objspace.std.ropeobject import rope, W_RopeObject + w_obj = space.allocate_instance(W_RopeObject, w_type) + W_RopeObject.__init__(w_obj, rope.LiteralStringNode(chars)) + return w_obj + else: + from pypy.objspace.std.stringobject import W_StringObject + w_obj = space.allocate_instance(W_StringObject, w_type) + W_StringObject.__init__(w_obj, chars) + return w_obj + # ____________________________________________________________ str_typedef = StdTypeDef("bytes", @@ -349,7 +372,8 @@ __doc__ = '''str(object) -> string Return a nice string representation of the object. -If the argument is a string, the return value is the same object.''' +If the argument is a string, the return value is the same object.''', + fromhex = gateway.interp2app(descr_fromhex, as_classmethod=True) ) str_typedef.registermethods(globals()) diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -99,6 +99,14 @@ import operator raises(TypeError, operator.mod, b"%s", (1,)) + def test_fromhex(self): + assert bytes.fromhex("abcd") == b'\xab\xcd' + assert b''.fromhex("abcd") == b'\xab\xcd' + assert bytes.fromhex("ab cd ef") == b'\xab\xcd\xef' + raises(TypeError, bytes.fromhex, b"abcd") + raises(TypeError, bytes.fromhex, True) + raises(ValueError, bytes.fromhex, "hello world") + def test_split(self): assert b"".split() == [] assert b"".split(b'x') == [b''] From noreply at buildbot.pypy.org Mon Nov 7 21:22:42 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 21:22:42 +0100 (CET) Subject: [pypy-commit] pypy py3k: (chronitis) update std.__doc__ and bytes.__doc__. Message-ID: <20111107202242.4C298820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48881:29d1700d2efd Date: 2011-11-07 21:20 +0100 http://bitbucket.org/pypy/pypy/changeset/29d1700d2efd/ Log: (chronitis) update std.__doc__ and bytes.__doc__. diff --git a/pypy/objspace/std/stringtype.py b/pypy/objspace/std/stringtype.py --- a/pypy/objspace/std/stringtype.py +++ b/pypy/objspace/std/stringtype.py @@ -369,10 +369,15 @@ str_typedef = StdTypeDef("bytes", __new__ = gateway.interp2app(descr__new__), - __doc__ = '''str(object) -> string - -Return a nice string representation of the object. -If the argument is a string, the return value is the same object.''', + __doc__ = 'bytes(iterable_of_ints) -> bytes\n' + 'bytes(string, encoding[, errors]) -> bytes\n' + 'bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer\n' + 'bytes(memory_view) -> bytes\n\n' + 'Construct an immutable array of bytes from:\n' + ' - an iterable yielding integers in range(256)\n' + ' - a text string encoded using the specified encoding\n' + ' - a bytes or a buffer object\n' + ' - any object implementing the buffer API.', fromhex = gateway.interp2app(descr_fromhex, as_classmethod=True) ) diff --git a/pypy/objspace/std/unicodetype.py b/pypy/objspace/std/unicodetype.py --- a/pypy/objspace/std/unicodetype.py +++ b/pypy/objspace/std/unicodetype.py @@ -340,7 +340,7 @@ unicode_typedef = StdTypeDef("str", __new__ = gateway.interp2app(descr_new_), - __doc__ = '''unicode(string [, encoding[, errors]]) -> object + __doc__ = '''str(string [, encoding[, errors]]) -> object Create a new Unicode object from the given encoded string. encoding defaults to the current default string encoding. From noreply at buildbot.pypy.org Mon Nov 7 21:24:05 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 7 Nov 2011 21:24:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: a bunch of fixes for complex, includes removing floordiv, divmod, and mod Message-ID: <20111107202405.344ED820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48882:cf45cbaaa331 Date: 2011-11-07 15:23 -0500 http://bitbucket.org/pypy/pypy/changeset/cf45cbaaa331/ Log: a bunch of fixes for complex, includes removing floordiv, divmod, and mod diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -63,17 +63,6 @@ ir = (i1 * ratio - r1) / denom return W_ComplexObject(rr,ir) - def divmod(self, space, other): - space.warn( - "complex divmod(), // and % are deprecated", - space.w_DeprecationWarning - ) - w_div = self.div(other) - div = math.floor(w_div.realval) - w_mod = self.sub( - W_ComplexObject(other.realval * div, other.imagval * div)) - return (W_ComplexObject(div, 0), w_mod) - def pow(self, other): r1, i1 = self.realval, self.imagval r2, i2 = other.realval, other.imagval @@ -160,26 +149,6 @@ truediv__Complex_Complex = div__Complex_Complex -def mod__Complex_Complex(space, w_complex1, w_complex2): - try: - return w_complex1.divmod(space, w_complex2)[1] - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - -def divmod__Complex_Complex(space, w_complex1, w_complex2): - try: - div, mod = w_complex1.divmod(space, w_complex2) - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - return space.newtuple([div, mod]) - -def floordiv__Complex_Complex(space, w_complex1, w_complex2): - # don't care about the slight slowdown you get from using divmod - try: - return w_complex1.divmod(space, w_complex2)[0] - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -1,10 +1,13 @@ +from __future__ import print_function + import py -from pypy.objspace.std.complexobject import W_ComplexObject, \ - pow__Complex_Complex_ANY -from pypy.objspace.std import complextype as cobjtype + +from pypy.objspace.std import complextype as cobjtype, StdObjSpace +from pypy.objspace.std.complexobject import (W_ComplexObject, + pow__Complex_Complex_ANY) from pypy.objspace.std.multimethod import FailedToImplement from pypy.objspace.std.stringobject import W_StringObject -from pypy.objspace.std import StdObjSpace + EPS = 1e-9 @@ -134,7 +137,7 @@ from random import random # XXX this test passed but took waaaaay to long # look at dist/lib-python/modified-2.5.2/test/test_complex.py - #simple_real = [float(i) for i in xrange(-5, 6)] + #simple_real = [float(i) for i in range(-5, 6)] simple_real = [-2.0, 0.0, 1.0] simple_complex = [complex(x, y) for x in simple_real for y in simple_real] for x in simple_complex: @@ -147,7 +150,7 @@ self.check_div(complex(1e-200, 1e-200), 1+0j) # Just for fun. - for i in xrange(100): + for i in range(100): self.check_div(complex(random(), random()), complex(random(), random())) @@ -160,8 +163,7 @@ raises(ZeroDivisionError, complex.__truediv__, 1+1j, 0+0j) def test_floordiv(self): - assert self.almost_equal(complex.__floordiv__(3+0j, 1.5+0j), 2) - raises(ZeroDivisionError, complex.__floordiv__, 3+0j, 0+0j) + raises(TypeError, "3+0j // 0+0j") def test_coerce(self): raises(OverflowError, complex.__coerce__, 1+1j, 1L<<10000) @@ -183,13 +185,11 @@ assert large != (5+0j) def test_mod(self): - raises(ZeroDivisionError, (1+1j).__mod__, 0+0j) - a = 3.33+4.43j - raises(ZeroDivisionError, "a % 0") + raises(TypeError, "a % a") def test_divmod(self): - raises(ZeroDivisionError, divmod, 1+1j, 0+0j) + raises(TypeError, divmod, 1+1j, 0+0j) def test_pow(self): assert self.almost_equal(pow(1+1j, 0+0j), 1.0) @@ -221,7 +221,7 @@ def test_boolcontext(self): from random import random - for i in xrange(100): + for i in range(100): assert complex(random() + 1e-6, random() + 1e-6) assert not complex(0.0, 0.0) @@ -354,13 +354,13 @@ raises(TypeError, complex, float2(None)) def test_hash(self): - for x in xrange(-30, 30): + for x in range(-30, 30): assert hash(x) == hash(complex(x, 0)) x /= 3.0 # now check against floating point assert hash(x) == hash(complex(x, 0.)) def test_abs(self): - nums = [complex(x/3., y/7.) for x in xrange(-9,9) for y in xrange(-9,9)] + nums = [complex(x/3., y/7.) for x in range(-9,9) for y in range(-9,9)] for num in nums: assert self.almost_equal((num.real**2 + num.imag**2) ** 0.5, abs(num)) @@ -409,7 +409,7 @@ try: pth = tempfile.mktemp() fo = open(pth,"wb") - print >>fo, a, b + print(a, b, file=fo) fo.close() fo = open(pth, "rb") res = fo.read() From noreply at buildbot.pypy.org Mon Nov 7 21:24:06 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 7 Nov 2011 21:24:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: merged upstream Message-ID: <20111107202406.7854D820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48883:705745d3509c Date: 2011-11-07 15:23 -0500 http://bitbucket.org/pypy/pypy/changeset/705745d3509c/ Log: merged upstream diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -61,7 +61,7 @@ usemodules = '', skip=None): self.basename = basename - self._usemodules = usemodules.split() + ['signal'] + self._usemodules = usemodules.split() + ['signal', '_warnings'] self._compiler = compiler self.core = core self.skip = skip @@ -237,7 +237,7 @@ RegrTest('test_global.py', core=True), RegrTest('test_grammar.py', core=True), RegrTest('test_grp.py', skip=skip_win32), - RegrTest('test_gzip.py'), + RegrTest('test_gzip.py', usemodules='zlib'), RegrTest('test_hash.py', core=True), RegrTest('test_hashlib.py', core=True), RegrTest('test_heapq.py', core=True), diff --git a/pypy/module/__builtin__/compiling.py b/pypy/module/__builtin__/compiling.py --- a/pypy/module/__builtin__/compiling.py +++ b/pypy/module/__builtin__/compiling.py @@ -30,6 +30,8 @@ if space.is_true(space.isinstance(w_source, w_ast_type)): ast_node = space.interp_w(ast.mod, w_source) ast_node.sync_app_attrs(space) + elif space.isinstance_w(w_source, space.w_bytes): + source_str = space.bytes_w(w_source) else: source_str = space.str_w(w_source) # This flag tells the parser to reject any coding cookies it sees. diff --git a/pypy/module/__builtin__/interp_memoryview.py b/pypy/module/__builtin__/interp_memoryview.py --- a/pypy/module/__builtin__/interp_memoryview.py +++ b/pypy/module/__builtin__/interp_memoryview.py @@ -72,7 +72,7 @@ return space.wrap(self.buf) def descr_tobytes(self, space): - return space.wrap(self.as_str()) + return space.wrapbytes(self.as_str()) def descr_tolist(self, space): buf = self.buf diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -215,7 +215,7 @@ table of interned strings whose purpose is to speed up dictionary lookups. Return the string itself or the previously interned string object with the same value.""" - if space.is_w(space.type(w_str), space.w_str): + if space.is_w(space.type(w_str), space.w_unicode): return space.new_interned_w_str(w_str) raise OperationError(space.w_TypeError, space.wrap("intern() argument must be string.")) diff --git a/pypy/module/__builtin__/test/test_abstractinst.py b/pypy/module/__builtin__/test/test_abstractinst.py --- a/pypy/module/__builtin__/test/test_abstractinst.py +++ b/pypy/module/__builtin__/test/test_abstractinst.py @@ -195,10 +195,12 @@ """Implement issubclass(sub, cls).""" candidates = cls.__dict__.get("__subclass__", set()) | set([cls]) return any(c in candidates for c in sub.mro()) - class Integer: - __metaclass__ = ABC + # Equivalent to:: + # class Integer(metaclass=ABC): + # __subclass__ = set([int]) + # But with a syntax compatible with 2.x + Integer = ABC('Integer', (), dict(__subclass__=set([int]))) - __subclass__ = set([int]) assert issubclass(int, Integer) assert issubclass(int, (Integer,)) diff --git a/pypy/module/__builtin__/test/test_buffer.py b/pypy/module/__builtin__/test/test_buffer.py --- a/pypy/module/__builtin__/test/test_buffer.py +++ b/pypy/module/__builtin__/test/test_buffer.py @@ -12,21 +12,21 @@ if sys.maxunicode == 65535: # UCS2 build assert len(b) == 4 if sys.byteorder == "big": - assert b[0:4] == "\x00a\x00b" + assert b[0:4] == b"\x00a\x00b" else: - assert b[0:4] == "a\x00b\x00" + assert b[0:4] == b"a\x00b\x00" else: # UCS4 build assert len(b) == 8 if sys.byteorder == "big": - assert b[0:8] == "\x00\x00\x00a\x00\x00\x00b" + assert b[0:8] == b"\x00\x00\x00a\x00\x00\x00b" else: - assert b[0:8] == "a\x00\x00\x00b\x00\x00\x00" + assert b[0:8] == b"a\x00\x00\x00b\x00\x00\x00" def test_array_buffer(self): import array b = buffer(array.array("B", [1, 2, 3])) assert len(b) == 3 - assert b[0:3] == "\x01\x02\x03" + assert b[0:3] == b"\x01\x02\x03" def test_nonzero(self): assert buffer('\x00') @@ -36,68 +36,68 @@ assert not buffer(array.array("B", [])) def test_str(self): - assert str(buffer('hello')) == 'hello' + assert str(buffer(b'hello')) == 'hello' def test_repr(self): # from 2.5.2 lib tests - assert repr(buffer('hello')).startswith(' buffer('ab')) - assert buffer('ab') >= buffer('ab') - assert buffer('ab') != buffer('abc') - assert buffer('ab') < buffer('abc') - assert buffer('ab') <= buffer('ab') - assert buffer('ab') > buffer('aa') - assert buffer('ab') >= buffer('ab') + assert buffer(b'ab') != b'ab' + assert not (b'ab' == buffer(b'ab')) + assert buffer(b'ab') == buffer(b'ab') + assert not (buffer(b'ab') != buffer(b'ab')) + assert not (buffer(b'ab') < buffer(b'ab')) + assert buffer(b'ab') <= buffer(b'ab') + assert not (buffer(b'ab') > buffer(b'ab')) + assert buffer(b'ab') >= buffer(b'ab') + assert buffer(b'ab') != buffer(b'abc') + assert buffer(b'ab') < buffer(b'abc') + assert buffer(b'ab') <= buffer(b'ab') + assert buffer(b'ab') > buffer(b'aa') + assert buffer(b'ab') >= buffer(b'ab') def test_hash(self): - assert hash(buffer('hello')) == hash('hello') + assert hash(buffer(b'hello')) == hash(b'hello') def test_mul(self): - assert buffer('ab') * 5 == 'ababababab' - assert buffer('ab') * (-2) == '' - assert 5 * buffer('ab') == 'ababababab' - assert (-2) * buffer('ab') == '' + assert buffer(b'ab') * 5 == b'ababababab' + assert buffer(b'ab') * (-2) == b'' + assert 5 * buffer(b'ab') == b'ababababab' + assert (-2) * buffer(b'ab') == b'' def test_offset_size(self): - b = buffer('hello world', 6) + b = buffer(b'hello world', 6) assert len(b) == 5 - assert b[0] == 'w' - assert b[:] == 'world' + assert b[0] == b'w' + assert b[:] == b'world' raises(IndexError, 'b[5]') b = buffer(b, 2) assert len(b) == 3 - assert b[0] == 'r' - assert b[:] == 'rld' + assert b[0] == b'r' + assert b[:] == b'rld' raises(IndexError, 'b[3]') - b = buffer('hello world', 1, 8) + b = buffer(b'hello world', 1, 8) assert len(b) == 8 - assert b[0] == 'e' - assert b[:] == 'ello wor' + assert b[0] == b'e' + assert b[:] == b'ello wor' raises(IndexError, 'b[8]') b = buffer(b, 2, 3) assert len(b) == 3 - assert b[2] == ' ' - assert b[:] == 'lo ' + assert b[2] == b' ' + assert b[:] == b'lo ' raises(IndexError, 'b[3]') b = buffer('hello world', 55) assert len(b) == 0 - assert b[:] == '' - b = buffer('hello world', 6, 999) + assert b[:] == b'' + b = buffer(b'hello world', 6, 999) assert len(b) == 5 - assert b[:] == 'world' + assert b[:] == b'world' raises(ValueError, buffer, "abc", -1) raises(ValueError, buffer, "abc", 0, -2) @@ -105,17 +105,17 @@ def test_rw_offset_size(self): import array - a = array.array("c", 'hello world') + a = array.array("b", b'hello world') b = buffer(a, 6) assert len(b) == 5 - assert b[0] == 'w' - assert b[:] == 'world' + assert b[0] == b'w' + assert b[:] == b'world' raises(IndexError, 'b[5]') - b[0] = 'W' - assert str(b) == 'World' - assert a.tostring() == 'hello World' - b[:] = '12345' - assert a.tostring() == 'hello 12345' + b[0] = b'W' + assert str(b) == b'World' + assert a.tostring() == b'hello World' + b[:] = b'12345' + assert a.tostring() == b'hello 12345' raises(IndexError, 'b[5] = "."') b = buffer(b, 2) @@ -161,7 +161,7 @@ def test_slice(self): # Test extended slicing by comparing with list slicing. - s = "".join(chr(c) for c in list(range(255, -1, -1))) + s = bytes(c for c in list(range(255, -1, -1))) b = buffer(s) indices = (0, None, 1, 3, 19, 300, -1, -2, -31, -300) for start in indices: @@ -172,8 +172,8 @@ class AppTestMemoryView: def test_basic(self): - v = memoryview("abc") - assert v.tobytes() == "abc" + v = memoryview(b"abc") + assert v.tobytes() == b"abc" assert len(v) == 3 assert list(v) == ['a', 'b', 'c'] assert v.tolist() == [97, 98, 99] @@ -186,17 +186,17 @@ assert len(w) == 2 def test_rw(self): - data = bytearray('abcefg') + data = bytearray(b'abcefg') v = memoryview(data) assert v.readonly is False - v[0] = 'z' + v[0] = b'z' assert data == bytearray(eval("b'zbcefg'")) - v[1:4] = '123' + v[1:4] = b'123' assert data == bytearray(eval("b'z123fg'")) raises((ValueError, TypeError), "v[2] = 'spam'") def test_memoryview_attrs(self): - v = memoryview("a"*100) + v = memoryview(b"a"*100) assert v.format == "B" assert v.itemsize == 1 assert v.shape == (100,) @@ -204,13 +204,13 @@ assert v.strides == (1,) def test_suboffsets(self): - v = memoryview("a"*100) + v = memoryview(b"a"*100) assert v.suboffsets == None - v = memoryview(buffer("a"*100, 2)) + v = memoryview(buffer(b"a"*100, 2)) assert v.shape == (98,) assert v.suboffsets == None def test_compare(self): - assert memoryview("abc") == "abc" - assert memoryview("abc") == bytearray("abc") - assert memoryview("abc") != 3 + assert memoryview(b"abc") == b"abc" + assert memoryview(b"abc") == bytearray(b"abc") + assert memoryview(b"abc") != 3 diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -27,8 +27,8 @@ cls.w_safe_runtimerror = cls.space.wrap(sys.version_info < (2, 6)) def test_bytes_alias(self): - assert bytes is str - assert isinstance(eval("b'hi'"), str) + assert bytes is not str + assert isinstance(eval("b'hi'"), bytes) def test_import(self): m = __import__('pprint') @@ -73,7 +73,7 @@ def test_globals(self): d = {"foo":"bar"} - exec "def f(): return globals()" in d + exec("def f(): return globals()", d) d2 = d["f"]() assert d2 is d @@ -157,7 +157,7 @@ assert format(10, "o") == "12" assert format(10, "#o") == "0o12" assert format("hi") == "hi" - assert isinstance(format(4, u""), unicode) + assert isinstance(format(4, u""), str) def test_vars(self): def f(): @@ -208,10 +208,10 @@ def test_iter_sequence(self): raises(TypeError,iter,3) x = iter(['a','b','c']) - assert x.next() =='a' - assert x.next() =='b' - assert x.next() =='c' - raises(StopIteration,x.next) + assert next(x) =='a' + assert next(x) =='b' + assert next(x) =='c' + raises(StopIteration, next, x) def test_iter___iter__(self): # This test assumes that dict.keys() method returns keys in @@ -235,16 +235,16 @@ #self.assertRaises(TypeError,iter,[],5) #self.assertRaises(TypeError,iter,{},5) x = iter(count(),3) - assert x.next() ==1 - assert x.next() ==2 - raises(StopIteration,x.next) + assert next(x) ==1 + assert next(x) ==2 + raises(StopIteration, next, x) def test_enumerate(self): seq = range(2,4) enum = enumerate(seq) - assert enum.next() == (0, 2) - assert enum.next() == (1, 3) - raises(StopIteration, enum.next) + assert next(enum) == (0, 2) + assert next(enum) == (1, 3) + raises(StopIteration, next, enum) raises(TypeError, enumerate, 1) raises(TypeError, enumerate, None) enum = enumerate(range(5), 2) @@ -262,7 +262,7 @@ class Counter: def __init__(self): self.count = 0 - def next(self): + def __next__(self): self.count += 1 return self.count x = Counter() @@ -297,17 +297,17 @@ def test_range_up(self): x = range(2) iter_x = iter(x) - assert iter_x.next() == 0 - assert iter_x.next() == 1 - raises(StopIteration, iter_x.next) + assert next(iter_x) == 0 + assert next(iter_x) == 1 + raises(StopIteration, next, iter_x) def test_range_down(self): x = range(4,2,-1) iter_x = iter(x) - assert iter_x.next() == 4 - assert iter_x.next() == 3 - raises(StopIteration, iter_x.next) + assert next(iter_x) == 4 + assert next(iter_x) == 3 + raises(StopIteration, next, iter_x) def test_range_has_type_identity(self): assert type(range(1)) == type(range(1)) @@ -315,13 +315,12 @@ def test_range_len(self): x = range(33) assert len(x) == 33 - x = range(33.2) - assert len(x) == 33 + raises(TypeError, range, 33.2) x = range(33,0,-1) assert len(x) == 33 x = range(33,0) assert len(x) == 0 - x = range(33,0.2) + raises(TypeError, range, 33, 0.2) assert len(x) == 0 x = range(0,33) assert len(x) == 33 @@ -495,7 +494,7 @@ assert eval(co) == 3 compile("from __future__ import with_statement", "", "exec") raises(SyntaxError, compile, '-', '?', 'eval') - raises(ValueError, compile, '"\\xt"', '?', 'eval') + raises(SyntaxError, compile, '"\\xt"', '?', 'eval') raises(ValueError, compile, '1+2', '?', 'maybenot') raises(ValueError, compile, "\n", "", "exec", 0xff) raises(TypeError, compile, '1+2', 12, 34) @@ -510,10 +509,14 @@ code = u"# -*- coding: utf-8 -*-\npass\n" raises(SyntaxError, compile, code, "tmp", "exec") + def test_bytes_compile(self): + code = b"# -*- coding: utf-8 -*-\npass\n" + compile(code, "tmp", "exec") + def test_recompile_ast(self): import _ast # raise exception when node type doesn't match with compile mode - co1 = compile('print 1', '', 'exec', _ast.PyCF_ONLY_AST) + co1 = compile('print(1)', '', 'exec', _ast.PyCF_ONLY_AST) raises(TypeError, compile, co1, '', 'eval') co2 = compile('1+1', '', 'eval', _ast.PyCF_ONLY_AST) compile(co2, '', 'eval') @@ -589,39 +592,39 @@ assert firstlineno == 2 def test_print_function(self): - import __builtin__ + import builtins import sys - import StringIO - pr = getattr(__builtin__, "print") + import io + pr = getattr(builtins, "print") save = sys.stdout - out = sys.stdout = StringIO.StringIO() + out = sys.stdout = io.StringIO() try: pr("Hello,", "person!") finally: sys.stdout = save assert out.getvalue() == "Hello, person!\n" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out) assert out.getvalue() == "Hello, person!\n" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out, end="") assert out.getvalue() == "Hello, person!" - out = StringIO.StringIO() + out = io.StringIO() pr("Hello,", "person!", file=out, sep="X") assert out.getvalue() == "Hello,Xperson!\n" - out = StringIO.StringIO() + out = io.StringIO() pr(u"Hello,", u"person!", file=out) result = out.getvalue() - assert isinstance(result, unicode) + assert isinstance(result, str) assert result == u"Hello, person!\n" pr("Hello", file=None) # This works. - out = StringIO.StringIO() + out = io.StringIO() pr(None, file=out) assert out.getvalue() == "None\n" def test_print_exceptions(self): - import __builtin__ - pr = getattr(__builtin__, "print") + import builtins + pr = getattr(builtins, "print") raises(TypeError, pr, x=3) raises(TypeError, pr, end=3) raises(TypeError, pr, sep=42) diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -342,7 +342,7 @@ except ZeroDivisionError: pass else: - raise Exception, "expected ZeroDivisionError from bad property" + raise Exception("expected ZeroDivisionError from bad property") def test_property_subclass(self): class P(property): diff --git a/pypy/module/__builtin__/test/test_filter.py b/pypy/module/__builtin__/test/test_filter.py --- a/pypy/module/__builtin__/test/test_filter.py +++ b/pypy/module/__builtin__/test/test_filter.py @@ -16,22 +16,10 @@ raises(TypeError, filter, lambda x: x>3, [1], [2]) def test_filter_no_function_list(self): - assert filter(None, [1, 2, 3]) == [1, 2, 3] - - def test_filter_no_function_tuple(self): - assert filter(None, (1, 2, 3)) == (1, 2, 3) - - def test_filter_no_function_string(self): - assert filter(None, 'mystring') == 'mystring' + assert list(filter(None, [1, 2, 3])) == [1, 2, 3] def test_filter_no_function_with_bools(self): - assert filter(None, (True, False, True)) == (True, True) + assert tuple(filter(None, (True, False, True))) == (True, True) def test_filter_list(self): - assert filter(lambda x: x>3, [1, 2, 3, 4, 5]) == [4, 5] - - def test_filter_tuple(self): - assert filter(lambda x: x>3, (1, 2, 3, 4, 5)) == (4, 5) - - def test_filter_string(self): - assert filter(lambda x: x>'a', 'xyzabcd') == 'xyzbcd' + assert list(filter(lambda x: x>3, [1, 2, 3, 4, 5])) == [4, 5] diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -70,7 +70,7 @@ class B(object): def __init__(self, n): self.n = n - def next(self): + def __next__(self): self.n -= 1 if self.n == 0: raise StopIteration return self.n @@ -126,12 +126,12 @@ x = range(2, 9, 3) it = iter(x) assert iter(it) is it - assert it.next() == 2 - assert it.next() == 5 - assert it.next() == 8 - raises(StopIteration, it.next) + assert it.__next__() == 2 + assert it.__next__() == 5 + assert it.__next__() == 8 + raises(StopIteration, it.__next__) # test again, to make sure that range() is not its own iterator - assert iter(x).next() == 2 + assert iter(x).__next__() == 2 def test_range_object_with___int__(self): class A(object): @@ -143,7 +143,7 @@ assert list(range(0, 10, A())) == [0, 5] def test_range_float(self): - assert list(range(0.1, 2.0, 1.1)) == [0, 1] + raises(TypeError, range(0.1, 2.0, 1.1)) def test_range_long(self): import sys @@ -162,12 +162,12 @@ def test_reversed(self): r = reversed("hello") assert iter(r) is r - assert r.next() == "o" - assert r.next() == "l" - assert r.next() == "l" - assert r.next() == "e" - assert r.next() == "h" - raises(StopIteration, r.next) + assert r.__next__() == "o" + assert r.__next__() == "l" + assert r.__next__() == "l" + assert r.__next__() == "e" + assert r.__next__() == "h" + raises(StopIteration, r.__next__) assert list(reversed(list(reversed("hello")))) == ['h','e','l','l','o'] raises(TypeError, reversed, reversed("hello")) diff --git a/pypy/module/__builtin__/test/test_rawinput.py b/pypy/module/__builtin__/test/test_rawinput.py --- a/pypy/module/__builtin__/test/test_rawinput.py +++ b/pypy/module/__builtin__/test/test_rawinput.py @@ -1,30 +1,30 @@ +from __future__ import print_function import autopath class AppTestRawInput(): def test_input_and_raw_input(self): - import sys, StringIO - for prompt, expected in [("def:", "abc/ def:/ghi\n"), - ("", "abc/ /ghi\n"), - (42, "abc/ 42/ghi\n"), - (None, "abc/ None/ghi\n"), - (Ellipsis, "abc/ /ghi\n")]: + import sys, io + for prompt, expected in [("def:", "abc/def:/ghi\n"), + ("", "abc//ghi\n"), + (42, "abc/42/ghi\n"), + (None, "abc/None/ghi\n"), + (Ellipsis, "abc//ghi\n")]: for inputfn, inputtext, gottext in [ - (raw_input, "foo\nbar\n", "foo"), - (input, "40+2\n", 42)]: + (input, "foo\nbar\n", "foo")]: save = sys.stdin, sys.stdout try: - sys.stdin = StringIO.StringIO(inputtext) - out = sys.stdout = StringIO.StringIO() - print "abc", # softspace = 1 + sys.stdin = io.StringIO(inputtext) + out = sys.stdout = io.StringIO() + print("abc", end='') out.write('/') if prompt is Ellipsis: got = inputfn() else: got = inputfn(prompt) out.write('/') - print "ghi" + print("ghi") finally: sys.stdin, sys.stdout = save assert out.getvalue() == expected @@ -32,9 +32,9 @@ def test_softspace(self): import sys - import StringIO - fin = StringIO.StringIO() - fout = StringIO.StringIO() + import io + fin = io.StringIO() + fout = io.StringIO() fin.write("Coconuts\n") fin.seek(0) @@ -45,20 +45,20 @@ sys.stdin = fin sys.stdout = fout - print "test", - raw_input("test") + print("test", end='') + input("test") sys.stdin = sys_stdin_orig sys.stdout = sys_stdout_orig fout.seek(0) - assert fout.read() == "test test" + assert fout.read() == "testtest" def test_softspace_carryover(self): import sys - import StringIO - fin = StringIO.StringIO() - fout = StringIO.StringIO() + import io + fin = io.StringIO() + fout = io.StringIO() fin.write("Coconuts\n") fin.seek(0) @@ -69,12 +69,12 @@ sys.stdin = fin sys.stdout = fout - print "test", - raw_input("test") - print "test", + print("test", end='') + input("test") + print("test", end='') sys.stdin = sys_stdin_orig sys.stdout = sys_stdout_orig fout.seek(0) - assert fout.read() == "test testtest" + assert fout.read() == "testtesttest" diff --git a/pypy/module/_sre/interp_sre.py b/pypy/module/_sre/interp_sre.py --- a/pypy/module/_sre/interp_sre.py +++ b/pypy/module/_sre/interp_sre.py @@ -78,8 +78,7 @@ return space.newtuple(grps) def import_re(space): - w_builtin = space.getbuiltinmodule('__builtin__') - w_import = space.getattr(w_builtin, space.wrap("__import__")) + w_import = space.getattr(space.builtin, space.wrap("__import__")) return space.call_function(w_import, space.wrap("re")) def matchcontext(space, ctx): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -56,7 +56,7 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.bytes_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.bytes_w(w_fname) return func(fname, *args) return dispatch @@ -512,19 +512,17 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrapbytes(key), space.wrapbytes(value)) - at unwrap_spec(name=str, value=str) -def putenv(space, name, value): +def putenv(space, w_name, w_value): """Change or add an environment variable.""" try: - os.environ[name] = value + dispatch_filename_2(rposix.putenv)(space, w_name, w_value) except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) -def unsetenv(space, name): +def unsetenv(space, w_name): """Delete an environment variable.""" try: - del os.environ[name] + dispatch_filename(rposix.unsetenv)(space, w_name) except KeyError: pass except OSError, e: diff --git a/pypy/module/rctime/app_time.py b/pypy/module/rctime/app_time.py --- a/pypy/module/rctime/app_time.py +++ b/pypy/module/rctime/app_time.py @@ -24,7 +24,7 @@ (same as strftime()).""" import _strptime # from the CPython standard library - return _strptime._strptime(string, format)[0] + return _strptime._strptime_time(string, format) __doc__ = """This module provides various functions to manipulate time values. diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -199,7 +199,7 @@ # rely on it. if org_TZ is not None: os.environ['TZ'] = org_TZ - elif os.environ.has_key('TZ'): + elif 'TZ' in os.environ: del os.environ['TZ'] rctime.tzset() @@ -279,7 +279,7 @@ 'j', 'm', 'M', 'p', 'S', 'U', 'w', 'W', 'x', 'X', 'y', 'Y', 'Z', '%'): format = ' %' + directive - print format + print(format) rctime.strptime(rctime.strftime(format, tt), format) def test_pickle(self): diff --git a/pypy/module/zlib/interp_zlib.py b/pypy/module/zlib/interp_zlib.py --- a/pypy/module/zlib/interp_zlib.py +++ b/pypy/module/zlib/interp_zlib.py @@ -1,7 +1,7 @@ import sys from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, interp_attrproperty_bytes from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import keepalive_until_here @@ -84,7 +84,7 @@ rzlib.deflateEnd(stream) except rzlib.RZlibError, e: raise zlib_error(space, e.msg) - return space.wrap(result) + return space.wrapbytes(result) @unwrap_spec(string='bufferstr', wbits=int, bufsize=int) @@ -106,7 +106,7 @@ rzlib.inflateEnd(stream) except rzlib.RZlibError, e: raise zlib_error(space, e.msg) - return space.wrap(result) + return space.wrapbytes(result) class ZLibObject(Wrappable): @@ -179,7 +179,7 @@ self.unlock() except rzlib.RZlibError, e: raise zlib_error(self.space, e.msg) - return self.space.wrap(result) + return self.space.wrapbytes(result) @unwrap_spec(mode=int) @@ -209,7 +209,7 @@ self.unlock() except rzlib.RZlibError, e: raise zlib_error(self.space, e.msg) - return self.space.wrap(result) + return self.space.wrapbytes(result) @unwrap_spec(level=int, method=int, wbits=int, memLevel=int, strategy=int) @@ -302,11 +302,11 @@ assert unused_start >= 0 tail = data[unused_start:] if finished: - self.unconsumed_tail = '' + self.unconsumed_tail = b'' self.unused_data = tail else: self.unconsumed_tail = tail - return self.space.wrap(string) + return self.space.wrapbytes(string) @unwrap_spec(length=int) @@ -324,7 +324,7 @@ # however CPython's zlib module does not behave like that. # I could not figure out a case in which flush() in CPython # doesn't simply return an empty string without complaining. - return self.space.wrap("") + return self.space.wrapbytes("") @unwrap_spec(wbits=int) @@ -343,8 +343,8 @@ __new__ = interp2app(Decompress___new__), decompress = interp2app(Decompress.decompress), flush = interp2app(Decompress.flush), - unused_data = interp_attrproperty('unused_data', Decompress), - unconsumed_tail = interp_attrproperty('unconsumed_tail', Decompress), + unused_data = interp_attrproperty_bytes('unused_data', Decompress), + unconsumed_tail = interp_attrproperty_bytes('unconsumed_tail', Decompress), __doc__ = """decompressobj([wbits]) -- Return a decompressor object. Optional arg wbits is the window buffer size. diff --git a/pypy/module/zlib/test/test_zlib.py b/pypy/module/zlib/test/test_zlib.py --- a/pypy/module/zlib/test/test_zlib.py +++ b/pypy/module/zlib/test/test_zlib.py @@ -34,9 +34,9 @@ import zlib return zlib """) - expanded = 'some bytes which will be compressed' - cls.w_expanded = cls.space.wrap(expanded) - cls.w_compressed = cls.space.wrap(zlib.compress(expanded)) + expanded = b'some bytes which will be compressed' + cls.w_expanded = cls.space.wrapbytes(expanded) + cls.w_compressed = cls.space.wrapbytes(zlib.compress(expanded)) def test_error(self): @@ -52,9 +52,9 @@ return it as a signed 32 bit integer. On 64-bit machines too (it is a bug in CPython < 2.6 to return unsigned values in this case). """ - assert self.zlib.crc32('') == 0 - assert self.zlib.crc32('\0') == -771559539 - assert self.zlib.crc32('hello, world.') == -936931198 + assert self.zlib.crc32(b'') == 0 + assert self.zlib.crc32(b'\0') == -771559539 + assert self.zlib.crc32(b'hello, world.') == -936931198 def test_crc32_start_value(self): @@ -62,29 +62,29 @@ When called with a string and an integer, zlib.crc32 should compute the CRC32 of the string using the integer as the starting value. """ - assert self.zlib.crc32('', 42) == 42 - assert self.zlib.crc32('\0', 42) == 163128923 - assert self.zlib.crc32('hello, world.', 42) == 1090960721 - hello = 'hello, ' + assert self.zlib.crc32(b'', 42) == 42 + assert self.zlib.crc32(b'\0', 42) == 163128923 + assert self.zlib.crc32(b'hello, world.', 42) == 1090960721 + hello = b'hello, ' hellocrc = self.zlib.crc32(hello) - world = 'world.' + world = b'world.' helloworldcrc = self.zlib.crc32(world, hellocrc) assert helloworldcrc == self.zlib.crc32(hello + world) def test_crc32_negative_start(self): - v = self.zlib.crc32('', -1) + v = self.zlib.crc32(b'', -1) assert v == -1 def test_crc32_negative_long_start(self): - v = self.zlib.crc32('', -1L) + v = self.zlib.crc32(b'', -1L) assert v == -1 - assert self.zlib.crc32('foo', -99999999999999999999999) == 1611238463 + assert self.zlib.crc32(b'foo', -99999999999999999999999) == 1611238463 def test_crc32_long_start(self): import sys - v = self.zlib.crc32('', sys.maxint*2) + v = self.zlib.crc32(b'', sys.maxint*2) assert v == -2 - assert self.zlib.crc32('foo', 99999999999999999999999) == 1635107045 + assert self.zlib.crc32(b'foo', 99999999999999999999999) == 1635107045 def test_adler32(self): """ @@ -93,10 +93,10 @@ On 64-bit machines too (it is a bug in CPython < 2.6 to return unsigned values in this case). """ - assert self.zlib.adler32('') == 1 - assert self.zlib.adler32('\0') == 65537 - assert self.zlib.adler32('hello, world.') == 571147447 - assert self.zlib.adler32('x' * 23) == -2122904887 + assert self.zlib.adler32(b'') == 1 + assert self.zlib.adler32(b'\0') == 65537 + assert self.zlib.adler32(b'hello, world.') == 571147447 + assert self.zlib.adler32(b'x' * 23) == -2122904887 def test_adler32_start_value(self): @@ -105,18 +105,18 @@ the adler 32 checksum of the string using the integer as the starting value. """ - assert self.zlib.adler32('', 42) == 42 - assert self.zlib.adler32('\0', 42) == 2752554 - assert self.zlib.adler32('hello, world.', 42) == 606078176 - assert self.zlib.adler32('x' * 23, 42) == -2061104398 - hello = 'hello, ' + assert self.zlib.adler32(b'', 42) == 42 + assert self.zlib.adler32(b'\0', 42) == 2752554 + assert self.zlib.adler32(b'hello, world.', 42) == 606078176 + assert self.zlib.adler32(b'x' * 23, 42) == -2061104398 + hello = b'hello, ' hellosum = self.zlib.adler32(hello) - world = 'world.' + world = b'world.' helloworldsum = self.zlib.adler32(world, hellosum) assert helloworldsum == self.zlib.adler32(hello + world) - assert self.zlib.adler32('foo', -1) == 45547858 - assert self.zlib.adler32('foo', 99999999999999999999999) == -114818734 + assert self.zlib.adler32(b'foo', -1) == 45547858 + assert self.zlib.adler32(b'foo', 99999999999999999999999) == -114818734 def test_invalidLevel(self): @@ -171,7 +171,7 @@ Try to feed garbage to zlib.decompress(). """ raises(self.zlib.error, self.zlib.decompress, self.compressed[:-2]) - raises(self.zlib.error, self.zlib.decompress, 'foobar') + raises(self.zlib.error, self.zlib.decompress, b'foobar') def test_unused_data(self): @@ -180,21 +180,21 @@ It should show up in the unused_data attribute. """ d = self.zlib.decompressobj() - s = d.decompress(self.compressed + 'extrastuff') + s = d.decompress(self.compressed + b'extrastuff') assert s == self.expanded - assert d.unused_data == 'extrastuff' + assert d.unused_data == b'extrastuff' # try again with several decompression steps d = self.zlib.decompressobj() s1 = d.decompress(self.compressed[:10]) - assert d.unused_data == '' + assert d.unused_data == b'' s2 = d.decompress(self.compressed[10:-3]) - assert d.unused_data == '' - s3 = d.decompress(self.compressed[-3:] + 'spam' * 100) - assert d.unused_data == 'spam' * 100 + assert d.unused_data == b'' + s3 = d.decompress(self.compressed[-3:] + b'spam' * 100) + assert d.unused_data == b'spam' * 100 assert s1 + s2 + s3 == self.expanded - s4 = d.decompress('egg' * 50) - assert d.unused_data == 'egg' * 50 - assert s4 == '' + s4 = d.decompress(b'egg' * 50) + assert d.unused_data == b'egg' * 50 + assert s4 == b'' def test_max_length(self): @@ -215,8 +215,8 @@ """ We should be able to pass buffer objects instead of strings. """ - assert self.zlib.crc32(buffer('hello, world.')) == -936931198 - assert self.zlib.adler32(buffer('hello, world.')) == 571147447 + assert self.zlib.crc32(buffer(b'hello, world.')) == -936931198 + assert self.zlib.adler32(buffer(b'hello, world.')) == 571147447 compressor = self.zlib.compressobj() bytes = compressor.compress(buffer(self.expanded)) diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -93,18 +93,13 @@ return val - 87 return -1 -def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." - hexstring = space.unicode_w(w_hexstring) +def _hexstring_to_array(space, s): data = [] - length = len(hexstring) + length = len(s) i = -2 while True: i += 2 - while i < length and hexstring[i] == ' ': + while i < length and s[i] == ' ': i += 1 if i >= length: break @@ -112,16 +107,28 @@ raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % i)) - top = _hex_digit_to_int(hexstring[i]) + top = _hex_digit_to_int(s[i]) if top == -1: raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % i)) - bot = _hex_digit_to_int(hexstring[i+1]) + bot = _hex_digit_to_int(s[i+1]) if bot == -1: raise OperationError(space.w_ValueError, space.wrap( "non-hexadecimal number found in fromhex() arg at position %d" % (i+1,))) data.append(chr(top*16 + bot)) + return data +def descr_fromhex(space, w_type, w_hexstring): + "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " + "from a string of hexadecimal numbers.\nSpaces between two numbers are " + "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " + "bytearray(b'\\xb9\\x01\\xef')." + if not space.is_w(space.type(w_hexstring), space.w_unicode): + raise OperationError(space.w_TypeError, space.wrap( + "must be str, not %s" % space.type(w_hexstring).name)) + hexstring = space.unicode_w(w_hexstring) + + data = _hexstring_to_array(space, hexstring) # in CPython bytearray.fromhex is a staticmethod, so # we ignore w_type and always return a bytearray return new_bytearray(space, space.w_bytearray, data) diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -20,7 +20,7 @@ def getitem(self, w_dict, w_key): space = self.space w_lookup_type = space.type(w_key) - if space.is_w(w_lookup_type, space.w_str): + if space.is_w(w_lookup_type, space.w_unicode): return self.getitem_str(w_dict, space.str_w(w_key)) else: return None @@ -30,7 +30,7 @@ def setitem(self, w_dict, w_key, w_value): space = self.space - if space.is_w(space.type(w_key), space.w_str): + if space.is_w(space.type(w_key), space.w_unicode): self.setitem_str(w_dict, self.space.str_w(w_key), w_value) else: raise OperationError(space.w_TypeError, space.wrap("cannot add non-string keys to dict of a type")) @@ -60,7 +60,7 @@ def delitem(self, w_dict, w_key): space = self.space w_key_type = space.type(w_key) - if space.is_w(w_key_type, space.w_str): + if space.is_w(w_key_type, space.w_unicode): key = self.space.str_w(w_key) if not self.unerase(w_dict.dstorage).deldictvalue(space, key): raise KeyError diff --git a/pypy/objspace/std/stringtype.py b/pypy/objspace/std/stringtype.py --- a/pypy/objspace/std/stringtype.py +++ b/pypy/objspace/std/stringtype.py @@ -342,14 +342,43 @@ W_StringObject.__init__(w_obj, value) return w_obj +def descr_fromhex(space, w_type, w_hexstring): + "bytes.fromhex(string) -> bytes\n" + "\n" + "Create a bytes object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytes.fromhex('B9 01EF') -> bytes(b'\\xb9\\x01\\xef')." + from pypy.objspace.std.bytearraytype import _hexstring_to_array + if not space.is_w(space.type(w_hexstring), space.w_unicode): + raise OperationError(space.w_TypeError, space.wrap( + "must be str, not %s" % space.type(w_hexstring).name)) + hexstring = space.unicode_w(w_hexstring) + chars = ''.join(_hexstring_to_array(space, hexstring)) + if space.config.objspace.std.withrope: + from pypy.objspace.std.ropeobject import rope, W_RopeObject + w_obj = space.allocate_instance(W_RopeObject, w_type) + W_RopeObject.__init__(w_obj, rope.LiteralStringNode(chars)) + return w_obj + else: + from pypy.objspace.std.stringobject import W_StringObject + w_obj = space.allocate_instance(W_StringObject, w_type) + W_StringObject.__init__(w_obj, chars) + return w_obj + # ____________________________________________________________ str_typedef = StdTypeDef("bytes", __new__ = gateway.interp2app(descr__new__), - __doc__ = '''str(object) -> string - -Return a nice string representation of the object. -If the argument is a string, the return value is the same object.''' + __doc__ = 'bytes(iterable_of_ints) -> bytes\n' + 'bytes(string, encoding[, errors]) -> bytes\n' + 'bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer\n' + 'bytes(memory_view) -> bytes\n\n' + 'Construct an immutable array of bytes from:\n' + ' - an iterable yielding integers in range(256)\n' + ' - a text string encoded using the specified encoding\n' + ' - a bytes or a buffer object\n' + ' - any object implementing the buffer API.', + fromhex = gateway.interp2app(descr_fromhex, as_classmethod=True) ) str_typedef.registermethods(globals()) diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -99,6 +99,14 @@ import operator raises(TypeError, operator.mod, b"%s", (1,)) + def test_fromhex(self): + assert bytes.fromhex("abcd") == b'\xab\xcd' + assert b''.fromhex("abcd") == b'\xab\xcd' + assert bytes.fromhex("ab cd ef") == b'\xab\xcd\xef' + raises(TypeError, bytes.fromhex, b"abcd") + raises(TypeError, bytes.fromhex, True) + raises(ValueError, bytes.fromhex, "hello world") + def test_split(self): assert b"".split() == [] assert b"".split(b'x') == [b''] diff --git a/pypy/objspace/std/unicodetype.py b/pypy/objspace/std/unicodetype.py --- a/pypy/objspace/std/unicodetype.py +++ b/pypy/objspace/std/unicodetype.py @@ -340,7 +340,7 @@ unicode_typedef = StdTypeDef("str", __new__ = gateway.interp2app(descr_new_), - __doc__ = '''unicode(string [, encoding[, errors]]) -> object + __doc__ = '''str(string [, encoding[, errors]]) -> object Create a new Unicode object from the given encoded string. encoding defaults to the current default string encoding. diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -163,3 +163,18 @@ return nt._getfullpathname(path) else: return nt._getfullpathname(path.as_bytes()) + + at specialize.argtype(0, 1) +def putenv(name, value): + if isinstance(name, str): + os.environ[name] = value + else: + os.environ[name.as_bytes()] = value.as_bytes() + + at specialize.argtype(0) +def unsetenv(name): + if isinstance(name, str): + del os.environ[name] + else: + del os.environ[name.as_bytes()] + From noreply at buildbot.pypy.org Mon Nov 7 21:32:23 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 7 Nov 2011 21:32:23 +0100 (CET) Subject: [pypy-commit] pypy py3k: fixes for app_main Message-ID: <20111107203223.7C190820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48884:2a365dee5e98 Date: 2011-11-07 15:32 -0500 http://bitbucket.org/pypy/pypy/changeset/2a365dee5e98/ Log: fixes for app_main diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -120,7 +120,7 @@ except AttributeError: print('no translation information found', file=sys.stderr) else: - optitems = options.items() + optitems = list(options.items()) optitems.sort() for name, value in optitems: print(' %51s: %s' % (name, value)) @@ -138,7 +138,7 @@ def _print_jit_help(): import pypyjit - items = pypyjit.defaults.items() + items = list(pypyjit.defaults.items()) items.sort() for key, value in items: print(' --jit %s=N %slow-level JIT parameter (default %s)' % ( @@ -304,7 +304,7 @@ newline=newline, line_buffering=line_buffering) return stream - + def set_io_encoding(io_encoding): try: import _file @@ -510,7 +510,7 @@ unbuffered, ignore_environment, **ignored): - # with PyPy in top of CPython we can only have around 100 + # with PyPy in top of CPython we can only have around 100 # but we need more in the translated PyPy for the compiler package if '__pypy__' not in sys.builtin_module_names: sys.setrecursionlimit(5000) From noreply at buildbot.pypy.org Mon Nov 7 21:49:14 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Mon, 7 Nov 2011 21:49:14 +0100 (CET) Subject: [pypy-commit] pypy default: unpack tuple params, for py3k support Message-ID: <20111107204914.51490820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: Changeset: r48885:c07fe33e541d Date: 2011-11-07 12:48 -0800 http://bitbucket.org/pypy/pypy/changeset/c07fe33e541d/ Log: unpack tuple params, for py3k support diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: From noreply at buildbot.pypy.org Mon Nov 7 22:35:00 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 22:35:00 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20111107213500.3F418820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48886:df486c370688 Date: 2011-11-07 22:05 +0100 http://bitbucket.org/pypy/pypy/changeset/df486c370688/ Log: hg merge default diff too long, truncating to 10000 out of 10765 lines diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -189,7 +189,7 @@ RegrTest('test_dictviews.py', core=True), RegrTest('test_difflib.py'), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), RegrTest('test_docxmlrpc.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ from string import ascii_uppercase, ascii_lowercase diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -98,7 +98,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -119,7 +119,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -782,22 +782,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -806,26 +847,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -649,10 +649,13 @@ def malloc_basic(size, tid): type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1<' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -443,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -798,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -815,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -829,6 +844,10 @@ if self._is_gc(op.args[0]): return op + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] + def rewrite_op_force_cast(self, op): v_arg = op.args[0] v_result = op.result @@ -848,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -880,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1097,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -37,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -229,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import lltype, rclass, rstr from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -900,6 +901,67 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): from pypy.rpython.lltypesystem import rffi diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -576,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -598,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -1103,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -499,9 +499,12 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b @arguments("r", returns="i") def bhimpl_cast_ptr_to_int(a): i = lltype.cast_ptr_to_int(a) @@ -512,6 +515,10 @@ ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") return lltype.cast_int_to_ptr(llmemory.GCREF, i) + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass + @arguments("i", returns="i") def bhimpl_int_copy(a): return a @@ -630,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -16,6 +16,7 @@ INT = 'i' REF = 'r' FLOAT = 'f' +STRUCT = 's' HOLE = '_' VOID = 'v' @@ -172,6 +173,11 @@ """ raise NotImplementedError + def is_array_of_structs(self): + """ Implement for array descr + """ + raise NotImplementedError + def is_pointer_field(self): """ Implement for field descr """ @@ -923,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -937,6 +946,15 @@ self.aborted_keys = [] self.invalidated_token_numbers = set() + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) @@ -225,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,36 +6,18 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -126,14 +109,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,72 +151,84 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) + + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,12 +1,12 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -209,13 +220,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -230,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -244,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -283,11 +302,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -311,20 +330,20 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -346,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -392,6 +412,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -477,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) @@ -524,7 +546,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -337,7 +332,7 @@ def optimize_INT_IS_ZERO(self, op): self._optimize_nullness(op, op.getarg(0), False) - def _optimize_oois_ooisnot(self, op, expect_isnot): + def _optimize_oois_ooisnot(self, op, expect_isnot, instance): value0 = self.getvalue(op.getarg(0)) value1 = self.getvalue(op.getarg(1)) if value0.is_virtual(): @@ -355,21 +350,28 @@ elif value0 is value1: self.make_constant_int(op.result, not expect_isnot) else: - cls0 = value0.get_constant_class(self.optimizer.cpu) - if cls0 is not None: - cls1 = value1.get_constant_class(self.optimizer.cpu) - if cls1 is not None and not cls0.same_constant(cls1): - # cannot be the same object, as we know that their - # class is different - self.make_constant_int(op.result, expect_isnot) - return + if instance: + cls0 = value0.get_constant_class(self.optimizer.cpu) + if cls0 is not None: + cls1 = value1.get_constant_class(self.optimizer.cpu) + if cls1 is not None and not cls0.same_constant(cls1): + # cannot be the same object, as we know that their + # class is different + self.make_constant_int(op.result, expect_isnot) + return self.emit_operation(op) + def optimize_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, False) + def optimize_PTR_NE(self, op): - self._optimize_oois_ooisnot(op, True) + self._optimize_oois_ooisnot(op, True, False) - def optimize_PTR_EQ(self, op): - self._optimize_oois_ooisnot(op, False) + def optimize_INSTANCE_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, True) + + def optimize_INSTANCE_PTR_NE(self, op): + self._optimize_oois_ooisnot(op, True, True) ## def optimize_INSTANCEOF(self, op): ## value = self.getvalue(op.args[0]) @@ -437,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) @@ -448,6 +459,9 @@ if v2.is_constant() and v2.box.getint() == 1: self.make_equal_to(op.result, v1) return + elif v1.is_constant() and v1.box.getint() == 0: + self.make_constant_int(op.result, 0) + return if v1.intbound.known_ge(IntBound(0, 0)) and v2.is_constant(): val = v2.box.getint() if val & (val - 1) == 0 and val > 0: # val == 2**shift @@ -455,10 +469,9 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -508,13 +509,13 @@ ops = """ [p0] guard_class(p0, ConstClass(node_vtable)) [] - i0 = ptr_ne(p0, NULL) + i0 = instance_ptr_ne(p0, NULL) guard_true(i0) [] - i1 = ptr_eq(p0, NULL) + i1 = instance_ptr_eq(p0, NULL) guard_false(i1) [] - i2 = ptr_ne(NULL, p0) + i2 = instance_ptr_ne(NULL, p0) guard_true(i0) [] - i3 = ptr_eq(NULL, p0) + i3 = instance_ptr_eq(NULL, p0) guard_false(i1) [] jump(p0) """ @@ -680,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -935,7 +971,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +986,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -2026,7 +2110,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -2181,6 +2265,17 @@ """ self.optimize_loop(ops, expected) + ops = """ + [i0] + i1 = int_floordiv(0, i0) + jump(i1) + """ + expected = """ + [i0] + jump(0) + """ + self.optimize_loop(ops, expected) + def test_fold_partially_constant_ops_ovf(self): ops = """ [i0] @@ -4063,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4165,15 +4292,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -4653,11 +4803,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4665,21 +4815,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4688,6 +4858,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4770,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4781,14 +4982,23 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) + def test_ptr_eq_str_constant(self): + ops = """ + [] + i0 = ptr_eq(s"abc", s"\x00") + finish(i0) + """ + expected = """ + [] + finish(0) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -2168,13 +2183,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -2683,7 +2698,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -3331,7 +3346,7 @@ jump(p1, i1, i2, i6) ''' self.optimize_loop(ops, expected, preamble) - + # ---------- @@ -4783,6 +4798,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -5800,10 +5861,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -6233,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6248,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ @@ -7280,7 +7347,7 @@ ops = """ [p1, p2] setarrayitem_gc(p1, 2, 10, descr=arraydescr) - setarrayitem_gc(p2, 3, 13, descr=arraydescr) + setarrayitem_gc(p2, 3, 13, descr=arraydescr) call(0, p1, p2, 0, 0, 10, descr=arraycopydescr) jump(p1, p2) """ @@ -7307,6 +7374,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -185,6 +185,18 @@ EffectInfo([], [arraydescr], [], [arraydescr], oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -240,7 +252,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -59,7 +59,7 @@ def import_from(self, other, optimizer): raise NotImplementedError("should not be called at this level") - + def get_fielddescrlist_cache(cpu): if not hasattr(cpu, '_optimizeopt_fielddescrlist_cache'): result = descrlist_dict() @@ -113,7 +113,7 @@ # if not we_are_translated(): op.name = 'FORCE ' + self.source_op.name - + if self._is_immutable_and_filled_with_constants(optforce): box = optforce.optimizer.constant_fold(op) self.make_constant(box) @@ -239,12 +239,12 @@ for index in range(len(self._items)): self._items[index] = self._items[index].force_at_end_of_preamble(already_forced, optforce) return self - + def _really_force(self, optforce): assert self.source_op is not None if not we_are_translated(): self.source_op.name = 'FORCE ' + self.source_op.name - optforce.emit_operation(self.source_op) + optforce.emit_operation(self.source_op) self.box = box = self.source_op.result for index in range(len(self._items)): subvalue = self._items[index] @@ -271,20 +271,91 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." def new(self): return OptVirtualize() - + def make_virtual(self, known_class, box, source_op=None): vvalue = VirtualValue(self.optimizer.cpu, known_class, box, source_op) self.make_equal_to(box, vvalue) return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -431,6 +502,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -479,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -501,12 +574,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +601,16 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +647,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +665,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,8 +1,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,7 +107,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +120,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,53 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) - - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -180,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -226,18 +255,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +301,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) @@ -312,6 +317,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -322,6 +328,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -408,6 +415,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -441,11 +449,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -467,6 +484,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -508,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -522,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -538,13 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -165,7 +165,7 @@ if not we_are_translated(): for b in registers[count:]: assert not oldbox.same_box(b) - + def make_result_of_lastop(self, resultbox): got_type = resultbox.type @@ -199,7 +199,7 @@ 'float_add', 'float_sub', 'float_mul', 'float_truediv', 'float_lt', 'float_le', 'float_eq', 'float_ne', 'float_gt', 'float_ge', - 'ptr_eq', 'ptr_ne', + 'ptr_eq', 'ptr_ne', 'instance_ptr_eq', 'instance_ptr_ne', ]: exec py.code.Source(''' @arguments("box", "box") @@ -240,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): @@ -604,7 +604,7 @@ opimpl_setinteriorfield_gc_i = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_f = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_r = _opimpl_setinteriorfield_gc_any - + @arguments("box", "descr") def _opimpl_getfield_raw_any(self, box, fielddescr): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version @@ -404,8 +407,8 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', - 'CAST_FLOAT_TO_INT/1', - 'CAST_INT_TO_FLOAT/1', + 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would + 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', 'CAST_SINGLEFLOAT_TO_FLOAT/1', # @@ -437,7 +440,8 @@ # 'PTR_EQ/2b', 'PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', + 'INSTANCE_PTR_EQ/2b', + 'INSTANCE_PTR_NE/2b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -469,6 +473,7 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -139,7 +140,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +274,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +406,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -436,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -455,7 +461,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +543,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -546,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -599,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): @@ -884,6 +917,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1208,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3435,7 +3436,159 @@ return sa res = self.meta_interp(f, [16]) assert res == f(16) - + + def test_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "x"]) + class A(object): + def __init__(self, v): + self.v = v + def f(n, x): + while n > 0: + myjitdriver.jit_merge_point(n=n, x=x) + z = 0 / x + a1 = A("key") + a2 = A("\x00") + n -= [a1, a2][z].v is not a2.v + return n + res = self.meta_interp(f, [10, 1]) + assert res == 0 + + def test_instance_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "i", "a1", "a2"]) + class A(object): + pass + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + i += a is a1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + if a is a2: + i += 1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + + def test_virtual_array_of_structs(self): + myjitdriver = JitDriver(greens = [], reds=["n", "d"]) + def f(n): + d = None + while n > 0: + myjitdriver.jit_merge_point(n=n, d=d) + d = {"q": 1} + if n % 2: + d["k"] = n + else: + d["z"] = n + n -= len(d) - d["q"] + return n + res = self.meta_interp(f, [10]) + assert res == 0 + + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = {"key": n} + n = g(x) + del x["key"] + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3490,11 +3643,12 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): - from pypy.rlib.rerased import erase_int, unerase_int, new_erasing_pair - eraseX, uneraseX = new_erasing_pair("X") + eraseX, uneraseX = rerased.new_erasing_pair("X") # class X: def __init__(self, a, b): @@ -3507,19 +3661,20 @@ e = eraseX(X(i, j)) else: try: - e = erase_int(i) + e = rerased.erase_int(i) except OverflowError: return -42 if j & 1: x = uneraseX(e) return x.a - x.b else: - return unerase_int(e) + return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 diff --git a/pypy/jit/metainterp/test/test_float.py b/pypy/jit/metainterp/test/test_float.py --- a/pypy/jit/metainterp/test/test_float.py +++ b/pypy/jit/metainterp/test/test_float.py @@ -1,5 +1,6 @@ -import math +import math, sys from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.rlib.rarithmetic import intmask, r_uint class FloatTests: @@ -45,6 +46,34 @@ res = self.interp_operations(f, [-2.0]) assert res == -8.5 + def test_cast_float_to_int(self): + def g(f): + return int(f) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_float_to_uint(self): + def g(f): + return intmask(r_uint(f)) + res = self.interp_operations(g, [sys.maxint*2.0]) + assert res == intmask(long(sys.maxint*2.0)) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_int_to_float(self): + def g(i): + return float(i) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == -12345.0 + + def test_cast_uint_to_float(self): + def g(i): + return float(r_uint(i)) + res = self.interp_operations(g, [intmask(sys.maxint*2)]) + assert type(res) is float and res == float(sys.maxint*2) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == float(long(r_uint(-12345))) + class TestOOtype(FloatTests, OOJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -371,3 +371,17 @@ assert h.is_unescaped(box1) h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) assert not h.is_unescaped(box1) + + h = HeapCache() + h.new_array(box1, lengthbox1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, lengthbox2, box2]) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), [box1] + ) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck +from pypy.rlib.rstring import StringBuilder import py @@ -590,4 +591,14 @@ assert res == 4 self.check_operations_history(int_add_ovf=0) res = self.interp_operations(fn, [sys.maxint]) - assert res == 12 \ No newline at end of file + assert res == 12 + + def test_copy_str_content(self): + def fn(n): + a = StringBuilder() + x = [1] + a.append("hello world") + return x[0] + res = self.interp_operations(fn, [0]) + assert res == 1 + self.check_operations_history(getarrayitem_gc=0, getarrayitem_gc_pure=0 ) \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, @@ -62,7 +62,7 @@ clear_tcache() return jittify_and_run(interp, graph, args, backendopt=backendopt, **kwds) -def jittify_and_run(interp, graph, args, repeat=1, +def jittify_and_run(interp, graph, args, repeat=1, graph_and_interp_only=False, backendopt=False, trace_limit=sys.maxint, inline=False, loop_longevity=0, retrace_limit=5, function_threshold=4, @@ -93,6 +93,8 @@ jd.warmstate.set_param_max_retrace_guards(max_retrace_guards) jd.warmstate.set_param_enable_opts(enable_opts) warmrunnerdesc.finish() + if graph_and_interp_only: + return interp, graph res = interp.eval_graph(graph, args) if not kwds.get('translate_support_code', False): warmrunnerdesc.metainterp_sd.profiler.finish() @@ -157,6 +159,9 @@ def get_stats(): return pyjitpl._warmrunnerdesc.stats +def reset_stats(): + pyjitpl._warmrunnerdesc.stats.clear() + def get_translator(): return pyjitpl._warmrunnerdesc.translator diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,14 +4,21 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + def hash_name_mapper_callback(obj_name, userdata): state = global_state[0] assert state is not None @@ -55,22 +62,27 @@ class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -106,33 +118,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -146,12 +154,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -159,6 +169,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -176,6 +187,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -16,9 +16,6 @@ self.space.getexecutioncontext().checksignals() class W_RSocket(Wrappable, RSocket): - def __del__(self): - self.close() - def _accept_w(self, space): """_accept() -> (socket object, address info) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -211,7 +211,9 @@ return result def __del__(self): - self.clear_all_weakrefs() + # note that we don't call clear_all_weakrefs here because + # an array with freed buffer is ok to see - it's just empty with 0 + # length self.setlen(0) def setlen(self, size): diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -788,6 +788,22 @@ r = weakref.ref(a) assert r() is a + def test_subclass_del(self): + import array, gc, weakref + l = [] + + class A(array.array): + pass + + a = A('d') + a.append(3.0) + r = weakref.ref(a, lambda a: l.append(a())) + del a + gc.collect(); gc.collect() # XXX needs two of them right now... + assert l + assert l[0] is None or len(l[0]) == 0 + + class DontTestCPythonsOwnArray(BaseArrayTests): def setup_class(cls): @@ -808,11 +824,7 @@ cls.w_tempfile = cls.space.wrap( str(py.test.ensuretemp('array').join('tmpfile'))) cls.w_maxint = cls.space.wrap(sys.maxint) - - - - - + def test_buffer_info(self): a = self.array('b', b'Hi!') bi = a.buffer_info() diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -391,6 +391,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -237,6 +237,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith('': + if isinstance(w_rhs, Scalar): + index = int(interp.space.float_w( + w_rhs.value.wrap(interp.space))) + dtype = interp.space.fromcache(W_Float64Dtype) + return Scalar(dtype, w_lhs.get_concrete().eval(index)) + else: + raise NotImplementedError else: - print "Unknown opcode: %s" % b - raise BogusBytecode() - if len(stack) != 1: - print "Bogus bytecode, uneven stack length" - raise BogusBytecode() - return stack[0] + raise NotImplementedError + if not isinstance(w_res, BaseArray): + dtype = interp.space.fromcache(W_Float64Dtype) + w_res = scalar_w(interp.space, dtype, w_res) + return w_res + + def __repr__(self): + return '(%r %s %r)' % (self.lhs, self.name, self.rhs) + +class FloatConstant(Node): + def __init__(self, v): + self.v = float(v) + + def __repr__(self): + return "Const(%s)" % self.v + + def wrap(self, space): + return space.wrap(self.v) + + def execute(self, interp): + dtype = interp.space.fromcache(W_Float64Dtype) + assert isinstance(dtype, W_Float64Dtype) + return Scalar(dtype, dtype.box(self.v)) + +class RangeConstant(Node): + def __init__(self, v): + self.v = int(v) + + def execute(self, interp): + w_list = interp.space.newlist( + [interp.space.wrap(float(i)) for i in range(self.v)]) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return 'Range(%s)' % self.v + +class Code(Node): + def __init__(self, statements): + self.statements = statements + + def __repr__(self): + return "\n".join([repr(i) for i in self.statements]) + +class ArrayConstant(Node): + def __init__(self, items): + self.items = items + + def wrap(self, space): + return space.newlist([item.wrap(space) for item in self.items]) + + def execute(self, interp): + w_list = self.wrap(interp.space) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return "[" + ", ".join([repr(item) for item in self.items]) + "]" + +class SliceConstant(Node): + def __init__(self): + pass + + def __repr__(self): + return 'slice()' + +class Execute(Node): + def __init__(self, expr): + self.expr = expr + + def __repr__(self): + return repr(self.expr) + + def execute(self, interp): + interp.results.append(self.expr.execute(interp)) + +class FunctionCall(Node): + def __init__(self, name, args): + self.name = name + self.args = args + + def __repr__(self): + return "%s(%s)" % (self.name, ", ".join([repr(arg) + for arg in self.args])) + + def execute(self, interp): + if self.name in SINGLE_ARG_FUNCTIONS: + if len(self.args) != 1: + raise ArgumentMismatch + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray + if self.name == "sum": + w_res = arr.descr_sum(interp.space) + elif self.name == "prod": + w_res = arr.descr_prod(interp.space) + elif self.name == "max": + w_res = arr.descr_max(interp.space) + elif self.name == "min": + w_res = arr.descr_min(interp.space) + elif self.name == "any": + w_res = arr.descr_any(interp.space) + elif self.name == "all": + w_res = arr.descr_all(interp.space) + elif self.name == "unegative": + neg = interp_ufuncs.get(interp.space).negative + w_res = neg.call(interp.space, [arr]) + else: + assert False # unreachable code + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = interp.space.fromcache(W_Float64Dtype) + elif isinstance(w_res, BoolObject): + dtype = interp.space.fromcache(W_BoolDtype) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) + else: + raise WrongFunctionName + +class Parser(object): + def parse_identifier(self, id): + id = id.strip(" ") + #assert id.isalpha() + return Variable(id) + + def parse_expression(self, expr): + tokens = [i for i in expr.split(" ") if i] + if len(tokens) == 1: + return self.parse_constant_or_identifier(tokens[0]) + stack = [] + tokens.reverse() + while tokens: + token = tokens.pop() + if token == ')': + raise NotImplementedError + elif self.is_identifier_or_const(token): + if stack: + name = stack.pop().name + lhs = stack.pop() + rhs = self.parse_constant_or_identifier(token) + stack.append(Operator(lhs, name, rhs)) + else: + stack.append(self.parse_constant_or_identifier(token)) + else: + stack.append(Variable(token)) + assert len(stack) == 1 + return stack[-1] + + def parse_constant(self, v): + lgt = len(v)-1 + assert lgt >= 0 + if ':' in v: + # a slice + assert v == ':' + return SliceConstant() + if v[0] == '[': + return ArrayConstant([self.parse_constant(elem) + for elem in v[1:lgt].split(",")]) + if v[0] == '|': + return RangeConstant(v[1:lgt]) + return FloatConstant(v) + + def is_identifier_or_const(self, v): + c = v[0] + if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or + (c >= '0' and c <= '9') or c in '-.[|:'): + if v == '-' or v == "->": + return False + return True + return False + + def parse_function_call(self, v): + l = v.split('(') + assert len(l) == 2 + name = l[0] + cut = len(l[1]) - 1 + assert cut >= 0 + args = [self.parse_constant_or_identifier(id) + for id in l[1][:cut].split(",")] + return FunctionCall(name, args) + + def parse_constant_or_identifier(self, v): + c = v[0] + if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): + if '(' in v: + return self.parse_function_call(v) + return self.parse_identifier(v) + return self.parse_constant(v) + + def parse_array_subscript(self, v): + v = v.strip(" ") + l = v.split("[") + lgt = len(l[1]) - 1 + assert lgt >= 0 + rhs = self.parse_constant_or_identifier(l[1][:lgt]) + return l[0], rhs + + def parse_statement(self, line): + if '=' in line: + lhs, rhs = line.split("=") + lhs = lhs.strip(" ") + if '[' in lhs: + name, index = self.parse_array_subscript(lhs) + return ArrayAssignment(name, index, self.parse_expression(rhs)) + else: + return Assignment(lhs, self.parse_expression(rhs)) + else: + return Execute(self.parse_expression(line)) + + def parse(self, code): + statements = [] + for line in code.split("\n"): + if '#' in line: + line = line.split('#', 1)[0] + line = line.strip(" ") + if line: + statements.append(self.parse_statement(line)) + return Code(statements) + +def numpy_compile(code): + parser = Parser() + return InterpreterState(parser.parse(code)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -108,6 +108,12 @@ def setitem_w(self, space, storage, i, w_item): self.setitem(storage, i, self.unwrap(space, w_item)) + def fill(self, storage, item, start, stop): + storage = self.unerase(storage) + item = self.unbox(item) + for i in xrange(start, stop): + storage[i] = item + @specialize.argtype(1) def adapt_val(self, val): return self.box(rffi.cast(TP.TO.OF, val)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,6 +14,27 @@ any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'j', 'step', 'stop', 'source', 'dest']) +def descr_new_array(space, w_subtype, w_size_or_iterable, w_dtype=None): + l = space.listview(w_size_or_iterable) + if space.is_w(w_dtype, space.w_None): + w_dtype = None + for w_item in l: + w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) + if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + break + if w_dtype is None: + w_dtype = space.w_None + + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) + ) + arr = SingleDimArray(len(l), dtype=dtype) + i = 0 + for w_elem in l: + dtype.setitem_w(space, arr.storage, i, w_elem) + i += 1 + return arr + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature"] @@ -32,27 +53,6 @@ def add_invalidates(self, other): self.invalidates.append(other) - def descr__new__(space, w_subtype, w_size_or_iterable, w_dtype=None): - l = space.listview(w_size_or_iterable) - if space.is_w(w_dtype, space.w_None): - w_dtype = None - for w_item in l: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): - break - if w_dtype is None: - w_dtype = space.w_None - - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = SingleDimArray(len(l), dtype=dtype) - i = 0 - for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) - i += 1 - return arr - def _unaryop_impl(ufunc_name): def impl(self, space): return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) @@ -201,6 +201,9 @@ def descr_get_shape(self, space): return space.newtuple([self.descr_len(space)]) + def descr_get_size(self, space): + return space.wrap(self.find_size()) + def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) @@ -565,13 +568,12 @@ arr = SingleDimArray(size, dtype=dtype) one = dtype.adapt_val(1) - for i in xrange(size): - arr.dtype.setitem(arr.storage, i, one) + arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) BaseArray.typedef = TypeDef( 'numarray', - __new__ = interp2app(BaseArray.descr__new__.im_func), + __new__ = interp2app(descr_new_array), __len__ = interp2app(BaseArray.descr_len), @@ -608,6 +610,7 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape), + size = GetSetProperty(BaseArray.descr_get_size), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -32,11 +32,17 @@ return self.identity.wrap(space) def descr_call(self, space, __args__): - try: - args_w = __args__.fixedunpack(self.argcount) - except ValueError, e: - raise OperationError(space.w_TypeError, space.wrap(str(e))) - return self.call(space, args_w) + if __args__.keywords or len(__args__.arguments_w) < self.argcount: + raise OperationError(space.w_ValueError, + space.wrap("invalid number of arguments") + ) + elif len(__args__.arguments_w) > self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar @@ -236,22 +242,20 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - w_type = space.type(w_obj) - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) long_dtype = space.fromcache(interp_dtype.W_LongDtype) int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - if space.is_w(w_type, space.w_bool): + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype return current_guess - elif space.is_w(w_type, space.w_int): + elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype): return long_dtype return current_guess - elif space.is_w(w_type, space.w_long): + elif space.isinstance_w(w_obj, space.w_long): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_compile.py @@ -0,0 +1,170 @@ + +import py +from pypy.module.micronumpy.compile import * + +class TestCompiler(object): + def compile(self, code): + return numpy_compile(code) + + def test_vars(self): + code = """ + a = 2 + b = 3 + """ + interp = self.compile(code) + assert isinstance(interp.code.statements[0], Assignment) + assert interp.code.statements[0].name == 'a' + assert interp.code.statements[0].expr.v == 2 + assert interp.code.statements[1].name == 'b' + assert interp.code.statements[1].expr.v == 3 + + def test_array_literal(self): + code = "a = [1,2,3]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [FloatConstant(1), FloatConstant(2), + FloatConstant(3)] + + def test_array_literal2(self): + code = "a = [[1],[2],[3]]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [ArrayConstant([FloatConstant(1)]), + ArrayConstant([FloatConstant(2)]), + ArrayConstant([FloatConstant(3)])] + + def test_expr_1(self): + code = "b = a + 1" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Variable("a"), "+", FloatConstant(1))) + + def test_expr_2(self): + code = "b = a + b - 3" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Operator(Variable("a"), "+", Variable("b")), "-", + FloatConstant(3))) + + def test_expr_3(self): + # an equivalent of range + code = "a = |20|" + interp = self.compile(code) + assert interp.code.statements[0].expr == RangeConstant(20) + + def test_expr_only(self): + code = "3 + a" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(FloatConstant(3), "+", Variable("a"))) + + def test_array_access(self): + code = "a -> 3" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(Variable("a"), "->", FloatConstant(3))) + + def test_function_call(self): + code = "sum(a)" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + FunctionCall("sum", [Variable("a")])) + + def test_comment(self): + code = """ + # some comment + a = b + 3 # another comment + """ + interp = self.compile(code) + assert interp.code.statements[0] == Assignment( + 'a', Operator(Variable('b'), "+", FloatConstant(3))) + +class TestRunner(object): + def run(self, code): + interp = numpy_compile(code) + space = FakeSpace() + interp.run(space) + return interp + + def test_one(self): + code = """ + a = 3 + b = 4 + a + b + """ + interp = self.run(code) + assert sorted(interp.variables.keys()) == ['a', 'b'] + assert interp.results[0] + + def test_array_add(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b + """ + interp = self.run(code) + assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + + def test_array_getitem(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 3 + 6 + + def test_range_getitem(self): + code = """ + r = |20| + 3 + r -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 6 + + def test_sum(self): + code = """ + a = [1,2,3,4,5] + r = sum(a) + r + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_array_write(self): + code = """ + a = [1,2,3,4,5] + a[3] = 15 + a -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_min(self): + interp = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert interp.results[0].value.val == -24 + + def test_max(self): + interp = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert interp.results[0].value.val == 256 + + def test_slice(self): + py.test.skip("in progress") + interp = self.run(""" + a = [1,2,3,4] + b = a -> : + b -> 3 + """) + assert interp.results[0].value.val == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -36,37 +36,40 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + import numpy - a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + a = numpy.array([0, 1, 2, 2.5], dtype='?') + assert a[0] is numpy.False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is numpy.True_ def test_copy_array_with_dtype(self): - from numpy import array - a = array([0, 1, 2, 3], dtype=long) + import numpy + + a = numpy.array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + a = numpy.array([0, 1, 2, 3], dtype=bool) + assert a[0] is numpy.False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is numpy.False_ def test_zeros_bool(self): - from numpy import zeros - a = zeros(10, dtype=bool) + import numpy + + a = numpy.zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is numpy.False_ def test_ones_bool(self): - from numpy import ones - a = ones(10, dtype=bool) + import numpy + + a = numpy.ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is numpy.True_ def test_zeros_long(self): from numpy import zeros @@ -77,7 +80,7 @@ def test_ones_long(self): from numpy import ones - a = ones(10, dtype=bool) + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 @@ -96,8 +99,9 @@ def test_bool_binop_types(self): from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -17,6 +17,14 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + a = array([1, 2, 3]) + assert a.size == 3 + assert (a + a).size == 3 + def test_empty(self): """ Test that empty() works. @@ -214,7 +222,7 @@ def test_add_other(self): from numpy import array a = array(range(5)) - b = array(reversed(range(5))) + b = array(range(4, -1, -1)) c = a + b for i in range(5): assert c[i] == 4 @@ -264,18 +272,19 @@ assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpy + + a = numpy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpy.dtype(bool) + assert b[0] is numpy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpy.True_ def test_mul_constant(self): from numpy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -24,10 +24,10 @@ def test_wrong_arguments(self): from numpy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): from numpy import negative, sign, minimum @@ -82,6 +82,8 @@ b = negative(a) a[0] = 5.0 assert b[0] == 5.0 + a = array(range(30)) + assert negative(a + a)[3] == -6 def test_abs(self): from numpy import array, absolute @@ -355,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,253 +1,195 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature -from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject) -from pypy.module.micronumpy.interp_dtype import W_Int32Dtype, W_Float64Dtype, W_Int64Dtype, W_UInt64Dtype -from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, - SingleDimSlice, scalar_w) +from pypy.module.micronumpy.compile import (FakeSpace, + FloatObject, IntObject, numpy_compile, BoolObject) +from pypy.module.micronumpy.interp_numarray import (SingleDimArray, + SingleDimSlice) from pypy.rlib.nonconst import NonConstant -from pypy.rpython.annlowlevel import llstr -from pypy.rpython.test.test_llinterp import interpret +from pypy.rpython.annlowlevel import llstr, hlstr +from pypy.jit.metainterp.warmspot import reset_stats +from pypy.jit.metainterp import pyjitpl import py class TestNumpyJIt(LLJitMixin): - def setup_class(cls): - cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - cls.int64_dtype = cls.space.fromcache(W_Int64Dtype) - cls.uint64_dtype = cls.space.fromcache(W_UInt64Dtype) - cls.int32_dtype = cls.space.fromcache(W_Int32Dtype) + graph = None + interp = None + + def run(self, code): + space = FakeSpace() + + def f(code): + interp = numpy_compile(hlstr(code)) + interp.run(space) + res = interp.results[-1] + w_res = res.eval(0).wrap(interp.space) + if isinstance(w_res, BoolObject): + return float(w_res.boolval) + elif isinstance(w_res, FloatObject): + return w_res.floatval + elif isinstance(w_res, IntObject): + return w_res.intval + else: + return -42. + + if self.graph is None: + interp, graph = self.meta_interp(f, [llstr(code)], + listops=True, + backendopt=True, + graph_and_interp_only=True) + self.__class__.interp = interp + self.__class__.graph = graph + + reset_stats() + pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() + return self.interp.eval_graph(self.graph, [llstr(code)]) def test_add(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ar, ar]) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + b -> 3 + """) self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) - assert result == f(5) + assert result == 3 + 3 def test_floatadd(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ - ar, - scalar_w(self.space, self.float64_dtype, self.space.wrap(4.5)) - ], - ) - assert isinstance(v, BaseArray) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + 3 + a -> 3 + """) + assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_sum(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + sum(b) + """) + assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_prod(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_prod(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + prod(b) + """) + expected = 1 + for i in range(30): + expected *= i * 2 + assert result == expected self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_max(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_max(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert result == 256 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_gt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, - "guard_false": 1, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_min(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_min(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert result == -24 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_argmin(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - return ar.descr_add(space, ar).descr_argmin(space).intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_all(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(1.0)) - j += 1 - return ar.descr_add(space, ar).descr_all(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, - "int_lt": 1, "guard_true": 2, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_any(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - return ar.descr_add(space, ar).descr_any(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = [0,0,0,0,0,0,0,0,0,0,0] + a[8] = -12 + b = a + a + any(b) + """) + assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, "guard_false": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) + "float_ne": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1, + "guard_false": 1}) def test_already_forced(self): - space = self.space - - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - assert isinstance(v1, BaseArray) - v2 = interp_ufuncs.get(self.space).multiply.call(space, [v1, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - v1.force_if_needed() - assert isinstance(v2, BaseArray) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + 4.5 + b -> 5 # forces + c = b * 8 + c -> 5 + """) + assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - assert result == f(5) def test_ufunc(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + """) + assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - assert result == f(5) - def test_appropriate_specialization(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - for i in xrange(5): - v1 = interp_ufuncs.get(self.space).multiply.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - self.meta_interp(f, [5], listops=True, backendopt=True) + def test_specialization(self): + self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + """) # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) + +class TestNumpyOld(LLJitMixin): + def setup_class(cls): + from pypy.module.micronumpy.compile import FakeSpace + from pypy.module.micronumpy.interp_dtype import W_Float64Dtype + + cls.space = FakeSpace() + cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) + def test_slice(self): def f(i): step = 3 @@ -332,17 +274,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) -class TestTranslation(object): - def test_compile(self): - x = numpy_compile('aa+f*f/a-', 10) - x = x.compute() - assert isinstance(x, SingleDimArray) - assert x.size == 10 - assert x.eval(0).val == 0 - assert x.eval(1).val == ((1 + 1) * 1.2) / 1.2 - 1 - - def test_translation(self): - # we import main to check if the target compiles - from pypy.translator.goal.targetnumpystandalone import main - - interpret(main, [llstr('af+'), 100]) diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -16,7 +16,8 @@ if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', - '__pypy__', 'cStringIO', '_collections', 'struct']: + '__pypy__', 'cStringIO', '_collections', 'struct', + 'mmap']: return True return False diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -465,3 +465,25 @@ setfield_gc(p4, p22, descr=) jump(p0, p1, p2, p3, p4, p7, p22, p7, descr=) """) + + def test_kwargs_virtual(self): + def main(n): + def g(**kwargs): + return kwargs["x"] + 1 + + i = 0 + while i < n: + i = g(x=i) + return i + + log = self.run(main, [500]) + assert log.result == 500 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i2 = int_lt(i0, i1) + guard_true(i2, descr=...) + i3 = force_token() + i4 = int_add(i0, 1) + --TICK-- + jump(..., descr=...) + """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -44,9 +44,9 @@ # gc_id call is hoisted out of the loop, the id of a value obviously # can't change ;) assert loop.match_by_id("getitem", """ - i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_objectPtr_Signed), p18, p6, i25, descr=...) + i26 = call(ConstClass(ll_dict_lookup), p18, p6, i25, descr=...) ... - p33 = getinteriorfield_gc(p31, i26, >) + p33 = getinteriorfield_gc(p31, i26, descr=>) ... """) @@ -69,4 +69,51 @@ i9 = int_add(i5, 1) --TICK-- jump(..., descr=...) + """) + + def test_non_virtual_dict(self): + def main(n): + i = 0 + while i < n: + d = {str(i): i} + i += d[str(i)] - i + 1 + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i8 = int_lt(i5, i7) + guard_true(i8, descr=...) + guard_not_invalidated(descr=...) + p10 = call(ConstClass(ll_int_str), i5, descr=) + guard_no_exception(descr=...) + i12 = call(ConstClass(ll_strhash), p10, descr=) + p13 = new(descr=...) + p15 = new_array(8, descr=) + setfield_gc(p13, p15, descr=) + i17 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + setfield_gc(p13, 16, descr=) + guard_no_exception(descr=...) + p20 = new_with_vtable(ConstClass(W_IntObject)) + call(ConstClass(_ll_dict_setitem_lookup_done_trampoline), p13, p10, p20, i12, i17, descr=) + setfield_gc(p20, i5, descr=) + guard_no_exception(descr=...) + i23 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + guard_no_exception(descr=...) + i26 = int_and(i23, .*) + i27 = int_is_true(i26) + guard_false(i27, descr=...) + p28 = getfield_gc(p13, descr=) + p29 = getinteriorfield_gc(p28, i23, descr=>) + guard_nonnull_class(p29, ConstClass(W_IntObject), descr=...) + i31 = getfield_gc_pure(p29, descr=) + i32 = int_sub_ovf(i31, i5) + guard_no_overflow(descr=...) + i34 = int_add_ovf(i32, 1) + guard_no_overflow(descr=...) + i35 = int_add_ovf(i5, i34) + guard_no_overflow(descr=...) + --TICK-- + jump(p0, p1, p2, p3, p4, i35, p13, i7, descr=) """) \ No newline at end of file diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -245,6 +245,9 @@ if sys.platform != 'win32': @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) pytime.sleep(secs) else: from pypy.rlib import rwin32 @@ -265,6 +268,9 @@ OSError(EINTR, "sleep() interrupted")) @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) # as decreed by Guido, only the main thread can be # interrupted. main_thread = space.fromcache(State).main_thread diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -20,8 +20,9 @@ import sys import os raises(TypeError, rctime.sleep, "foo") - rctime.sleep(1.2345) - + rctime.sleep(0.12345) + raises(IOError, rctime.sleep, -1.0) + def test_clock(self): import time as rctime rctime.clock() diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -173,6 +174,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -226,17 +226,9 @@ return space.wrapbytes(''.join(w_bytearray.data)) def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) return start, stop, length def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -546,6 +546,12 @@ # Try to return int. return space.newtuple([space.int(w_num), space.int(w_den)]) +def float_is_integer__Float(space, w_float): + v = w_float.floatval + if not rfloat.isfinite(v): + return space.w_False + return space.wrap(math.floor(v) == v) + from pypy.objspace.std import floattype register_all(vars(), floattype) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -12,6 +12,7 @@ float_as_integer_ratio = SMM("as_integer_ratio", 1) +float_is_integer = SMM("is_integer", 1) float_hex = SMM("hex", 1) def descr_conjugate(space, w_float): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -54,7 +54,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: @@ -395,8 +400,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -66,19 +66,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -137,7 +129,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -163,6 +160,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -185,6 +183,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -212,21 +211,25 @@ (setobject.W_BaseSetObject, None) ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/newformat.py b/pypy/objspace/std/newformat.py --- a/pypy/objspace/std/newformat.py +++ b/pypy/objspace/std/newformat.py @@ -120,6 +120,8 @@ out.append_slice(s, last_literal, end) return out.build() + # This is only ever called if we're already unrolling _do_build_string + @jit.unroll_safe def _parse_field(self, start, end): s = self.template # Find ":" or "!" @@ -149,6 +151,7 @@ i += 1 return s[start:end], None, end + @jit.unroll_safe def _get_argument(self, name): # First, find the argument. space = self.space @@ -207,6 +210,7 @@ raise OperationError(space.w_IndexError, w_msg) return self._resolve_lookups(w_arg, name, i, end) + @jit.unroll_safe def _resolve_lookups(self, w_obj, name, start, end): # Resolve attribute and item lookups. space = self.space diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -88,11 +88,12 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) + interplevel_classes = {} for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: + if len(classes) >= 3: # XXX what does this 3 mean??! # W_Root, AnyXxx and actual object - self.gettypefor(type).interplevel_cls = classes[0][0] - + interplevel_classes[self.gettypefor(type)] = classes[0][0] + self._interplevel_classes = interplevel_classes def get_builtin_types(self): return self.builtin_types @@ -421,7 +422,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -429,7 +430,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): @@ -587,7 +589,7 @@ raise OperationError(self.w_TypeError, self.wrap("need type object")) if is_annotation_constant(w_type): - cls = w_type.interplevel_cls + cls = self._get_interplevel_cls(w_type) if cls is not None: assert w_inst is not None if isinstance(w_inst, cls): @@ -597,3 +599,9 @@ @specialize.arg_or_var(2) def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + + @specialize.memo() + def _get_interplevel_cls(self, w_type): + if not hasattr(self, "_interplevel_classes"): + return None # before running initialize + return self._interplevel_classes.get(w_type, None) diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -355,16 +355,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -12,6 +12,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rarithmetic import r_uint from pypy.tool.sourcetools import func_with_new_name +from pypy.objspace.std.inttype import wrapint class W_SmallIntObject(W_Object, UnboxedValue): __slots__ = 'intval' @@ -48,14 +49,36 @@ def delegate_SmallInt2Complex(space, w_small): return space.newcomplex(float(w_small.intval), 0.0) +def add__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval + w_b.intval) # cannot overflow + +def sub__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval - w_b.intval) # cannot overflow + +def floordiv__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval // w_b.intval) # cannot overflow + +div__SmallInt_SmallInt = floordiv__SmallInt_SmallInt + +def mod__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval % w_b.intval) # cannot overflow + +def divmod__SmallInt_SmallInt(space, w_a, w_b): + w = wrapint(space, w_a.intval // w_b.intval) # cannot overflow + z = wrapint(space, w_a.intval % w_b.intval) + return space.newtuple([w, z]) + def copy_multimethods(ns): """Copy integer multimethods for small int.""" for name, func in intobject.__dict__.iteritems(): if "__Int" in name: new_name = name.replace("Int", "SmallInt") - # Copy the function, so the annotator specializes it for - # W_SmallIntObject. - ns[new_name] = func_with_new_name(func, new_name) + if new_name not in ns: + # Copy the function, so the annotator specializes it for + # W_SmallIntObject. + ns[new_name] = func = func_with_new_name(func, new_name, globals=ns) + else: + ns[name] = func ns["get_integer"] = ns["pos__SmallInt"] = ns["int__SmallInt"] ns["get_negint"] = ns["neg__SmallInt"] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -47,6 +47,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +57,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -414,22 +414,14 @@ return space.wrapbytes(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -442,13 +434,13 @@ return space.newbool(self.find(chr(char)) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -482,8 +474,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -492,8 +484,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -635,20 +627,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrapbytes(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -660,15 +649,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrapbytes(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -55,8 +55,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_floatobject.py b/pypy/objspace/std/test/test_floatobject.py --- a/pypy/objspace/std/test/test_floatobject.py +++ b/pypy/objspace/std/test/test_floatobject.py @@ -63,6 +63,12 @@ def setup_class(cls): cls.w_py26 = cls.space.wrap(sys.version_info >= (2, 6)) + def test_isinteger(self): + assert (1.).is_integer() + assert not (1.1).is_integer() + assert not float("inf").is_integer() + assert not float("nan").is_integer() + def test_conjugate(self): assert (1.).conjugate() == 1. assert (-1.).conjugate() == -1. @@ -782,4 +788,4 @@ # divide by 0 raises(ZeroDivisionError, lambda: inf % 0) raises(ZeroDivisionError, lambda: inf // 0) - raises(ZeroDivisionError, divmod, inf, 0) \ No newline at end of file + raises(ZeroDivisionError, divmod, inf, 0) diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' @@ -801,6 +815,20 @@ l.__delslice__(0, 2) assert l == [3, 4] + def test_list_from_set(self): + l = ['a'] + l.__init__(set('b')) + assert l == ['b'] + + def test_list_from_generator(self): + l = ['a'] + g = (i*i for i in range(5)) + l.__init__(g) + assert l == [0, 1, 4, 9, 16] + l.__init__(g) + assert l == [] + assert list(g) == [] + class AppTestListFastSubscr: diff --git a/pypy/objspace/std/test/test_obj.py b/pypy/objspace/std/test/test_obj.py --- a/pypy/objspace/std/test/test_obj.py +++ b/pypy/objspace/std/test/test_obj.py @@ -102,3 +102,11 @@ def __repr__(self): return 123456 assert A().__str__() == 123456 + +def test_isinstance_shortcut(): + from pypy.objspace.std import objspace + space = objspace.StdObjSpace() + w_a = space.wrap("a") + space.type = None + space.isinstance_w(w_a, space.w_str) # does not crash + diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -14,11 +14,11 @@ def test_int_w_non_int(self): raises(OperationError,self.space.int_w,self.space.wrap(None)) - raises(OperationError,self.space.int_w,self.space.wrap("")) + raises(OperationError,self.space.int_w,self.space.wrap("")) def test_uint_w_non_int(self): raises(OperationError,self.space.uint_w,self.space.wrap(None)) - raises(OperationError,self.space.uint_w,self.space.wrap("")) + raises(OperationError,self.space.uint_w,self.space.wrap("")) def test_multimethods_defined_on(self): from pypy.objspace.std.stdtypedef import multimethods_defined_on @@ -49,14 +49,14 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject - + space = self.space - assert space.w_str.interplevel_cls is W_StringObject - assert space.w_int.interplevel_cls is W_IntObject + assert space._get_interplevel_cls(space.w_str) is W_StringObject + assert space._get_interplevel_cls(space.w_int) is W_IntObject class X(W_StringObject): def __init__(self): pass - + typedef = None assert space.isinstance_w(X(), space.w_str) diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -103,15 +103,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -122,7 +117,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -167,17 +162,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -102,7 +102,6 @@ 'instancetypedef', 'terminator', '_version_tag?', - 'interplevel_cls', ] # for config.objspace.std.getattributeshortcut @@ -117,9 +116,6 @@ # of the __new__ is an instance of the type w_bltin_new = None - interplevel_cls = None # not None for prebuilt instances of - # interpreter-level types - @dont_look_inside def __init__(w_self, space, name, bases_w, dict_w, overridetypedef=None): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -370,42 +370,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -414,7 +401,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -520,37 +507,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) diff --git a/pypy/pytest.ini b/pypy/pytest.ini --- a/pypy/pytest.ini +++ b/pypy/pytest.ini @@ -1,2 +1,2 @@ [pytest] -addopts = --assertmode=old \ No newline at end of file +addopts = --assertmode=old -rf diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -939,7 +939,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -993,26 +993,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1103,7 +1098,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -214,6 +214,10 @@ func._gc_no_collect_ = True return func +def is_light_finalizer(func): + func._is_light_finalizer_ = True + return func + # ____________________________________________________________ def get_rpy_roots(): @@ -255,6 +259,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -78,7 +78,7 @@ from pypy.rlib.rwin32 import HANDLE, LPHANDLE from pypy.rlib.rwin32 import NULL_HANDLE, INVALID_HANDLE_VALUE from pypy.rlib.rwin32 import DWORD, WORD, DWORD_PTR, LPDWORD - from pypy.rlib.rwin32 import BOOL, LPVOID, LPCVOID, LPCSTR, SIZE_T + from pypy.rlib.rwin32 import BOOL, LPVOID, LPCSTR, SIZE_T from pypy.rlib.rwin32 import INT, LONG, PLONG # export the constants inside and outside. see __init__.py @@ -174,9 +174,9 @@ DuplicateHandle = winexternal('DuplicateHandle', [HANDLE, HANDLE, HANDLE, LPHANDLE, DWORD, BOOL, DWORD], BOOL) CreateFileMapping = winexternal('CreateFileMappingA', [HANDLE, rwin32.LPSECURITY_ATTRIBUTES, DWORD, DWORD, DWORD, LPCSTR], HANDLE) MapViewOfFile = winexternal('MapViewOfFile', [HANDLE, DWORD, DWORD, DWORD, SIZE_T], LPCSTR)##!!LPVOID) - UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCVOID], BOOL, + UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCSTR], BOOL, threadsafe=False) - FlushViewOfFile = winexternal('FlushViewOfFile', [LPCVOID, SIZE_T], BOOL) + FlushViewOfFile = winexternal('FlushViewOfFile', [LPCSTR, SIZE_T], BOOL) SetFilePointer = winexternal('SetFilePointer', [HANDLE, LONG, PLONG, DWORD], DWORD) SetEndOfFile = winexternal('SetEndOfFile', [HANDLE], BOOL) VirtualAlloc = winexternal('VirtualAlloc', @@ -292,6 +292,9 @@ elif _POSIX: self.closed = True if self.fd != -1: + # XXX this is buggy - raising in an RPython del is not a good + # idea, we should swallow the exception or ignore the + # underlaying close error code os.close(self.fd) self.fd = -1 if self.size > 0: diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -111,7 +112,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') OBJ_NAME_st = rffi_platform.Struct( 'OBJ_NAME', @@ -164,7 +167,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -265,7 +268,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rlib/rsocket.py b/pypy/rlib/rsocket.py --- a/pypy/rlib/rsocket.py +++ b/pypy/rlib/rsocket.py @@ -56,6 +56,7 @@ _FAMILIES = {} + class Address(object): """The base class for RPython-level objects representing addresses. Fields: addr - a _c.sockaddr_ptr (memory owned by the Address instance) @@ -77,9 +78,8 @@ self.addrlen = addrlen def __del__(self): - addr = self.addr_p - if addr: - lltype.free(addr, flavor='raw') + if self.addr_p: + lltype.free(self.addr_p, flavor='raw') def setdata(self, addr, addrlen): # initialize self.addr and self.addrlen. 'addr' can be a different @@ -615,7 +615,10 @@ self.timeout = defaults.timeout def __del__(self): - self.close() + fd = self.fd + if fd != _c.INVALID_SOCKET: + self.fd = _c.INVALID_SOCKET + _c.socketclose(fd) if hasattr(_c, 'fcntl'): def _setblocking(self, block): diff --git a/pypy/rlib/rsre/rsre_core.py b/pypy/rlib/rsre/rsre_core.py --- a/pypy/rlib/rsre/rsre_core.py +++ b/pypy/rlib/rsre/rsre_core.py @@ -391,6 +391,8 @@ if self.num_pending >= min: while enum is not None and ptr == ctx.match_end: enum = enum.move_to_next_result(ctx) + # matched marks for zero-width assertions + marks = ctx.match_marks # if enum is not None: # matched one more 'item'. record it and continue. diff --git a/pypy/rlib/rsre/test/test_re.py b/pypy/rlib/rsre/test/test_re.py --- a/pypy/rlib/rsre/test/test_re.py +++ b/pypy/rlib/rsre/test/test_re.py @@ -226,6 +226,13 @@ (None, 'b', None)) assert pat.match('ac').group(1, 'b2', 3) == ('a', None, 'c') + def test_bug_923(self): + # Issue923: grouping inside optional lookahead problem + assert re.match(r'a(?=(b))?', "ab").groups() == ("b",) + assert re.match(r'(a(?=(b))?)', "ab").groups() == ('a', 'b') + assert re.match(r'(a)(?=(b))?', "ab").groups() == ('a', 'b') + assert re.match(r'(?Pa)(?=(?Pb))?', "ab").groupdict() == {'g1': 'a', 'g2': 'b'} + def test_re_groupref_exists(self): assert re.match('^(\()?([^()]+)(?(1)\))$', '(a)').groups() == ( ('(', 'a')) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1095,13 +1098,6 @@ assert y >= 0 return self.op_int_add_ovf(x, y) - def op_cast_float_to_int(self, f): - assert type(f) is float - try: - return ovfcheck(int(f)) - except OverflowError: - self.make_llexception() - def op_int_is_true(self, x): # special case if type(x) is CDefinedIntSymbolic: @@ -1325,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -343,8 +343,8 @@ 'cast_uint_to_float': LLOp(canfold=True), 'cast_longlong_to_float' :LLOp(canfold=True), 'cast_ulonglong_to_float':LLOp(canfold=True), - 'cast_float_to_int': LLOp(canraise=(OverflowError,), tryfold=True), - 'cast_float_to_uint': LLOp(canfold=True), # XXX need OverflowError? + 'cast_float_to_int': LLOp(canfold=True), + 'cast_float_to_uint': LLOp(canfold=True), 'cast_float_to_longlong' :LLOp(canfold=True), 'cast_float_to_ulonglong':LLOp(canfold=True), 'truncate_longlong_to_int':LLOp(canfold=True), @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' @@ -1713,6 +1713,7 @@ return v def setitem(self, index, value): + assert typeOf(value) == self._TYPE.OF self.items[index] = value assert not '__dict__' in dir(_array) diff --git a/pypy/rpython/lltypesystem/opimpl.py b/pypy/rpython/lltypesystem/opimpl.py --- a/pypy/rpython/lltypesystem/opimpl.py +++ b/pypy/rpython/lltypesystem/opimpl.py @@ -355,6 +355,10 @@ assert type(b) is bool return float(b) +def op_cast_float_to_int(f): + assert type(f) is float + return intmask(int(f)) + def op_cast_float_to_uint(f): assert type(f) is float return r_uint(long(f)) diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -452,9 +452,9 @@ i = ll_dict_lookup(d, key, hash) return _ll_dict_setitem_lookup_done(d, key, value, hash, i) -# Leaving as dont_look_inside ATM, it has a few branches which could lead to -# many bridges if we don't consider their possible frequency. - at jit.dont_look_inside +# It may be safe to look inside always, it has a few branches though, and their +# frequencies needs to be investigated. + at jit.look_inside_iff(lambda d, key, value, hash, i: jit.isvirtual(d) and jit.isconstant(key)) def _ll_dict_setitem_lookup_done(d, key, value, hash, i): valid = (i & HIGHEST_BIT) == 0 i = i & MASK @@ -508,8 +508,8 @@ return default # XXX: Move the size checking and resize into a single call which is opauqe to -# the JIT to avoid extra branches. - at jit.dont_look_inside +# the JIT when the dict isn't virtual, to avoid extra branches. + at jit.look_inside_iff(lambda d, i: jit.isvirtual(d) and jit.isconstant(i)) def _ll_dict_del(d, i): d.entries.mark_deleted(i) d.num_items -= 1 @@ -549,7 +549,7 @@ # ------- a port of CPython's dictobject.c's lookdict implementation ------- PERTURB_SHIFT = 5 - at jit.dont_look_inside + at jit.look_inside_iff(lambda d, key, hash: jit.isvirtual(d) and jit.isconstant(key)) def ll_dict_lookup(d, key, hash): entries = d.entries ENTRIES = lltype.typeOf(entries).TO diff --git a/pypy/rpython/lltypesystem/rpbc.py b/pypy/rpython/lltypesystem/rpbc.py --- a/pypy/rpython/lltypesystem/rpbc.py +++ b/pypy/rpython/lltypesystem/rpbc.py @@ -116,7 +116,7 @@ fields.append((row.attrname, row.fntype)) kwds = {'hints': {'immutable': True}} return Ptr(Struct('specfunc', *fields, **kwds)) - + def create_specfunc(self): return malloc(self.lowleveltype.TO, immortal=True) @@ -149,7 +149,8 @@ self.descriptions = list(self.s_pbc.descriptions) if self.s_pbc.can_be_None: self.descriptions.insert(0, None) - POINTER_TABLE = Array(self.pointer_repr.lowleveltype) + POINTER_TABLE = Array(self.pointer_repr.lowleveltype, + hints={'nolength': True}) pointer_table = malloc(POINTER_TABLE, len(self.descriptions), immortal=True) for i, desc in enumerate(self.descriptions): @@ -302,7 +303,8 @@ if r_to in r_from._conversion_tables: return r_from._conversion_tables[r_to] else: - t = malloc(Array(Char), len(r_from.descriptions), immortal=True) + t = malloc(Array(Char, hints={'nolength': True}), + len(r_from.descriptions), immortal=True) l = [] for i, d in enumerate(r_from.descriptions): if d in r_to.descriptions: @@ -314,7 +316,7 @@ if l == range(len(r_from.descriptions)): r = None else: - r = inputconst(Ptr(Array(Char)), t) + r = inputconst(Ptr(Array(Char, hints={'nolength': True})), t) r_from._conversion_tables[r_to] = r return r @@ -402,12 +404,12 @@ # ____________________________________________________________ -##def rtype_call_memo(hop): +##def rtype_call_memo(hop): ## memo_table = hop.args_v[0].value ## if memo_table.s_result.is_constant(): ## return hop.inputconst(hop.r_result, memo_table.s_result.const) -## fieldname = memo_table.fieldname -## assert hop.nb_args == 2, "XXX" +## fieldname = memo_table.fieldname +## assert hop.nb_args == 2, "XXX" ## r_pbc = hop.args_r[1] ## assert isinstance(r_pbc, (MultipleFrozenPBCRepr, ClassesPBCRepr)) diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -20,6 +20,7 @@ from pypy.rpython.rmodel import Repr from pypy.rpython.lltypesystem import llmemory from pypy.tool.sourcetools import func_with_new_name +from pypy.rpython.lltypesystem.lloperation import llop # ____________________________________________________________ # @@ -364,8 +365,10 @@ while lpos < rpos and s.chars[lpos] == ch: lpos += 1 if right: - while lpos < rpos and s.chars[rpos] == ch: + while lpos < rpos + 1 and s.chars[rpos] == ch: rpos -= 1 + if rpos < lpos: + return s.empty() r_len = rpos - lpos + 1 result = s.malloc(r_len) s.copy_contents(s, result, lpos, 0, r_len) diff --git a/pypy/rpython/memory/gc/base.py b/pypy/rpython/memory/gc/base.py --- a/pypy/rpython/memory/gc/base.py +++ b/pypy/rpython/memory/gc/base.py @@ -1,4 +1,5 @@ -from pypy.rpython.lltypesystem import lltype, llmemory, llarena +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi +from pypy.rpython.lltypesystem.lloperation import llop from pypy.rlib.debug import ll_assert from pypy.rpython.memory.gcheader import GCHeaderBuilder from pypy.rpython.memory.support import DEFAULT_CHUNK_SIZE @@ -62,6 +63,7 @@ def set_query_functions(self, is_varsize, has_gcptr_in_varsize, is_gcarrayofgcptr, getfinalizer, + getlightfinalizer, offsets_to_gc_pointers, fixed_size, varsize_item_sizes, varsize_offset_to_variable_part, @@ -74,6 +76,7 @@ get_custom_trace, fast_path_tracing): self.getfinalizer = getfinalizer + self.getlightfinalizer = getlightfinalizer self.is_varsize = is_varsize self.has_gcptr_in_varsize = has_gcptr_in_varsize self.is_gcarrayofgcptr = is_gcarrayofgcptr @@ -139,6 +142,7 @@ size = self.fixed_size(typeid) needs_finalizer = bool(self.getfinalizer(typeid)) + finalizer_is_light = bool(self.getlightfinalizer(typeid)) contains_weakptr = self.weakpointer_offset(typeid) >= 0 assert not (needs_finalizer and contains_weakptr) if self.is_varsize(typeid): @@ -158,6 +162,7 @@ else: malloc_fixedsize = self.malloc_fixedsize ref = malloc_fixedsize(typeid, size, needs_finalizer, + finalizer_is_light, contains_weakptr) # lots of cast and reverse-cast around... return llmemory.cast_ptr_to_adr(ref) diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -167,7 +167,9 @@ return self.nursery <= addr < self.nursery_top def malloc_fixedsize_clear(self, typeid, size, - has_finalizer=False, contains_weakptr=False): + has_finalizer=False, + is_finalizer_light=False, + contains_weakptr=False): if (has_finalizer or (raw_malloc_usage(size) > self.lb_young_fixedsize and raw_malloc_usage(size) > self.largest_young_fixedsize)): @@ -179,6 +181,7 @@ # "non-simple" case or object too big: don't use the nursery return SemiSpaceGC.malloc_fixedsize_clear(self, typeid, size, has_finalizer, + is_finalizer_light, contains_weakptr) size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size diff --git a/pypy/rpython/memory/gc/marksweep.py b/pypy/rpython/memory/gc/marksweep.py --- a/pypy/rpython/memory/gc/marksweep.py +++ b/pypy/rpython/memory/gc/marksweep.py @@ -93,7 +93,8 @@ pass def malloc_fixedsize(self, typeid16, size, - has_finalizer=False, contains_weakptr=False): + has_finalizer=False, is_finalizer_light=False, + contains_weakptr=False): self.maybe_collect() size_gc_header = self.gcheaderbuilder.size_gc_header try: @@ -128,7 +129,9 @@ malloc_fixedsize._dont_inline_ = True def malloc_fixedsize_clear(self, typeid16, size, - has_finalizer=False, contains_weakptr=False): + has_finalizer=False, + is_finalizer_light=False, + contains_weakptr=False): self.maybe_collect() size_gc_header = self.gcheaderbuilder.size_gc_header try: diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -290,6 +290,8 @@ # # A list of all objects with finalizers (these are never young). self.objects_with_finalizers = self.AddressDeque() + self.young_objects_with_light_finalizers = self.AddressStack() + self.old_objects_with_light_finalizers = self.AddressStack() # # Two lists of the objects with weakrefs. No weakref can be an # old object weakly pointing to a young object: indeed, weakrefs @@ -457,14 +459,16 @@ def malloc_fixedsize_clear(self, typeid, size, - needs_finalizer=False, contains_weakptr=False): + needs_finalizer=False, + is_finalizer_light=False, + contains_weakptr=False): size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size rawtotalsize = raw_malloc_usage(totalsize) # # If the object needs a finalizer, ask for a rawmalloc. # The following check should be constant-folded. - if needs_finalizer: + if needs_finalizer and not is_finalizer_light: ll_assert(not contains_weakptr, "'needs_finalizer' and 'contains_weakptr' both specified") obj = self.external_malloc(typeid, 0, can_make_young=False) @@ -494,13 +498,14 @@ # # Build the object. llarena.arena_reserve(result, totalsize) + obj = result + size_gc_header + if is_finalizer_light: + self.young_objects_with_light_finalizers.append(obj) self.init_gc_object(result, typeid, flags=0) # # If it is a weakref, record it (check constant-folded). if contains_weakptr: - self.young_objects_with_weakrefs.append(result+size_gc_header) - # - obj = result + size_gc_header + self.young_objects_with_weakrefs.append(obj) # return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) @@ -1264,6 +1269,8 @@ # weakrefs' targets. if self.young_objects_with_weakrefs.non_empty(): self.invalidate_young_weakrefs() + if self.young_objects_with_light_finalizers.non_empty(): + self.deal_with_young_objects_with_finalizers() # # Clear this mapping. if self.nursery_objects_shadows.length() > 0: @@ -1584,6 +1591,9 @@ # Weakref support: clear the weak pointers to dying objects if self.old_objects_with_weakrefs.non_empty(): self.invalidate_old_weakrefs() + if self.old_objects_with_light_finalizers.non_empty(): + self.deal_with_old_objects_with_finalizers() + # # Walk all rawmalloced objects and free the ones that don't # have the GCFLAG_VISITED flag. @@ -1649,8 +1659,7 @@ if self.header(obj).tid & GCFLAG_VISITED: self.header(obj).tid &= ~GCFLAG_VISITED return False # survives - else: - return True # dies + return True # dies def _reset_gcflag_visited(self, obj, ignored): self.header(obj).tid &= ~GCFLAG_VISITED @@ -1829,6 +1838,42 @@ # ---------- # Finalizers + def deal_with_young_objects_with_finalizers(self): + """ This is a much simpler version of dealing with finalizers + and an optimization - we can reasonably assume that those finalizers + don't do anything fancy and *just* call them. Among other things + they won't resurrect objects + """ + while self.young_objects_with_light_finalizers.non_empty(): + obj = self.young_objects_with_light_finalizers.pop() + if not self.is_forwarded(obj): + finalizer = self.getlightfinalizer(self.get_type_id(obj)) + ll_assert(bool(finalizer), "no light finalizer found") + finalizer(obj, llmemory.NULL) + else: + obj = self.get_forwarding_address(obj) + self.old_objects_with_light_finalizers.append(obj) + + def deal_with_old_objects_with_finalizers(self): + """ This is a much simpler version of dealing with finalizers + and an optimization - we can reasonably assume that those finalizers + don't do anything fancy and *just* call them. Among other things + they won't resurrect objects + """ + new_objects = self.AddressStack() + while self.old_objects_with_light_finalizers.non_empty(): + obj = self.old_objects_with_light_finalizers.pop() + if self.header(obj).tid & GCFLAG_VISITED: + # surviving + new_objects.append(obj) + else: + # dying + finalizer = self.getlightfinalizer(self.get_type_id(obj)) + ll_assert(bool(finalizer), "no light finalizer found") + finalizer(obj, llmemory.NULL) + self.old_objects_with_light_finalizers.delete() + self.old_objects_with_light_finalizers = new_objects + def deal_with_objects_with_finalizers(self): # Walk over list of objects with finalizers. # If it is not surviving, add it to the list of to-be-called @@ -1959,7 +2004,6 @@ # self.old_objects_with_weakrefs.append(obj) - def invalidate_old_weakrefs(self): """Called during a major collection.""" # walk over list of objects that contain weakrefs diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -82,6 +82,7 @@ self.free = self.tospace MovingGCBase.setup(self) self.objects_with_finalizers = self.AddressDeque() + self.objects_with_light_finalizers = self.AddressStack() self.objects_with_weakrefs = self.AddressStack() def _teardown(self): @@ -93,7 +94,9 @@ # because the spaces are filled with zeroes in advance. def malloc_fixedsize_clear(self, typeid16, size, - has_finalizer=False, contains_weakptr=False): + has_finalizer=False, + is_finalizer_light=False, + contains_weakptr=False): size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size result = self.free @@ -102,6 +105,9 @@ llarena.arena_reserve(result, totalsize) self.init_gc_object(result, typeid16) self.free = result + totalsize + #if is_finalizer_light: + # self.objects_with_light_finalizers.append(result + size_gc_header) + #else: if has_finalizer: self.objects_with_finalizers.append(result + size_gc_header) if contains_weakptr: @@ -263,6 +269,8 @@ if self.run_finalizers.non_empty(): self.update_run_finalizers() scan = self.scan_copied(scan) + if self.objects_with_light_finalizers.non_empty(): + self.deal_with_objects_with_light_finalizers() if self.objects_with_finalizers.non_empty(): scan = self.deal_with_objects_with_finalizers(scan) if self.objects_with_weakrefs.non_empty(): @@ -471,6 +479,23 @@ # immortal objects always have GCFLAG_FORWARDED set; # see get_forwarding_address(). + def deal_with_objects_with_light_finalizers(self): + """ This is a much simpler version of dealing with finalizers + and an optimization - we can reasonably assume that those finalizers + don't do anything fancy and *just* call them. Among other things + they won't resurrect objects + """ + new_objects = self.AddressStack() + while self.objects_with_light_finalizers.non_empty(): + obj = self.objects_with_light_finalizers.pop() + if self.surviving(obj): + new_objects.append(self.get_forwarding_address(obj)) + else: + finalizer = self.getfinalizer(self.get_type_id(obj)) + finalizer(obj, llmemory.NULL) + self.objects_with_light_finalizers.delete() + self.objects_with_light_finalizers = new_objects + def deal_with_objects_with_finalizers(self, scan): # walk over list of objects with finalizers # if it is not copied, add it to the list of to-be-called finalizers diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -12,6 +12,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.support import var_needsgc +from pypy.translator.backendopt.finalizer import FinalizerAnalyzer from pypy.annotation import model as annmodel from pypy.rpython import annlowlevel from pypy.rpython.rbuiltin import gen_cast @@ -258,6 +259,7 @@ [s_gc, s_typeid16, annmodel.SomeInteger(nonneg=True), annmodel.SomeBool(), + annmodel.SomeBool(), annmodel.SomeBool()], s_gcref, inline = False) if hasattr(GCClass, 'malloc_fixedsize'): @@ -267,6 +269,7 @@ [s_gc, s_typeid16, annmodel.SomeInteger(nonneg=True), annmodel.SomeBool(), + annmodel.SomeBool(), annmodel.SomeBool()], s_gcref, inline = False) else: @@ -319,7 +322,7 @@ raise NotImplementedError("GC needs write barrier, but does not provide writebarrier_before_copy functionality") # in some GCs we can inline the common case of - # malloc_fixedsize(typeid, size, True, False, False) + # malloc_fixedsize(typeid, size, False, False, False) if getattr(GCClass, 'inline_simple_malloc', False): # make a copy of this function so that it gets annotated # independently and the constants are folded inside @@ -337,7 +340,7 @@ malloc_fast, [s_gc, s_typeid16, annmodel.SomeInteger(nonneg=True), - s_False, s_False], s_gcref, + s_False, s_False, s_False], s_gcref, inline = True) else: self.malloc_fast_ptr = None @@ -374,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), @@ -668,7 +678,13 @@ kind_and_fptr = self.special_funcptr_for_type(TYPE) has_finalizer = (kind_and_fptr is not None and kind_and_fptr[0] == "finalizer") + has_light_finalizer = (kind_and_fptr is not None and + kind_and_fptr[0] == "light_finalizer") + if has_light_finalizer: + has_finalizer = True c_has_finalizer = rmodel.inputconst(lltype.Bool, has_finalizer) + c_has_light_finalizer = rmodel.inputconst(lltype.Bool, + has_light_finalizer) if not op.opname.endswith('_varsize') and not flags.get('varsize'): #malloc_ptr = self.malloc_fixedsize_ptr @@ -682,7 +698,8 @@ else: malloc_ptr = self.malloc_fixedsize_ptr args = [self.c_const_gc, c_type_id, c_size, - c_has_finalizer, rmodel.inputconst(lltype.Bool, False)] + c_has_finalizer, c_has_light_finalizer, + rmodel.inputconst(lltype.Bool, False)] else: assert not c_has_finalizer.value info_varsize = self.layoutbuilder.get_info_varsize(type_id) @@ -847,12 +864,13 @@ # used by the JIT (see pypy.jit.backend.llsupport.gc) op = hop.spaceop [v_typeid, v_size, - v_has_finalizer, v_contains_weakptr] = op.args + v_has_finalizer, v_has_light_finalizer, v_contains_weakptr] = op.args livevars = self.push_roots(hop) hop.genop("direct_call", [self.malloc_fixedsize_clear_ptr, self.c_const_gc, v_typeid, v_size, - v_has_finalizer, v_contains_weakptr], + v_has_finalizer, v_has_light_finalizer, + v_contains_weakptr], resultvar=op.result) self.pop_roots(hop, livevars) @@ -912,10 +930,10 @@ info = self.layoutbuilder.get_info(type_id) c_size = rmodel.inputconst(lltype.Signed, info.fixedsize) malloc_ptr = self.malloc_fixedsize_ptr - c_has_finalizer = rmodel.inputconst(lltype.Bool, False) + c_false = rmodel.inputconst(lltype.Bool, False) c_has_weakptr = rmodel.inputconst(lltype.Bool, True) args = [self.c_const_gc, c_type_id, c_size, - c_has_finalizer, c_has_weakptr] + c_false, c_false, c_has_weakptr] # push and pop the current live variables *including* the argument # to the weakref_create operation, which must be kept alive and @@ -1250,6 +1268,7 @@ lltype2vtable = translator.rtyper.lltype2vtable else: lltype2vtable = None + self.translator = translator super(TransformerLayoutBuilder, self).__init__(GCClass, lltype2vtable) def has_finalizer(self, TYPE): @@ -1257,6 +1276,10 @@ return rtti is not None and getattr(rtti._obj, 'destructor_funcptr', None) + def has_light_finalizer(self, TYPE): + special = self.special_funcptr_for_type(TYPE) + return special is not None and special[0] == 'light_finalizer' + def has_custom_trace(self, TYPE): rtti = get_rtti(TYPE) return rtti is not None and getattr(rtti._obj, 'custom_trace_funcptr', @@ -1264,7 +1287,7 @@ def make_finalizer_funcptr_for_type(self, TYPE): if not self.has_finalizer(TYPE): - return None + return None, False rtti = get_rtti(TYPE) destrptr = rtti._obj.destructor_funcptr DESTR_ARG = lltype.typeOf(destrptr).TO.ARGS[0] @@ -1276,7 +1299,9 @@ return llmemory.NULL fptr = self.transformer.annotate_finalizer(ll_finalizer, [llmemory.Address, llmemory.Address], llmemory.Address) - return fptr + g = destrptr._obj.graph + light = not FinalizerAnalyzer(self.translator).analyze_light_finalizer(g) + return fptr, light def make_custom_trace_funcptr_for_type(self, TYPE): if not self.has_custom_trace(TYPE): diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gctypelayout.py b/pypy/rpython/memory/gctypelayout.py --- a/pypy/rpython/memory/gctypelayout.py +++ b/pypy/rpython/memory/gctypelayout.py @@ -1,7 +1,6 @@ from pypy.rpython.lltypesystem import lltype, llmemory, llarena, llgroup from pypy.rpython.lltypesystem import rclass from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import ll_assert from pypy.rlib.rarithmetic import intmask from pypy.tool.identity_dict import identity_dict @@ -85,6 +84,13 @@ else: return lltype.nullptr(GCData.FINALIZER_OR_CT_FUNC) + def q_light_finalizer(self, typeid): + typeinfo = self.get(typeid) + if typeinfo.infobits & T_HAS_LIGHTWEIGHT_FINALIZER: + return typeinfo.finalizer_or_customtrace + else: + return lltype.nullptr(GCData.FINALIZER_OR_CT_FUNC) + def q_offsets_to_gc_pointers(self, typeid): return self.get(typeid).ofstoptrs @@ -142,6 +148,7 @@ self.q_has_gcptr_in_varsize, self.q_is_gcarrayofgcptr, self.q_finalizer, + self.q_light_finalizer, self.q_offsets_to_gc_pointers, self.q_fixed_size, self.q_varsize_item_sizes, @@ -157,16 +164,17 @@ # the lowest 16bits are used to store group member index -T_MEMBER_INDEX = 0xffff -T_IS_VARSIZE = 0x010000 -T_HAS_GCPTR_IN_VARSIZE = 0x020000 -T_IS_GCARRAY_OF_GCPTR = 0x040000 -T_IS_WEAKREF = 0x080000 -T_IS_RPYTHON_INSTANCE = 0x100000 # the type is a subclass of OBJECT -T_HAS_FINALIZER = 0x200000 -T_HAS_CUSTOM_TRACE = 0x400000 -T_KEY_MASK = intmask(0xFF000000) -T_KEY_VALUE = intmask(0x5A000000) # bug detection only +T_MEMBER_INDEX = 0xffff +T_IS_VARSIZE = 0x010000 +T_HAS_GCPTR_IN_VARSIZE = 0x020000 +T_IS_GCARRAY_OF_GCPTR = 0x040000 +T_IS_WEAKREF = 0x080000 +T_IS_RPYTHON_INSTANCE = 0x100000 # the type is a subclass of OBJECT +T_HAS_FINALIZER = 0x200000 +T_HAS_CUSTOM_TRACE = 0x400000 +T_HAS_LIGHTWEIGHT_FINALIZER = 0x800000 +T_KEY_MASK = intmask(0xFF000000) +T_KEY_VALUE = intmask(0x5A000000) # bug detection only def _check_valid_type_info(p): ll_assert(p.infobits & T_KEY_MASK == T_KEY_VALUE, "invalid type_id") @@ -194,6 +202,8 @@ info.finalizer_or_customtrace = fptr if kind == "finalizer": infobits |= T_HAS_FINALIZER + elif kind == 'light_finalizer': + infobits |= T_HAS_FINALIZER | T_HAS_LIGHTWEIGHT_FINALIZER elif kind == "custom_trace": infobits |= T_HAS_CUSTOM_TRACE else: @@ -367,12 +377,15 @@ def special_funcptr_for_type(self, TYPE): if TYPE in self._special_funcptrs: return self._special_funcptrs[TYPE] - fptr1 = self.make_finalizer_funcptr_for_type(TYPE) + fptr1, is_lightweight = self.make_finalizer_funcptr_for_type(TYPE) fptr2 = self.make_custom_trace_funcptr_for_type(TYPE) assert not (fptr1 and fptr2), ( "type %r needs both a finalizer and a custom tracer" % (TYPE,)) if fptr1: - kind_and_fptr = "finalizer", fptr1 + if is_lightweight: + kind_and_fptr = "light_finalizer", fptr1 + else: + kind_and_fptr = "finalizer", fptr1 elif fptr2: kind_and_fptr = "custom_trace", fptr2 else: @@ -382,7 +395,7 @@ def make_finalizer_funcptr_for_type(self, TYPE): # must be overridden for proper finalizer support - return None + return None, False def make_custom_trace_funcptr_for_type(self, TYPE): # must be overridden for proper custom tracer support diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -1,3 +1,4 @@ +from pypy.translator.backendopt.finalizer import FinalizerAnalyzer from pypy.rpython.lltypesystem import lltype, llmemory, llheap from pypy.rpython import llinterp from pypy.rpython.annlowlevel import llhelper @@ -65,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) @@ -196,9 +201,11 @@ DESTR_ARG = lltype.typeOf(destrptr).TO.ARGS[0] destrgraph = destrptr._obj.graph else: - return None + return None, False assert not type_contains_pyobjs(TYPE), "not implemented" + t = self.llinterp.typer.annotator.translator + light = not FinalizerAnalyzer(t).analyze_light_finalizer(destrgraph) def ll_finalizer(addr, dummy): assert dummy == llmemory.NULL try: @@ -208,7 +215,7 @@ raise RuntimeError( "a finalizer raised an exception, shouldn't happen") return llmemory.NULL - return llhelper(gctypelayout.GCData.FINALIZER_OR_CT, ll_finalizer) + return llhelper(gctypelayout.GCData.FINALIZER_OR_CT, ll_finalizer), light def make_custom_trace_funcptr_for_type(self, TYPE): from pypy.rpython.memory.gctransform.support import get_rtti, \ diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -5,7 +5,6 @@ from pypy.rpython.memory.test import snippet from pypy.rpython.test.test_llinterp import get_interpreter from pypy.rpython.lltypesystem import lltype -from pypy.rpython.lltypesystem.rstr import STR from pypy.rpython.lltypesystem.lloperation import llop from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.objectmodel import compute_unique_id @@ -57,7 +56,7 @@ while j < 20: j += 1 a.append(j) - res = self.interpret(malloc_a_lot, []) + self.interpret(malloc_a_lot, []) #assert simulator.current_size - curr < 16000 * INT_SIZE / 4 #print "size before: %s, size after %s" % (curr, simulator.current_size) @@ -73,7 +72,7 @@ while j < 20: j += 1 b.append((1, j, i)) - res = self.interpret(malloc_a_lot, []) + self.interpret(malloc_a_lot, []) #assert simulator.current_size - curr < 16000 * INT_SIZE / 4 #print "size before: %s, size after %s" % (curr, simulator.current_size) @@ -129,7 +128,7 @@ res = self.interpret(concat, [100]) assert res == concat(100) #assert simulator.current_size - curr < 16000 * INT_SIZE / 4 - + def test_finalizer(self): class B(object): pass @@ -278,7 +277,7 @@ self.interpret, f, []) def test_weakref(self): - import weakref, gc + import weakref class A(object): pass From noreply at buildbot.pypy.org Mon Nov 7 22:35:01 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 22:35:01 +0100 (CET) Subject: [pypy-commit] pypy py3k: merge heads Message-ID: <20111107213501.786F3820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48887:b6e07a358ebe Date: 2011-11-07 22:06 +0100 http://bitbucket.org/pypy/pypy/changeset/b6e07a358ebe/ Log: merge heads diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -63,17 +63,6 @@ ir = (i1 * ratio - r1) / denom return W_ComplexObject(rr,ir) - def divmod(self, space, other): - space.warn( - "complex divmod(), // and % are deprecated", - space.w_DeprecationWarning - ) - w_div = self.div(other) - div = math.floor(w_div.realval) - w_mod = self.sub( - W_ComplexObject(other.realval * div, other.imagval * div)) - return (W_ComplexObject(div, 0), w_mod) - def pow(self, other): r1, i1 = self.realval, self.imagval r2, i2 = other.realval, other.imagval @@ -160,26 +149,6 @@ truediv__Complex_Complex = div__Complex_Complex -def mod__Complex_Complex(space, w_complex1, w_complex2): - try: - return w_complex1.divmod(space, w_complex2)[1] - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - -def divmod__Complex_Complex(space, w_complex1, w_complex2): - try: - div, mod = w_complex1.divmod(space, w_complex2) - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - return space.newtuple([div, mod]) - -def floordiv__Complex_Complex(space, w_complex1, w_complex2): - # don't care about the slight slowdown you get from using divmod - try: - return w_complex1.divmod(space, w_complex2)[0] - except ZeroDivisionError, e: - raise OperationError(space.w_ZeroDivisionError, space.wrap(str(e))) - def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -1,10 +1,13 @@ +from __future__ import print_function + import py -from pypy.objspace.std.complexobject import W_ComplexObject, \ - pow__Complex_Complex_ANY -from pypy.objspace.std import complextype as cobjtype + +from pypy.objspace.std import complextype as cobjtype, StdObjSpace +from pypy.objspace.std.complexobject import (W_ComplexObject, + pow__Complex_Complex_ANY) from pypy.objspace.std.multimethod import FailedToImplement from pypy.objspace.std.stringobject import W_StringObject -from pypy.objspace.std import StdObjSpace + EPS = 1e-9 @@ -134,7 +137,7 @@ from random import random # XXX this test passed but took waaaaay to long # look at dist/lib-python/modified-2.5.2/test/test_complex.py - #simple_real = [float(i) for i in xrange(-5, 6)] + #simple_real = [float(i) for i in range(-5, 6)] simple_real = [-2.0, 0.0, 1.0] simple_complex = [complex(x, y) for x in simple_real for y in simple_real] for x in simple_complex: @@ -147,7 +150,7 @@ self.check_div(complex(1e-200, 1e-200), 1+0j) # Just for fun. - for i in xrange(100): + for i in range(100): self.check_div(complex(random(), random()), complex(random(), random())) @@ -160,8 +163,7 @@ raises(ZeroDivisionError, complex.__truediv__, 1+1j, 0+0j) def test_floordiv(self): - assert self.almost_equal(complex.__floordiv__(3+0j, 1.5+0j), 2) - raises(ZeroDivisionError, complex.__floordiv__, 3+0j, 0+0j) + raises(TypeError, "3+0j // 0+0j") def test_coerce(self): raises(OverflowError, complex.__coerce__, 1+1j, 1L<<10000) @@ -183,13 +185,11 @@ assert large != (5+0j) def test_mod(self): - raises(ZeroDivisionError, (1+1j).__mod__, 0+0j) - a = 3.33+4.43j - raises(ZeroDivisionError, "a % 0") + raises(TypeError, "a % a") def test_divmod(self): - raises(ZeroDivisionError, divmod, 1+1j, 0+0j) + raises(TypeError, divmod, 1+1j, 0+0j) def test_pow(self): assert self.almost_equal(pow(1+1j, 0+0j), 1.0) @@ -221,7 +221,7 @@ def test_boolcontext(self): from random import random - for i in xrange(100): + for i in range(100): assert complex(random() + 1e-6, random() + 1e-6) assert not complex(0.0, 0.0) @@ -354,13 +354,13 @@ raises(TypeError, complex, float2(None)) def test_hash(self): - for x in xrange(-30, 30): + for x in range(-30, 30): assert hash(x) == hash(complex(x, 0)) x /= 3.0 # now check against floating point assert hash(x) == hash(complex(x, 0.)) def test_abs(self): - nums = [complex(x/3., y/7.) for x in xrange(-9,9) for y in xrange(-9,9)] + nums = [complex(x/3., y/7.) for x in range(-9,9) for y in range(-9,9)] for num in nums: assert self.almost_equal((num.real**2 + num.imag**2) ** 0.5, abs(num)) @@ -409,7 +409,7 @@ try: pth = tempfile.mktemp() fo = open(pth,"wb") - print >>fo, a, b + print(a, b, file=fo) fo.close() fo = open(pth, "rb") res = fo.read() From noreply at buildbot.pypy.org Mon Nov 7 22:35:02 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 7 Nov 2011 22:35:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: merge heads Message-ID: <20111107213502.A7451820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48888:9b869427f59a Date: 2011-11-07 22:07 +0100 http://bitbucket.org/pypy/pypy/changeset/9b869427f59a/ Log: merge heads diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -120,7 +120,7 @@ except AttributeError: print('no translation information found', file=sys.stderr) else: - optitems = options.items() + optitems = list(options.items()) optitems.sort() for name, value in optitems: print(' %51s: %s' % (name, value)) @@ -138,7 +138,7 @@ def _print_jit_help(): import pypyjit - items = pypyjit.defaults.items() + items = list(pypyjit.defaults.items()) items.sort() for key, value in items: print(' --jit %s=N %slow-level JIT parameter (default %s)' % ( @@ -304,7 +304,7 @@ newline=newline, line_buffering=line_buffering) return stream - + def set_io_encoding(io_encoding): try: import _file @@ -510,7 +510,7 @@ unbuffered, ignore_environment, **ignored): - # with PyPy in top of CPython we can only have around 100 + # with PyPy in top of CPython we can only have around 100 # but we need more in the translated PyPy for the compiler package if '__pypy__' not in sys.builtin_module_names: sys.setrecursionlimit(5000) From noreply at buildbot.pypy.org Mon Nov 7 23:37:49 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Mon, 7 Nov 2011 23:37:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: 2to3 Message-ID: <20111107223749.57ED5820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48889:3e341a64f85c Date: 2011-11-07 14:25 -0800 http://bitbucket.org/pypy/pypy/changeset/3e341a64f85c/ Log: 2to3 diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -72,7 +72,7 @@ """ try: config = make_config(option, objspace=name, **kwds) - except ConflictConfigError, e: + except ConflictConfigError as e: # this exception is typically only raised if a module is not available. # in this case the test should be skipped py.test.skip(str(e)) @@ -96,7 +96,7 @@ config = make_config(option) try: space = make_objspace(config) - except OperationError, e: + except OperationError as e: check_keyboard_interrupt(e) if option.verbose: import traceback @@ -118,7 +118,7 @@ def __init__(self, **kwds): import sys info = getattr(sys, 'pypy_translation_info', None) - for key, value in kwds.iteritems(): + for key, value in kwds.items(): if key == 'usemodules': if info is not None: for modname in value: @@ -148,7 +148,7 @@ assert body.startswith('(') src = py.code.Source("def anonymous" + body) d = {} - exec src.compile() in d + exec(src.compile(), d) return d['anonymous'](*args) def wrap(self, obj): @@ -210,9 +210,9 @@ source = py.code.Source(target)[1:].deindent() res, stdout, stderr = runsubprocess.run_subprocess( python, ["-c", helpers + str(source)]) - print source - print >> sys.stdout, stdout - print >> sys.stderr, stderr + print(source) + print(stdout, file=sys.stdout) + print(stderr, file=sys.stderr) if res > 0: raise AssertionError("Subprocess failed") @@ -225,7 +225,7 @@ try: if e.w_type.name == 'KeyboardInterrupt': tb = sys.exc_info()[2] - raise OpErrKeyboardInterrupt, OpErrKeyboardInterrupt(), tb + raise OpErrKeyboardInterrupt().with_traceback(tb) except AttributeError: pass @@ -240,7 +240,7 @@ apparently earlier on "raises" was already added to module's globals. """ - import __builtin__ + import builtins for helper in helpers: if not hasattr(__builtin__, helper): setattr(__builtin__, helper, getattr(py.test, helper)) @@ -304,10 +304,10 @@ elif hasattr(obj, 'func_code') and self.funcnamefilter(name): if name.startswith('app_test_'): - assert not obj.func_code.co_flags & 32, \ + assert not obj.__code__.co_flags & 32, \ "generator app level functions? you must be joking" return AppTestFunction(name, parent=self) - elif obj.func_code.co_flags & 32: # generator function + elif obj.__code__.co_flags & 32: # generator function return pytest.Generator(name, parent=self) else: return IntTestFunction(name, parent=self) @@ -321,7 +321,7 @@ "(btw, i would need options: %s)" % (ropts,)) for opt in ropts: - if not options.has_key(opt) or options[opt] != ropts[opt]: + if opt not in options or options[opt] != ropts[opt]: break else: return @@ -387,10 +387,10 @@ def runtest(self): try: super(IntTestFunction, self).runtest() - except OperationError, e: + except OperationError as e: check_keyboard_interrupt(e) raise - except Exception, e: + except Exception as e: cls = e.__class__ while cls is not Exception: if cls.__name__ == 'DistutilsPlatformError': @@ -411,13 +411,13 @@ def execute_appex(self, space, target, *args): try: target(*args) - except OperationError, e: + except OperationError as e: tb = sys.exc_info()[2] if e.match(space, space.w_KeyboardInterrupt): - raise OpErrKeyboardInterrupt, OpErrKeyboardInterrupt(), tb + raise OpErrKeyboardInterrupt().with_traceback(tb) appexcinfo = appsupport.AppExceptionInfo(space, e) if appexcinfo.traceback: - raise AppError, AppError(appexcinfo), tb + raise AppError(appexcinfo).with_traceback(tb) raise def runtest(self): @@ -429,7 +429,7 @@ space = gettestobjspace() filename = self._getdynfilename(target) func = app2interp_temp(target, filename=filename) - print "executing", func + print("executing", func) self.execute_appex(space, func, space) def repr_failure(self, excinfo): @@ -438,7 +438,7 @@ return super(AppTestFunction, self).repr_failure(excinfo) def _getdynfilename(self, func): - code = getattr(func, 'im_func', func).func_code + code = getattr(func, 'im_func', func).__code__ return "[%s:%s]" % (code.co_filename, code.co_firstlineno) class AppTestMethod(AppTestFunction): @@ -471,9 +471,9 @@ if self.config.option.appdirect: return run_with_python(self.config.option.appdirect, target) return target() - space = target.im_self.space + space = target.__self__.space filename = self._getdynfilename(target) - func = app2interp_temp(target.im_func, filename=filename) + func = app2interp_temp(target.__func__, filename=filename) w_instance = self.parent.w_instance self.execute_appex(space, func, space, w_instance) diff --git a/pypy/module/__builtin__/test/autopath.py b/pypy/module/__builtin__/test/autopath.py --- a/pypy/module/__builtin__/test/autopath.py +++ b/pypy/module/__builtin__/test/autopath.py @@ -66,7 +66,7 @@ sys.path.insert(0, head) munged = {} - for name, mod in sys.modules.items(): + for name, mod in list(sys.modules.items()): if '.' in name: continue fn = getattr(mod, '__file__', None) @@ -84,7 +84,7 @@ if modpath not in sys.modules: munged[modpath] = mod - for name, mod in munged.iteritems(): + for name, mod in munged.items(): if name not in sys.modules: sys.modules[name] = mod if '.' in name: @@ -111,9 +111,9 @@ f = open(fn, 'rwb+') try: if f.read() == arg: - print "checkok", fn + print("checkok", fn) else: - print "syncing", fn + print("syncing", fn) f = open(fn, 'w') f.write(arg) finally: From noreply at buildbot.pypy.org Mon Nov 7 23:37:50 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Mon, 7 Nov 2011 23:37:50 +0100 (CET) Subject: [pypy-commit] pypy py3k: handle dict views in dir() Message-ID: <20111107223750.8642A820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48890:0e9d5f1f05c5 Date: 2011-11-07 14:25 -0800 http://bitbucket.org/pypy/pypy/changeset/0e9d5f1f05c5/ Log: handle dict views in dir() diff --git a/pypy/module/__builtin__/app_inspect.py b/pypy/module/__builtin__/app_inspect.py --- a/pypy/module/__builtin__/app_inspect.py +++ b/pypy/module/__builtin__/app_inspect.py @@ -54,9 +54,7 @@ raise TypeError("dir expected at most 1 arguments, got %d" % len(args)) if len(args) == 0: - local_names = _caller_locals().keys() # 2 stackframes away - if not isinstance(local_names, list): - raise TypeError("expected locals().keys() to be a list") + local_names = list(_caller_locals().keys()) # 2 stackframes away local_names.sort() return local_names @@ -82,7 +80,7 @@ elif isinstance(obj, type): #Don't look at __class__, as metaclass methods would be confusing. - result = _classdir(obj).keys() + result = list(_classdir(obj).keys()) result.sort() return result @@ -113,7 +111,7 @@ except (AttributeError, TypeError): pass - result = Dict.keys() + result = list(Dict.keys()) result.sort() return result From noreply at buildbot.pypy.org Mon Nov 7 23:42:04 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Mon, 7 Nov 2011 23:42:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: partially revert 3e341a64f85c, conftest needs to stay py Message-ID: <20111107224204.55F9C820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48891:972745c2bd89 Date: 2011-11-07 14:41 -0800 http://bitbucket.org/pypy/pypy/changeset/972745c2bd89/ Log: partially revert 3e341a64f85c, conftest needs to stay py diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -72,7 +72,7 @@ """ try: config = make_config(option, objspace=name, **kwds) - except ConflictConfigError as e: + except ConflictConfigError, e: # this exception is typically only raised if a module is not available. # in this case the test should be skipped py.test.skip(str(e)) @@ -96,7 +96,7 @@ config = make_config(option) try: space = make_objspace(config) - except OperationError as e: + except OperationError, e: check_keyboard_interrupt(e) if option.verbose: import traceback @@ -118,7 +118,7 @@ def __init__(self, **kwds): import sys info = getattr(sys, 'pypy_translation_info', None) - for key, value in kwds.items(): + for key, value in kwds.iteritems(): if key == 'usemodules': if info is not None: for modname in value: @@ -148,7 +148,7 @@ assert body.startswith('(') src = py.code.Source("def anonymous" + body) d = {} - exec(src.compile(), d) + exec src.compile() in d return d['anonymous'](*args) def wrap(self, obj): @@ -210,9 +210,9 @@ source = py.code.Source(target)[1:].deindent() res, stdout, stderr = runsubprocess.run_subprocess( python, ["-c", helpers + str(source)]) - print(source) - print(stdout, file=sys.stdout) - print(stderr, file=sys.stderr) + print source + print >> sys.stdout, stdout + print >> sys.stderr, stderr if res > 0: raise AssertionError("Subprocess failed") @@ -225,7 +225,7 @@ try: if e.w_type.name == 'KeyboardInterrupt': tb = sys.exc_info()[2] - raise OpErrKeyboardInterrupt().with_traceback(tb) + raise OpErrKeyboardInterrupt, OpErrKeyboardInterrupt(), tb except AttributeError: pass @@ -240,7 +240,7 @@ apparently earlier on "raises" was already added to module's globals. """ - import builtins + import __builtin__ for helper in helpers: if not hasattr(__builtin__, helper): setattr(__builtin__, helper, getattr(py.test, helper)) @@ -304,10 +304,10 @@ elif hasattr(obj, 'func_code') and self.funcnamefilter(name): if name.startswith('app_test_'): - assert not obj.__code__.co_flags & 32, \ + assert not obj.func_code.co_flags & 32, \ "generator app level functions? you must be joking" return AppTestFunction(name, parent=self) - elif obj.__code__.co_flags & 32: # generator function + elif obj.func_code.co_flags & 32: # generator function return pytest.Generator(name, parent=self) else: return IntTestFunction(name, parent=self) @@ -321,7 +321,7 @@ "(btw, i would need options: %s)" % (ropts,)) for opt in ropts: - if opt not in options or options[opt] != ropts[opt]: + if not options.has_key(opt) or options[opt] != ropts[opt]: break else: return @@ -387,10 +387,10 @@ def runtest(self): try: super(IntTestFunction, self).runtest() - except OperationError as e: + except OperationError, e: check_keyboard_interrupt(e) raise - except Exception as e: + except Exception, e: cls = e.__class__ while cls is not Exception: if cls.__name__ == 'DistutilsPlatformError': @@ -411,13 +411,13 @@ def execute_appex(self, space, target, *args): try: target(*args) - except OperationError as e: + except OperationError, e: tb = sys.exc_info()[2] if e.match(space, space.w_KeyboardInterrupt): - raise OpErrKeyboardInterrupt().with_traceback(tb) + raise OpErrKeyboardInterrupt, OpErrKeyboardInterrupt(), tb appexcinfo = appsupport.AppExceptionInfo(space, e) if appexcinfo.traceback: - raise AppError(appexcinfo).with_traceback(tb) + raise AppError, AppError(appexcinfo), tb raise def runtest(self): @@ -429,7 +429,7 @@ space = gettestobjspace() filename = self._getdynfilename(target) func = app2interp_temp(target, filename=filename) - print("executing", func) + print "executing", func self.execute_appex(space, func, space) def repr_failure(self, excinfo): @@ -438,7 +438,7 @@ return super(AppTestFunction, self).repr_failure(excinfo) def _getdynfilename(self, func): - code = getattr(func, 'im_func', func).__code__ + code = getattr(func, 'im_func', func).func_code return "[%s:%s]" % (code.co_filename, code.co_firstlineno) class AppTestMethod(AppTestFunction): @@ -471,9 +471,9 @@ if self.config.option.appdirect: return run_with_python(self.config.option.appdirect, target) return target() - space = target.__self__.space + space = target.im_self.space filename = self._getdynfilename(target) - func = app2interp_temp(target.__func__, filename=filename) + func = app2interp_temp(target.im_func, filename=filename) w_instance = self.parent.w_instance self.execute_appex(space, func, space, w_instance) From noreply at buildbot.pypy.org Tue Nov 8 00:02:18 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 00:02:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: merge default Message-ID: <20111107230218.50D93820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48892:f73c96d7cbd8 Date: 2011-11-07 14:53 -0800 http://bitbucket.org/pypy/pypy/changeset/f73c96d7cbd8/ Log: merge default diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: From noreply at buildbot.pypy.org Tue Nov 8 00:13:14 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Tue, 8 Nov 2011 00:13:14 +0100 (CET) Subject: [pypy-commit] pyrepl py3ksupport: add unicode alias to unix_console Message-ID: <20111107231314.AC7F8820C4@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r155:9adc0ebfa532 Date: 2011-11-08 00:13 +0100 http://bitbucket.org/pypy/pyrepl/changeset/9adc0ebfa532/ Log: add unicode alias to unix_console diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -30,6 +30,10 @@ class InvalidTerminal(RuntimeError): pass +try: + unicode +except NameError: + unicode = str _error = (termios.error, curses.error, InvalidTerminal) From noreply at buildbot.pypy.org Tue Nov 8 01:05:20 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 8 Nov 2011 01:05:20 +0100 (CET) Subject: [pypy-commit] pypy py3k: Implement IOBase._checkClosed(), will maybe fix test_fptlib and test_poplib. Message-ID: <20111108000520.72C7E820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48893:239bf4fc877d Date: 2011-11-08 01:04 +0100 http://bitbucket.org/pypy/pypy/changeset/239bf4fc877d/ Log: Implement IOBase._checkClosed(), will maybe fix test_fptlib and test_poplib. diff --git a/lib_pypy/pyrepl/input.py b/lib_pypy/pyrepl/input.py --- a/lib_pypy/pyrepl/input.py +++ b/lib_pypy/pyrepl/input.py @@ -56,24 +56,24 @@ keyseq = tuple(parse_keys(keyspec)) d[keyseq] = command if self.verbose: - print d + print(d) self.k = self.ck = compile_keymap(d, ()) self.results = [] self.stack = [] def push(self, evt): if self.verbose: - print "pushed", evt.data, + print("pushed", evt.data, end='') key = evt.data d = self.k.get(key) if isinstance(d, dict): if self.verbose: - print "transition" + print("transition") self.stack.append(key) self.k = d else: if d is None: if self.verbose: - print "invalid" + print("invalid") if self.stack or len(key) > 1 or unicodedata_.category(key) == 'C': self.results.append( (self.invalid_cls, self.stack + [key])) @@ -84,7 +84,7 @@ (self.character_cls, [key])) else: if self.verbose: - print "matched", d + print("matched", d) self.results.append((d, self.stack + [key])) self.stack = [] self.k = self.ck diff --git a/lib_pypy/pyrepl/keymap.py b/lib_pypy/pyrepl/keymap.py --- a/lib_pypy/pyrepl/keymap.py +++ b/lib_pypy/pyrepl/keymap.py @@ -106,22 +106,22 @@ s += 2 elif c == "c": if key[s + 2] != '-': - raise KeySpecError, \ + raise KeySpecError( "\\C must be followed by `-' (char %d of %s)"%( - s + 2, repr(key)) + s + 2, repr(key))) if ctrl: - raise KeySpecError, "doubled \\C- (char %d of %s)"%( - s + 1, repr(key)) + raise KeySpecError("doubled \\C- (char %d of %s)"%( + s + 1, repr(key))) ctrl = 1 s += 3 elif c == "m": if key[s + 2] != '-': - raise KeySpecError, \ + raise KeySpecError( "\\M must be followed by `-' (char %d of %s)"%( - s + 2, repr(key)) + s + 2, repr(key))) if meta: - raise KeySpecError, "doubled \\M- (char %d of %s)"%( - s + 1, repr(key)) + raise KeySpecError("doubled \\M- (char %d of %s)"%( + s + 1, repr(key))) meta = 1 s += 3 elif c.isdigit(): @@ -135,26 +135,26 @@ elif c == '<': t = key.find('>', s) if t == -1: - raise KeySpecError, \ + raise KeySpecError( "unterminated \\< starting at char %d of %s"%( - s + 1, repr(key)) + s + 1, repr(key))) ret = key[s+2:t].lower() if ret not in _keynames: - raise KeySpecError, \ + raise KeySpecError( "unrecognised keyname `%s' at char %d of %s"%( - ret, s + 2, repr(key)) + ret, s + 2, repr(key))) ret = _keynames[ret] s = t + 1 else: - raise KeySpecError, \ + raise KeySpecError( "unknown backslash escape %s at char %d of %s"%( - `c`, s + 2, repr(key)) + repr(c), s + 2, repr(key))) else: ret = key[s] s += 1 if ctrl: if len(ret) > 1: - raise KeySpecError, "\\C- must be followed by a character" + raise KeySpecError("\\C- must be followed by a character") ret = chr(ord(ret) & 0x1f) # curses.ascii.ctrl() if meta: ret = ['\033', ret] @@ -177,8 +177,8 @@ for key, value in r.items(): if empty in value: if len(value) <> 1: - raise KeySpecError, \ - "key definitions for %s clash"%(value.values(),) + raise KeySpecError( + "key definitions for %s clash"%(value.values(),)) else: r[key] = value[empty] else: diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -26,17 +26,17 @@ def _make_unctrl_map(): uc_map = {} - for c in map(unichr, range(256)): + for c in map(chr, range(256)): if unicodedata_.category(c)[0] <> 'C': uc_map[c] = c for i in range(32): - c = unichr(i) - uc_map[c] = u'^' + unichr(ord('A') + i - 1) + c = chr(i) + uc_map[c] = u'^' + chr(ord('A') + i - 1) uc_map['\t'] = ' ' # display TABs as 4 characters uc_map['\177'] = u'^?' for i in range(256): - c = unichr(i) - if not uc_map.has_key(c): + c = chr(i) + if c not in uc_map: uc_map[c] = u'\\%03o'%i return uc_map @@ -56,7 +56,7 @@ return u[c] else: if unicodedata_.category(c).startswith('C'): - return '\u%04x'%(ord(c),) + return '\\u%04x'%(ord(c),) else: return c diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -41,8 +41,8 @@ def _my_getstr(cap, optional=0): r = curses.tigetstr(cap) if not optional and r is None: - raise InvalidTerminal, \ - "terminal doesn't have the required '%s' capability"%cap + raise InvalidTerminal( + "terminal doesn't have the required '%s' capability"%cap) return r # at this point, can we say: AAAAAAAAAAAAAAAAAAAAAARGH! @@ -131,14 +131,14 @@ elif self._cub1 and self._cuf1: self.__move_x = self.__move_x_cub1_cuf1 else: - raise RuntimeError, "insufficient terminal (horizontal)" + raise RuntimeError("insufficient terminal (horizontal)") if self._cuu and self._cud: self.__move_y = self.__move_y_cuu_cud elif self._cuu1 and self._cud1: self.__move_y = self.__move_y_cuu1_cud1 else: - raise RuntimeError, "insufficient terminal (vertical)" + raise RuntimeError("insufficient terminal (vertical)") if self._dch1: self.dch1 = self._dch1 diff --git a/lib_pypy/pyrepl/unix_eventqueue.py b/lib_pypy/pyrepl/unix_eventqueue.py --- a/lib_pypy/pyrepl/unix_eventqueue.py +++ b/lib_pypy/pyrepl/unix_eventqueue.py @@ -52,7 +52,7 @@ for key, tiname in _keynames.items(): keycode = curses.tigetstr(tiname) if keycode: - our_keycodes[keycode] = unicode(key) + our_keycodes[keycode] = str(key) if os.isatty(fd): our_keycodes[tcgetattr(fd)[6][VERASE]] = u'backspace' self.k = self.ck = keymap.compile_keymap(our_keycodes) diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -88,6 +88,9 @@ raise OperationError( space.w_ValueError, space.wrap(message)) + def check_closed_w(self, space): + self._check_closed(space) + def closed_get_w(self, space): return space.newbool(self.__IOBase_closed) @@ -253,6 +256,7 @@ _checkWritable = interp2app(check_writable_w), _checkSeekable = interp2app(check_seekable_w), closed = GetSetProperty(W_IOBase.closed_get_w), + _checkClosed = interp2app(W_IOBase.check_closed_w), __dict__ = GetSetProperty(descr_get_dict, descr_set_dict, cls=W_IOBase), __weakref__ = make_weakref_descr(W_IOBase), diff --git a/pypy/module/_io/test/test_io.py b/pypy/module/_io/test/test_io.py --- a/pypy/module/_io/test/test_io.py +++ b/pypy/module/_io/test/test_io.py @@ -24,7 +24,9 @@ import io with io.BufferedIOBase() as f: assert not f.closed + f._checkClosed() assert f.closed + raises(ValueError, f._checkClosed) def test_iter(self): import io From noreply at buildbot.pypy.org Tue Nov 8 01:19:00 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Tue, 8 Nov 2011 01:19:00 +0100 (CET) Subject: [pypy-commit] pyrepl py3ksupport: skip functional tests in case of python3 :( Message-ID: <20111108001900.EC90F820C4@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r156:b2b0b52e2975 Date: 2011-11-08 01:18 +0100 http://bitbucket.org/pypy/pyrepl/changeset/b2b0b52e2975/ Log: skip functional tests in case of python3 :( diff --git a/testing/test_functional.py b/testing/test_functional.py --- a/testing/test_functional.py +++ b/testing/test_functional.py @@ -24,7 +24,10 @@ import sys def pytest_funcarg__child(request): - pexpect = pytest.importorskip('pexpect') + try: + pexpect = pytest.importorskip('pexpect') + except SyntaxError: + pytest.skip('pexpect wont work on py3k') child = pexpect.spawn(sys.executable, ['-S'], timeout=10) child.logfile = sys.stdout child.sendline('from pyrepl.python_reader import main') From noreply at buildbot.pypy.org Tue Nov 8 02:35:46 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 02:35:46 +0100 (CET) Subject: [pypy-commit] pypy default: py3k compat. syntax Message-ID: <20111108013546.C02A7820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: Changeset: r48894:56079dacea00 Date: 2011-11-07 17:35 -0800 http://bitbucket.org/pypy/pypy/changeset/56079dacea00/ Log: py3k compat. syntax diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): From noreply at buildbot.pypy.org Tue Nov 8 02:55:41 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 02:55:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: merge default Message-ID: <20111108015541.740FA820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48895:3a5b3b4bda7a Date: 2011-11-07 17:54 -0800 http://bitbucket.org/pypy/pypy/changeset/3a5b3b4bda7a/ Log: merge default diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): From noreply at buildbot.pypy.org Tue Nov 8 03:14:47 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 03:14:47 +0100 (CET) Subject: [pypy-commit] pypy py3k: support the TextIOWrapper write_through option Message-ID: <20111108021447.E9625820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48896:a94eb72cdc3f Date: 2011-11-07 15:57 -0800 http://bitbucket.org/pypy/pypy/changeset/a94eb72cdc3f/ Log: support the TextIOWrapper write_through option diff --git a/pypy/module/_io/interp_textio.py b/pypy/module/_io/interp_textio.py --- a/pypy/module/_io/interp_textio.py +++ b/pypy/module/_io/interp_textio.py @@ -334,9 +334,10 @@ # of the stream self.snapshot = None - @unwrap_spec(encoding="str_or_None", line_buffering=int) + @unwrap_spec(encoding="str_or_None", line_buffering=int, write_through=int) def descr_init(self, space, w_buffer, encoding=None, - w_errors=None, w_newline=None, line_buffering=0): + w_errors=None, w_newline=None, line_buffering=0, + write_through=0): self.state = STATE_ZERO self.w_buffer = w_buffer @@ -379,6 +380,7 @@ "illegal newline value: %s" % (r,))) self.line_buffering = line_buffering + self.write_through = write_through self.readuniversal = not newline # null or empty self.readtranslate = newline is None @@ -415,6 +417,8 @@ self.seekable = space.is_true(space.call_method(w_buffer, "seekable")) self.telling = self.seekable + self.has_read1 = space.findattr(w_buffer, space.wrap("read1")) + self.encoding_start_of_stream = False if self.seekable and self.w_encoder: self.encoding_start_of_stream = True @@ -553,7 +557,8 @@ dec_flags = 0 # Read a chunk, decode it, and put the result in self._decoded_chars - w_input = space.call_method(self.w_buffer, "read1", + w_input = space.call_method(self.w_buffer, + "read1" if self.has_read1 else "read", space.wrap(self.chunk_size)) eof = space.len_w(w_input) == 0 w_decoded = space.call_method(self.w_decoder, "decode", @@ -723,7 +728,9 @@ text = space.unicode_w(w_text) needflush = False - if self.line_buffering and (haslf or text.find(u'\r') >= 0): + if self.write_through: + needflush = True + elif self.line_buffering and (haslf or text.find(u'\r') >= 0): needflush = True # XXX What if we were just reading? From noreply at buildbot.pypy.org Tue Nov 8 05:47:25 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 05:47:25 +0100 (CET) Subject: [pypy-commit] pypy py3k: no longer needed in py3k Message-ID: <20111108044725.E9DAD820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48897:9dc079b3cf94 Date: 2011-11-07 20:46 -0800 http://bitbucket.org/pypy/pypy/changeset/9dc079b3cf94/ Log: no longer needed in py3k diff --git a/pypy/module/_locale/interp_locale.py b/pypy/module/_locale/interp_locale.py --- a/pypy/module/_locale/interp_locale.py +++ b/pypy/module/_locale/interp_locale.py @@ -19,31 +19,6 @@ return OperationError(space.gettypeobject(W_Error.typedef), space.wrap(e.message)) -def _fixup_ulcase(space): - stringmod = space.call_function( - space.getattr(space.getbuiltinmodule('builtins'), - space.wrap('__import__')), space.wrap('string')) - # create uppercase map string - ul = [] - for c in xrange(256): - if rlocale.isupper(c): - ul.append(chr(c)) - space.setattr(stringmod, space.wrap('uppercase'), space.wrap(''.join(ul))) - - # create lowercase string - ul = [] - for c in xrange(256): - if rlocale.islower(c): - ul.append(chr(c)) - space.setattr(stringmod, space.wrap('lowercase'), space.wrap(''.join(ul))) - - # create letters string - ul = [] - for c in xrange(256): - if rlocale.isalpha(c): - ul.append(chr(c)) - space.setattr(stringmod, space.wrap('letters'), space.wrap(''.join(ul))) - @unwrap_spec(category=int) def setlocale(space, category, w_locale=None): "(integer,string=None) -> string. Activates/queries locale processing." @@ -56,11 +31,6 @@ result = rlocale.setlocale(category, locale) except rlocale.LocaleError, e: raise rewrap_error(space, e) - - # record changes to LC_CTYPE - if category in (rlocale.LC_CTYPE, rlocale.LC_ALL): - _fixup_ulcase(space) - return space.wrap(result) def _w_copy_grouping(space, text): From noreply at buildbot.pypy.org Tue Nov 8 06:04:26 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 06:04:26 +0100 (CET) Subject: [pypy-commit] pypy py3k: test for rev a94eb72cdc3f Message-ID: <20111108050426.A740B820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48898:a1ae5b4a864d Date: 2011-11-07 21:03 -0800 http://bitbucket.org/pypy/pypy/changeset/a1ae5b4a864d/ Log: test for rev a94eb72cdc3f diff --git a/pypy/module/_io/test/test_textio.py b/pypy/module/_io/test/test_textio.py --- a/pypy/module/_io/test/test_textio.py +++ b/pypy/module/_io/test/test_textio.py @@ -210,6 +210,90 @@ b.name = "dummy" assert repr(t) == "<_io.TextIOWrapper name='dummy' encoding='utf-8'>" + def test_rawio(self): + # Issue #12591: TextIOWrapper must work with raw I/O objects, so + # that subprocess.Popen() can have the required unbuffered + # semantics with universal_newlines=True. + import _io + class MockRawIO(_io._RawIOBase): + def __init__(self, read_stack=()): + self._read_stack = list(read_stack) + self._write_stack = [] + self._reads = 0 + self._extraneous_reads = 0 + + def write(self, b): + self._write_stack.append(bytes(b)) + return len(b) + + def writable(self): + return True + + def fileno(self): + return 42 + + def readable(self): + return True + + def seekable(self): + return True + + def seek(self, pos, whence): + return 0 # wrong but we gotta return something + + def tell(self): + return 0 # same comment as above + + def readinto(self, buf): + self._reads += 1 + max_len = len(buf) + try: + data = self._read_stack[0] + except IndexError: + self._extraneous_reads += 1 + return 0 + if data is None: + del self._read_stack[0] + return None + n = len(data) + if len(data) <= max_len: + del self._read_stack[0] + buf[:n] = data + return n + else: + buf[:] = data[:max_len] + self._read_stack[0] = data[max_len:] + return max_len + + def truncate(self, pos=None): + return pos + + def read(self, n=None): + self._reads += 1 + try: + return self._read_stack.pop(0) + except: + self._extraneous_reads += 1 + return b"" + + raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n') + # Reads + assert txt.read(4) == 'abcd' + assert txt.readline() == 'efghi\n' + assert list(txt) == ['jkl\n', 'opq\n'] +# +# def test_rawio_write_through(self): +# # Issue #12591: with write_through=True, writes don't need a flush +# import _io + raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n', + write_through=True) + txt.write('1') + txt.write('23\n4') + txt.write('5') + assert b''.join(raw._write_stack) == b'123\n45' + class AppTestIncrementalNewlineDecoder: From noreply at buildbot.pypy.org Tue Nov 8 06:28:43 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 8 Nov 2011 06:28:43 +0100 (CET) Subject: [pypy-commit] pypy py3k: split up these tests Message-ID: <20111108052843.F1693820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48899:cf049d857e62 Date: 2011-11-07 21:28 -0800 http://bitbucket.org/pypy/pypy/changeset/cf049d857e62/ Log: split up these tests diff --git a/pypy/module/_io/test/test_textio.py b/pypy/module/_io/test/test_textio.py --- a/pypy/module/_io/test/test_textio.py +++ b/pypy/module/_io/test/test_textio.py @@ -215,6 +215,26 @@ # that subprocess.Popen() can have the required unbuffered # semantics with universal_newlines=True. import _io + raw = self.get_MockRawIO()([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n') + # Reads + assert txt.read(4) == 'abcd' + assert txt.readline() == 'efghi\n' + assert list(txt) == ['jkl\n', 'opq\n'] + + def test_rawio_write_through(self): + # Issue #12591: with write_through=True, writes don't need a flush + import _io + raw = self.get_MockRawIO()([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n', + write_through=True) + txt.write('1') + txt.write('23\n4') + txt.write('5') + assert b''.join(raw._write_stack) == b'123\n45' + + def w_get_MockRawIO(self): + import _io class MockRawIO(_io._RawIOBase): def __init__(self, read_stack=()): self._read_stack = list(read_stack) @@ -275,24 +295,7 @@ except: self._extraneous_reads += 1 return b"" - - raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) - txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n') - # Reads - assert txt.read(4) == 'abcd' - assert txt.readline() == 'efghi\n' - assert list(txt) == ['jkl\n', 'opq\n'] -# -# def test_rawio_write_through(self): -# # Issue #12591: with write_through=True, writes don't need a flush -# import _io - raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) - txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n', - write_through=True) - txt.write('1') - txt.write('23\n4') - txt.write('5') - assert b''.join(raw._write_stack) == b'123\n45' + return MockRawIO class AppTestIncrementalNewlineDecoder: From noreply at buildbot.pypy.org Tue Nov 8 10:26:59 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:26:59 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111108092659.1F9D8820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48900:01cb9893b566 Date: 2011-11-07 18:28 +0100 http://bitbucket.org/pypy/pypy/changeset/01cb9893b566/ Log: fix test diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -89,7 +89,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference, mostly for tests + # xxx why do we need to clear op._descr?? + #op._descr = None # clear reference, mostly for tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -1760,7 +1760,7 @@ array=array) res = res.binop(x) res.val += array[idx] + array[1] - if y < 7: + if y < 10: idx = 2 y -= 1 return res @@ -1772,10 +1772,10 @@ assert a1.val == a2.val assert b1.val == b2.val return a1.val + b1.val - res = self.meta_interp(g, [6, 14]) - assert res == g(6, 14) + res = self.meta_interp(g, [6, 20]) + assert res == g(6, 20) self.check_loop_count(9) - self.check_resops(getarrayitem_gc=8) + self.check_resops(getarrayitem_gc=10) def test_multiple_specialied_versions_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x', 'z', 'res']) From noreply at buildbot.pypy.org Tue Nov 8 10:27:00 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:00 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fallback to preamble if inlining short preamble fails Message-ID: <20111108092700.4AAEA820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48901:ac810b69b113 Date: 2011-11-07 18:36 +0100 http://bitbucket.org/pypy/pypy/changeset/ac810b69b113/ Log: fallback to preamble if inlining short preamble fails diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -518,8 +518,9 @@ except InvalidLoop: debug_print("Inlining failed unexpectedly", "jumping to preamble instead") - assert False, "FIXME: Construct jump op" - self.optimizer.send_extra_operation(op) + assert cell_token.target_tokens[0].virtual_state is None + jumpop.setdescr(cell_token.target_tokens[0]) + self.optimizer.send_extra_operation(jumpop) return True debug_stop('jit-log-virtualstate') From noreply at buildbot.pypy.org Tue Nov 8 10:27:01 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:01 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: abort unsupported case Message-ID: <20111108092701.889CC820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48902:0a92fe884bc3 Date: 2011-11-07 19:02 +0100 http://bitbucket.org/pypy/pypy/changeset/0a92fe884bc3/ Log: abort unsupported case diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -527,6 +527,7 @@ retraced_count = cell_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit if retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48903:8ed931ccc062 Date: 2011-11-07 20:08 +0100 http://bitbucket.org/pypy/pypy/changeset/8ed931ccc062/ Log: dont inline short preamble when falling back to the full preamble diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -708,7 +708,7 @@ else: inline_short_preamble = True try: - optimize_trace(metainterp_sd, new_trace, state.enable_opts) + optimize_trace(metainterp_sd, new_trace, state.enable_opts, inline_short_preamble) except InvalidLoop: debug_print("compile_new_bridge: got an InvalidLoop") # XXX I am fairly convinced that optimize_bridge cannot actually raise diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -28,8 +28,7 @@ ALL_OPTS_LIST = [name for name, _ in ALL_OPTS] ALL_OPTS_NAMES = ':'.join([name for name, _ in ALL_OPTS]) -def build_opt_chain(metainterp_sd, enable_opts, - inline_short_preamble=True, retraced=False): +def build_opt_chain(metainterp_sd, enable_opts): config = metainterp_sd.config optimizations = [] unroll = 'unroll' in enable_opts # 'enable_opts' is normally a dict @@ -48,9 +47,6 @@ or 'heap' not in enable_opts or 'unroll' not in enable_opts): optimizations.append(OptSimplify()) - if inline_short_preamble: - optimizations = [OptInlineShortPreamble(retraced)] + optimizations - return optimizations, unroll @@ -81,13 +77,13 @@ if __name__ == '__main__': print ALL_OPTS_NAMES -def optimize_trace(metainterp_sd, loop, enable_opts): +def optimize_trace(metainterp_sd, loop, enable_opts, inline_short_preamble=True): """Optimize loop.operations to remove internal overheadish operations. """ - optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts, True, False) + optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts) if unroll: - optimize_unroll(metainterp_sd, loop, optimizations) + optimize_unroll(metainterp_sd, loop, optimizations, inline_short_preamble) else: optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -14,8 +14,9 @@ # FIXME: Introduce some VirtualOptimizer super class instead -def optimize_unroll(metainterp_sd, loop, optimizations): +def optimize_unroll(metainterp_sd, loop, optimizations, inline_short_preamble=True): opt = UnrollOptimizer(metainterp_sd, loop, optimizations) + opt.inline_short_preamble = inline_short_preamble opt.propagate_all_forward() class UnrollableOptimizer(Optimizer): @@ -23,6 +24,7 @@ self.importable_values = {} self.emitting_dissabled = False self.emitted_guards = 0 + self.inline_short_preamble = True def ensure_imported(self, value): if not self.emitting_dissabled and value in self.importable_values: @@ -464,6 +466,12 @@ if not cell_token.target_tokens: return False + if not self.inline_short_preamble: + assert cell_token.target_tokens[0].virtual_state is None + jumpop.setdescr(cell_token.target_tokens[0]) + self.optimizer.send_extra_operation(jumpop) + return True + args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) From noreply at buildbot.pypy.org Tue Nov 8 10:27:03 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:03 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111108092703.E3F8E820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48904:6ad136c18d39 Date: 2011-11-07 20:15 +0100 http://bitbucket.org/pypy/pypy/changeset/6ad136c18d39/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3365,7 +3365,7 @@ res = self.meta_interp(main, [10]) assert res == main(10) self.check_resops({'int_gt': 2, 'strlen': 2, 'guard_true': 2, - 'int_sub': 2, 'jump': 2, 'call': 2, + 'int_sub': 2, 'jump': 1, 'call': 2, 'guard_no_exception': 2, 'int_add': 4}) def test_look_inside_iff_const_getarrayitem_gc_pure(self): @@ -3502,7 +3502,7 @@ res = self.meta_interp(f, [10]) assert res == 0 - self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + self.check_resops({'jump': 1, 'guard_true': 2, 'int_gt': 2, 'int_sub': 2}) def test_virtual_opaque_ptr(self): @@ -3522,7 +3522,7 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + self.check_resops({'jump': 1, 'guard_true': 2, 'int_gt': 2, 'int_sub': 2}) @@ -3545,7 +3545,7 @@ res = self.meta_interp(f, [10]) assert res == 0 self.check_resops({'int_gt': 2, 'getfield_gc': 1, 'int_eq': 1, - 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'guard_true': 2, 'int_sub': 2, 'jump': 1, 'guard_false': 1}) From noreply at buildbot.pypy.org Tue Nov 8 10:27:05 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:05 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: better way of checking that the retracecount is not exceeded Message-ID: <20111108092705.1DAF1820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48905:178c618d4dc3 Date: 2011-11-08 08:06 +0100 http://bitbucket.org/pypy/pypy/changeset/178c618d4dc3/ Log: better way of checking that the retracecount is not exceeded diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2704,7 +2704,8 @@ res = self.meta_interp(g, [10]) assert res == g(10) # 1 preamble and 6 speciealized versions of each loop - self.check_tree_loop_count(2*(1 + 6)) + for loop in get_stats().loops: + assert len(loop.operations[0].getdescr().targeting_jitcell_token.target_tokens) <= 7 def test_nested_retrace(self): From noreply at buildbot.pypy.org Tue Nov 8 10:27:06 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:06 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge jit-refactor-tests Message-ID: <20111108092706.59F3C820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48906:e008b8114029 Date: 2011-11-08 08:07 +0100 http://bitbucket.org/pypy/pypy/changeset/e008b8114029/ Log: hg merge jit-refactor-tests diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -60,7 +60,8 @@ assert res == f(6, 13) self.check_loop_count(1) if self.enable_opts: - self.check_loops(getfield_gc = 0, setfield_gc = 1) + self.check_resops(setfield_gc=2, getfield_gc=0) + def test_loop_with_two_paths(self): from pypy.rpython.lltypesystem import lltype @@ -180,7 +181,10 @@ assert res == 42 self.check_loop_count(1) # the 'int_eq' and following 'guard' should be constant-folded - self.check_loops(int_eq=0, guard_true=1, guard_false=0) + if 'unroll' in self.enable_opts: + self.check_resops(int_eq=0, guard_true=2, guard_false=0) + else: + self.check_resops(int_eq=0, guard_true=1, guard_false=0) if self.basic: found = 0 for op in get_stats().loops[0]._all_operations(): @@ -643,8 +647,12 @@ res = self.meta_interp(main_interpreter_loop, [1]) assert res == 102 self.check_loop_count(1) - self.check_loops({'int_add' : 3, 'int_gt' : 1, - 'guard_false' : 1, 'jump' : 1}) + if 'unroll' in self.enable_opts: + self.check_resops({'int_add' : 6, 'int_gt' : 2, + 'guard_false' : 2, 'jump' : 2}) + else: + self.check_resops({'int_add' : 3, 'int_gt' : 1, + 'guard_false' : 1, 'jump' : 1}) def test_automatic_promotion(self): myjitdriver = JitDriver(greens = ['i'], @@ -686,7 +694,7 @@ self.check_loop_count(1) # These loops do different numbers of ops based on which optimizer we # are testing with. - self.check_loops(self.automatic_promotion_result) + self.check_resops(self.automatic_promotion_result) def test_can_enter_jit_outside_main_loop(self): myjitdriver = JitDriver(greens=[], reds=['i', 'j', 'a']) diff --git a/pypy/jit/metainterp/test/test_loop_unroll.py b/pypy/jit/metainterp/test/test_loop_unroll.py --- a/pypy/jit/metainterp/test/test_loop_unroll.py +++ b/pypy/jit/metainterp/test/test_loop_unroll.py @@ -8,7 +8,8 @@ enable_opts = ALL_OPTS_NAMES automatic_promotion_result = { - 'int_add' : 3, 'int_gt' : 1, 'guard_false' : 1, 'jump' : 1, + 'int_gt': 2, 'guard_false': 2, 'jump': 2, 'int_add': 6, + 'guard_value': 1 } # ====> test_loop.py diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -143,11 +143,11 @@ f = self.get_interpreter(codes) assert self.meta_interp(f, [0, 0, 0], enable_opts='') == 42 - self.check_loops(int_add = 1, call_may_force = 1, call = 0) + self.check_resops(call_may_force=1, int_add=1, call=0) assert self.meta_interp(f, [0, 0, 0], enable_opts='', inline=True) == 42 - self.check_loops(int_add = 2, call_may_force = 0, call = 0, - guard_no_exception = 0) + self.check_resops(call=0, int_add=2, call_may_force=0, + guard_no_exception=0) def test_inline_jitdriver_check(self): code = "021" @@ -160,7 +160,7 @@ inline=True) == 42 # the call is fully inlined, because we jump to subcode[1], thus # skipping completely the JUMP_BACK in subcode[0] - self.check_loops(call_may_force = 0, call_assembler = 0, call = 0) + self.check_resops(call=0, call_may_force=0, call_assembler=0) def test_guard_failure_in_inlined_function(self): def p(pc, code): @@ -491,10 +491,10 @@ return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) - self.check_loops(call_may_force=1, call=0) + self.check_resops(call=0, call_may_force=1) res = self.meta_interp(main, [1], enable_opts='', trace_limit=TRACE_LIMIT) - self.check_loops(call_may_force=0, call=0) + self.check_resops(call=0, call_may_force=0) def test_trace_from_start(self): def p(pc, code): @@ -576,7 +576,7 @@ result += f('-c-----------l-', i+100) self.meta_interp(g, [10], backendopt=True) self.check_aborted_count(1) - self.check_loops(call_assembler=1, call=0) + self.check_resops(call=0, call_assembler=2) self.check_tree_loop_count(3) def test_directly_call_assembler(self): @@ -625,8 +625,7 @@ try: compile.compile_tmp_callback = my_ctc self.meta_interp(portal, [2, 5], inline=True) - self.check_loops(call_assembler=2, call_may_force=0, - everywhere=True) + self.check_resops(call_may_force=0, call_assembler=2) finally: compile.compile_tmp_callback = original_ctc # check that we made a temporary callback @@ -681,8 +680,7 @@ try: compile.compile_tmp_callback = my_ctc self.meta_interp(main, [2, 5], inline=True) - self.check_loops(call_assembler=2, call_may_force=0, - everywhere=True) + self.check_resops(call_may_force=0, call_assembler=2) finally: compile.compile_tmp_callback = original_ctc # check that we made a temporary callback @@ -1021,7 +1019,7 @@ res = self.meta_interp(portal, [2, 0], inline=True, policy=StopAtXPolicy(residual)) assert res == portal(2, 0) - self.check_loops(call_assembler=4, everywhere=True) + self.check_resops(call_assembler=4) def test_inline_without_hitting_the_loop(self): driver = JitDriver(greens = ['codeno'], reds = ['i'], @@ -1045,7 +1043,7 @@ assert portal(0) == 70 res = self.meta_interp(portal, [0], inline=True) assert res == 70 - self.check_loops(call_assembler=0) + self.check_resops(call_assembler=0) def test_inline_with_hitting_the_loop_sometimes(self): driver = JitDriver(greens = ['codeno'], reds = ['i', 'k'], @@ -1071,7 +1069,7 @@ assert portal(0, 1) == 2095 res = self.meta_interp(portal, [0, 1], inline=True) assert res == 2095 - self.check_loops(call_assembler=12, everywhere=True) + self.check_resops(call_assembler=12) def test_inline_with_hitting_the_loop_sometimes_exc(self): driver = JitDriver(greens = ['codeno'], reds = ['i', 'k'], @@ -1109,7 +1107,7 @@ assert main(0, 1) == 2095 res = self.meta_interp(main, [0, 1], inline=True) assert res == 2095 - self.check_loops(call_assembler=12, everywhere=True) + self.check_resops(call_assembler=12) def test_handle_jitexception_in_portal(self): # a test for _handle_jitexception_in_portal in blackhole.py @@ -1238,7 +1236,7 @@ i += 1 self.meta_interp(portal, [0, 0, 0], inline=True) - self.check_loops(call=0, call_may_force=0) + self.check_resops(call_may_force=0, call=0) class TestLLtype(RecursiveTests, LLJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_send.py b/pypy/jit/metainterp/test/test_send.py --- a/pypy/jit/metainterp/test/test_send.py +++ b/pypy/jit/metainterp/test/test_send.py @@ -20,9 +20,8 @@ return c res = self.meta_interp(f, [1]) assert res == 2 - self.check_loops({'jump': 1, - 'int_sub': 1, 'int_gt' : 1, - 'guard_true': 1}) # all folded away + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) # all folded away def test_red_builtin_send(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'counter']) @@ -41,12 +40,9 @@ return res res = self.meta_interp(f, [1], policy=StopAtXPolicy(externfn)) assert res == 2 - if self.type_system == 'ootype': - self.check_loops(call=1, oosend=1) # 'len' remains - else: - # 'len' becomes a getfield('num_items') for now in lltype, - # which is itself encoded as a 'getfield_gc' - self.check_loops(call=1, getfield_gc=1) + # 'len' becomes a getfield('num_items') for now in lltype, + # which is itself encoded as a 'getfield_gc' + self.check_resops(call=2, getfield_gc=2) def test_send_to_single_target_method(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'counter']) @@ -70,11 +66,10 @@ res = self.meta_interp(f, [1], policy=StopAtXPolicy(externfn), backendopt=True) assert res == 43 - self.check_loops({'call': 1, 'guard_no_exception': 1, - 'getfield_gc': 1, - 'int_add': 1, - 'jump': 1, 'int_gt' : 1, 'guard_true' : 1, - 'int_sub' : 1}) + self.check_resops({'int_gt': 2, 'getfield_gc': 2, + 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'call': 2, 'guard_no_exception': 2, + 'int_add': 2}) def test_red_send_to_green_receiver(self): myjitdriver = JitDriver(greens = ['i'], reds = ['counter', 'j']) @@ -97,7 +92,7 @@ return res res = self.meta_interp(f, [4, -1]) assert res == 145 - self.check_loops(int_add = 1, everywhere=True) + self.check_resops(int_add=1) def test_oosend_base(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'w']) @@ -132,7 +127,7 @@ assert res == 17 res = self.meta_interp(f, [4, 14]) assert res == 1404 - self.check_loops(guard_class=0, new_with_vtable=0, new=0) + self.check_resops(guard_class=1, new=0, new_with_vtable=0) def test_three_receivers(self): myjitdriver = JitDriver(greens = [], reds = ['y']) @@ -205,8 +200,7 @@ # of the body in a single bigger loop with no failing guard except # the final one. self.check_loop_count(1) - self.check_loops(guard_class=0, - int_add=2, int_sub=2) + self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) def test_oosend_guard_failure_2(self): @@ -247,8 +241,7 @@ res = self.meta_interp(f, [4, 28]) assert res == f(4, 28) self.check_loop_count(1) - self.check_loops(guard_class=0, - int_add=2, int_sub=2) + self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) def test_oosend_different_initial_class(self): @@ -285,8 +278,8 @@ # However, this doesn't match the initial value of 'w'. # XXX This not completely easy to check... self.check_loop_count(1) - self.check_loops(int_add=0, int_lshift=1, guard_class=0, - new_with_vtable=0, new=0) + self.check_resops(guard_class=1, new_with_vtable=0, int_lshift=2, + int_add=0, new=0) def test_indirect_call_unknown_object_1(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y']) @@ -566,10 +559,7 @@ policy = StopAtXPolicy(new, A.foo.im_func, B.foo.im_func) res = self.meta_interp(fn, [0, 20], policy=policy) assert res == 42 - if self.type_system == 'ootype': - self.check_loops(oosend=1) - else: - self.check_loops(call=1) + self.check_resops(call=2) def test_residual_oosend_with_void(self): @@ -597,10 +587,7 @@ policy = StopAtXPolicy(new, A.foo.im_func) res = self.meta_interp(fn, [1, 20], policy=policy) assert res == 41 - if self.type_system == 'ootype': - self.check_loops(oosend=1) - else: - self.check_loops(call=1) + self.check_resops(call=2) def test_constfold_pure_oosend(self): myjitdriver = JitDriver(greens=[], reds = ['i', 'obj']) @@ -621,10 +608,7 @@ policy = StopAtXPolicy(A.foo.im_func) res = self.meta_interp(fn, [1, 20], policy=policy) assert res == 42 - if self.type_system == 'ootype': - self.check_loops(oosend=0) - else: - self.check_loops(call=0) + self.check_resops(call=0) def test_generalize_loop(self): myjitdriver = JitDriver(greens=[], reds = ['i', 'obj']) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -31,8 +31,9 @@ res = self.meta_interp(f, [10]) assert res == 55 * 10 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=2, new=0) + def test_virtualized2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node1', 'node2']) @@ -53,8 +54,8 @@ n -= 1 return node1.value * node2.value assert f(10) == self.meta_interp(f, [10]) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, + new=0) def test_virtualized_circular1(self): class MyNode(): @@ -79,8 +80,8 @@ res = self.meta_interp(f, [10]) assert res == 55 * 10 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=3, new=0) def test_virtualized_float(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -97,7 +98,7 @@ res = self.meta_interp(f, [10]) assert res == f(10) self.check_loop_count(1) - self.check_loops(new=0, float_add=0) + self.check_resops(new=0, float_add=1) def test_virtualized_float2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -115,7 +116,8 @@ res = self.meta_interp(f, [10]) assert res == f(10) self.check_loop_count(1) - self.check_loops(new=0, float_add=1) + self.check_resops(new=0, float_add=2) + def test_virtualized_2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -139,8 +141,8 @@ res = self.meta_interp(f, [10]) assert res == 55 * 30 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, + new=0) def test_nonvirtual_obj_delays_loop(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -160,8 +162,8 @@ res = self.meta_interp(f, [500]) assert res == 640 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=1, new=0) def test_two_loops_with_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -184,8 +186,9 @@ res = self.meta_interp(f, [18]) assert res == f(18) self.check_loop_count(2) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) + self.check_resops(new_with_vtable=0, setfield_gc=0, + getfield_gc=2, new=0) + def test_two_loops_with_escaping_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) @@ -212,8 +215,8 @@ res = self.meta_interp(f, [20], policy=StopAtXPolicy(externfn)) assert res == f(20) self.check_loop_count(3) - self.check_loops(**{self._new_op: 1}) - self.check_loops(int_mul=0, call=1) + self.check_resops(**{self._new_op: 1}) + self.check_resops(int_mul=0, call=1) def test_two_virtuals(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'prev']) @@ -236,7 +239,7 @@ res = self.meta_interp(f, [12]) assert res == 78 - self.check_loops(new_with_vtable=0, new=0) + self.check_resops(new_with_vtable=0, new=0) def test_specialied_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x', 'res']) @@ -281,7 +284,7 @@ res = self.meta_interp(f, [20]) assert res == 9 - self.check_loops(new_with_vtable=0, new=0) + self.check_resops(new_with_vtable=0, new=0) def test_immutable_constant_getfield(self): myjitdriver = JitDriver(greens = ['stufflist'], reds = ['n', 'i']) @@ -307,7 +310,7 @@ res = self.meta_interp(f, [10, 1, 0], listops=True) assert res == 0 - self.check_loops(getfield_gc=0) + self.check_resops(getfield_gc=0) def test_escapes(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'parent']) @@ -336,7 +339,7 @@ res = self.meta_interp(f, [10], policy=StopAtXPolicy(g)) assert res == 3 - self.check_loops(**{self._new_op: 1}) + self.check_resops(**{self._new_op: 1}) def test_virtual_on_virtual(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'parent']) @@ -366,7 +369,7 @@ res = self.meta_interp(f, [10]) assert res == 2 - self.check_loops(new=0, new_with_vtable=0) + self.check_resops(new=0, new_with_vtable=0) def test_bridge_from_interpreter(self): mydriver = JitDriver(reds = ['n', 'f'], greens = []) @@ -841,7 +844,7 @@ del t2 return i assert self.meta_interp(f, []) == 10 - self.check_loops(new_array=0) + self.check_resops(new_array=0) def test_virtual_streq_bug(self): mydriver = JitDriver(reds = ['i', 's', 'a'], greens = []) @@ -942,8 +945,8 @@ res = self.meta_interp(f, [16]) assert res == f(16) - self.check_loops(getfield_gc=2) - + self.check_resops(getfield_gc=7) + # ____________________________________________________________ # Run 1: all the tests instantiate a real RPython class @@ -985,10 +988,8 @@ res = self.meta_interp(f, [10]) assert res == 20 self.check_loop_count(1) - self.check_loops(new=0, new_with_vtable=0, - getfield_gc=0, setfield_gc=0) - - + self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=0, + new=0) class TestOOtype_Instance(VirtualTests, OOJitMixin): _new_op = 'new_with_vtable' From noreply at buildbot.pypy.org Tue Nov 8 10:27:07 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:07 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: jumps are allowed to jump to a TargetToken beloning to another procedure Message-ID: <20111108092707.87BF5820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48907:0ba1948f37c5 Date: 2011-11-08 08:33 +0100 http://bitbucket.org/pypy/pypy/changeset/0ba1948f37c5/ Log: jumps are allowed to jump to a TargetToken beloning to another procedure diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -829,7 +829,7 @@ self.check_consistency_of(self.inputargs, self.operations) for op in self.operations: descr = op.getdescr() - if isinstance(descr, TargetToken): + if op.getopnum() == rop.LABEL and isinstance(descr, TargetToken): assert descr.original_jitcell_token is self.original_jitcell_token @staticmethod diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -530,8 +530,8 @@ result = 0 for i in range(m): result += f('+-cl--', i) - g(50) - self.meta_interp(g, [50], backendopt=True) + res = self.meta_interp(g, [50], backendopt=True) + assert res == g(50) py.test.skip("tracing from start is by now only longer enabled " "if a trace gets too big") self.check_tree_loop_count(3) From noreply at buildbot.pypy.org Tue Nov 8 10:27:08 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:08 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: renames Message-ID: <20111108092708.B2872820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48908:8de0c01fb64b Date: 2011-11-08 08:55 +0100 http://bitbucket.org/pypy/pypy/changeset/8de0c01fb64b/ Log: renames diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -745,7 +745,7 @@ """ # 'redboxes' is only used to know the types of red arguments. inputargs = [box.clonebox() for box in redboxes] - loop_token = make_loop_token(len(inputargs), jitdriver_sd) + jitcell_token = make_jitcell_token(jitdriver_sd) # 'nb_red_args' might be smaller than len(redboxes), # because it doesn't include the virtualizable boxes. nb_red_args = jitdriver_sd.num_red_args @@ -778,7 +778,7 @@ ] operations[1].setfailargs([]) operations = get_deep_immutable_oplist(operations) - cpu.compile_loop(inputargs, operations, loop_token, log=False) + cpu.compile_loop(inputargs, operations, jitcell_token, log=False) if memory_manager is not None: # for tests - memory_manager.keep_loop_alive(loop_token) - return loop_token + memory_manager.keep_loop_alive(jitcell_token) + return jitcell_token From noreply at buildbot.pypy.org Tue Nov 8 10:27:09 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:09 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: recursion support Message-ID: <20111108092709.E6505820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48909:df9f538c4ae9 Date: 2011-11-08 10:15 +0100 http://bitbucket.org/pypy/pypy/changeset/df9f538c4ae9/ Log: recursion support diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2025,7 +2025,8 @@ num_green_args = self.jitdriver_sd.num_green_args greenkey = original_boxes[:num_green_args] if not self.partial_trace: - assert self.get_procedure_token(greenkey) == None # FIXME: recursion? + assert self.get_procedure_token(greenkey) is None or \ + self.get_procedure_token(greenkey).target_tokens is None if self.partial_trace: target_token = compile.compile_retrace(self, greenkey, start, original_boxes[num_green_args:], @@ -2051,6 +2052,8 @@ target_jitcell_token = self.get_procedure_token(greenkey) if not target_jitcell_token: return + if not target_jitcell_token.target_tokens: + return self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None, descr=target_jitcell_token) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3667,3 +3667,6 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_retracing_bridge_from_interpreter_to_finnish(self): + assert False # FIXME From noreply at buildbot.pypy.org Tue Nov 8 10:27:11 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:11 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: convert test Message-ID: <20111108092711.297AC820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48910:c52328c21f19 Date: 2011-11-08 10:25 +0100 http://bitbucket.org/pypy/pypy/changeset/c52328c21f19/ Log: convert test diff --git a/pypy/jit/metainterp/test/test_exception.py b/pypy/jit/metainterp/test/test_exception.py --- a/pypy/jit/metainterp/test/test_exception.py +++ b/pypy/jit/metainterp/test/test_exception.py @@ -35,10 +35,8 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({'jump': 1, - 'int_gt': 1, 'guard_true': 1, - 'int_sub': 1}) - + self.check_resops({'jump': 2, 'guard_true': 2, + 'int_gt': 2, 'int_sub': 2}) def test_bridge_from_guard_exception(self): myjitdriver = JitDriver(greens = [], reds = ['n']) From noreply at buildbot.pypy.org Tue Nov 8 10:27:12 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 10:27:12 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge jit-refactor-tests Message-ID: <20111108092712.57462820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48911:893335d40c74 Date: 2011-11-08 10:26 +0100 http://bitbucket.org/pypy/pypy/changeset/893335d40c74/ Log: hg merge jit-refactor-tests diff --git a/pypy/jit/metainterp/test/test_exception.py b/pypy/jit/metainterp/test/test_exception.py --- a/pypy/jit/metainterp/test/test_exception.py +++ b/pypy/jit/metainterp/test/test_exception.py @@ -35,10 +35,8 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_loops({'jump': 1, - 'int_gt': 1, 'guard_true': 1, - 'int_sub': 1}) - + self.check_resops({'jump': 2, 'guard_true': 2, + 'int_gt': 2, 'int_sub': 2}) def test_bridge_from_guard_exception(self): myjitdriver = JitDriver(greens = [], reds = ['n']) From noreply at buildbot.pypy.org Tue Nov 8 10:35:20 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 8 Nov 2011 10:35:20 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Re-adding David's changes, which I killed on merge :( Message-ID: <20111108093520.B13B1820C4@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48912:9b76414ee2fe Date: 2011-11-08 10:34 +0100 http://bitbucket.org/pypy/pypy/changeset/9b76414ee2fe/ Log: Re-adding David's changes, which I killed on merge :( diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -122,7 +122,6 @@ clt.asmmemmgr = [] return clt.asmmemmgr_blocks - # XXX adjust for 64 bit def _make_prologue(self, target_pos, frame_depth): if IS_PPC_32: # save it in previous frame (Backchain) @@ -288,7 +287,6 @@ return mc.materialize(self.cpu.asmmemmgr, [], self.cpu.gc_ll_descr.gcrootmap) - # XXX 64 bit adjustment needed def _gen_exit_path(self): mc = PPCBuilder() # @@ -334,7 +332,6 @@ # Save all registers which are managed by the register # allocator on top of the stack before decoding. - # XXX adjust for 64 bit def _save_managed_regs(self, mc): for i in range(len(r.MANAGED_REGS) - 1, -1, -1): reg = r.MANAGED_REGS[i] @@ -553,7 +550,10 @@ # store addr in force index field self.mc.load_imm(r.r0, memaddr) - self.mc.stw(r.r0.value, r.SPP.value, 0) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.SPP.value, 0) + else: + self.mc.std(r.r0.value, r.SPP.value, 0) if save_exc: path = self._leave_jitted_hook_save_exc @@ -593,7 +593,6 @@ clt.asmmemmgr_blocks = [] return clt.asmmemmgr_blocks - # XXX fix for 64 bit def regalloc_mov(self, prev_loc, loc): if prev_loc.is_imm(): value = prev_loc.getint() @@ -605,7 +604,10 @@ elif loc.is_stack(): offset = loc.as_key() * WORD - WORD self.mc.load_imm(r.r0.value, value) - self.mc.stw(r.r0.value, r.SPP.value, offset) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.SPP.value, offset) + else: + self.mc.std(r.r0.value, r.SPP.value, offset) return assert 0, "not supported location" elif prev_loc.is_stack(): @@ -613,13 +615,20 @@ # move from memory to register if loc.is_reg(): reg = loc.as_key() - self.mc.lwz(reg, r.SPP.value, offset) + if IS_PPC_32: + self.mc.lwz(reg, r.SPP.value, offset) + else: + self.mc.ld(reg, r.SPP.value, offset) return # move in memory elif loc.is_stack(): target_offset = loc.as_key() * WORD - WORD - self.mc.lwz(r.r0.value, r.SPP.value, offset) - self.mc.stw(r.r0.value, r.SPP.value, target_offset) + if IS_PPC_32: + self.mc.lwz(r.r0.value, r.SPP.value, offset) + self.mc.stw(r.r0.value, r.SPP.value, target_offset) + else: + self.mc.ld(r.r0.value, r.SPP.value, offset) + self.mc.std(r.r0.value, r.SPP.value, target_offset) return assert 0, "not supported location" elif prev_loc.is_reg(): @@ -632,31 +641,36 @@ # move to memory elif loc.is_stack(): offset = loc.as_key() * WORD - WORD - self.mc.stw(reg, r.SPP.value, offset) + if IS_PPC_32: + self.mc.stw(reg, r.SPP.value, offset) + else: + self.mc.std(reg, r.SPP.value, offset) return assert 0, "not supported location" assert 0, "not supported location" def _ensure_result_bit_extension(self, resloc, size, signed): - if size == 4: - return if size == 1: if not signed: #unsigned char - self.mc.load_imm(r.r0, 0xFF) - self.mc.and_(resloc.value, resloc.value, r.r0.value) + if IS_PPC32: + self.mc.rlwinm(resloc.value, resloc.value, 0, 24, 31) + else: + self.mc.rldicl(resloc.value, resloc.value, 0, 56) else: - self.mc.load_imm(r.r0, 24) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.sraw(resloc.value, resloc.value, r.r0.value) + self.mc.extsb(resloc.value, resloc.value) elif size == 2: if not signed: - self.mc.load_imm(r.r0, 16) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.srw(resloc.value, resloc.value, r.r0.value) + if IS_PPC_32: + self.mc.rlwinm(resloc.value, resloc.value, 0, 16, 31) + else: + self.mc.rldicl(resloc.value, resloc.value, 0, 48) else: - self.mc.load_imm(r.r0, 16) - self.mc.slw(resloc.value, resloc.value, r.r0.value) - self.mc.sraw(resloc.value, resloc.value, r.r0.value) + self.mc.extsh(resloc.value, resloc.value) + elif size == 4: + if not signed: + self.mc.rldicl(resloc.value, resloc.value, 0, 32) + else: + self.mc.extsw(resloc.value, resloc.value) def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: From noreply at buildbot.pypy.org Tue Nov 8 12:08:46 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 12:08:46 +0100 (CET) Subject: [pypy-commit] pypy default: fix test_urllib2_localnet - don't be too eager on closing the response, closing Message-ID: <20111108110846.5CDA0820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48913:9827978a2b97 Date: 2011-10-17 11:01 +0200 http://bitbucket.org/pypy/pypy/changeset/9827978a2b97/ Log: fix test_urllib2_localnet - don't be too eager on closing the response, closing it when we have a socket error seems to be enough to avoid the leak (and yet you pass tests). diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response From noreply at buildbot.pypy.org Tue Nov 8 12:08:47 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 12:08:47 +0100 (CET) Subject: [pypy-commit] pypy separate-applevel-numpy: merge default Message-ID: <20111108110847.BEEDB820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: separate-applevel-numpy Changeset: r48914:c9cb4eb54185 Date: 2011-10-17 17:58 +0200 http://bitbucket.org/pypy/pypy/changeset/c9cb4eb54185/ Log: merge default diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,8 +12,8 @@ def get_display_text(self): return None -def display_loops(loops, errmsg=None, highlight_loops=()): - graphs = [(loop, loop in highlight_loops) for loop in loops] +def display_loops(loops, errmsg=None, highlight_loops={}): + graphs = [(loop, highlight_loops.get(loop, 0)) for loop in loops] for graph, highlight in graphs: for op in graph.get_operations(): if is_interesting_guard(op): @@ -65,8 +65,7 @@ def add_graph(self, graph, highlight=False): graphindex = len(self.graphs) self.graphs.append(graph) - if highlight: - self.highlight_graphs[graph] = True + self.highlight_graphs[graph] = highlight for i, op in enumerate(graph.get_operations()): self.all_operations[op] = graphindex, i @@ -126,10 +125,13 @@ self.dotgen.emit('subgraph cluster%d {' % graphindex) label = graph.get_display_text() if label is not None: - if self.highlight_graphs.get(graph): - fillcolor = '#f084c2' + colorindex = self.highlight_graphs.get(graph, 0) + if colorindex == 1: + fillcolor = '#f084c2' # highlighted graph + elif colorindex == 2: + fillcolor = '#808080' # invalidated graph else: - fillcolor = '#84f0c2' + fillcolor = '#84f0c2' # normal color self.dotgen.emit_node(graphname, shape="octagon", label=label, fillcolor=fillcolor) self.pendingedges.append((graphname, diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -732,6 +732,7 @@ failed_states = None retraced_count = 0 terminating = False # see TerminatingLoopToken in compile.py + invalidated = False outermost_jitdriver_sd = None # and more data specified by the backend when the loop is compiled number = -1 @@ -934,6 +935,7 @@ self.loops = [] self.locations = [] self.aborted_keys = [] + self.invalidated_token_numbers = set() def set_history(self, history): self.operations = history.operations @@ -1012,7 +1014,12 @@ if loop in loops: loops.remove(loop) loops.append(loop) - display_loops(loops, errmsg, extraloops) + highlight_loops = dict.fromkeys(extraloops, 1) + for loop in loops: + if hasattr(loop, '_looptoken_number') and ( + loop._looptoken_number in self.invalidated_token_numbers): + highlight_loops.setdefault(loop, 2) + display_loops(loops, errmsg, highlight_loops) # ---------------------------------------------------------------- diff --git a/pypy/jit/metainterp/memmgr.py b/pypy/jit/metainterp/memmgr.py --- a/pypy/jit/metainterp/memmgr.py +++ b/pypy/jit/metainterp/memmgr.py @@ -68,7 +68,8 @@ debug_print("Loop tokens before:", oldtotal) max_generation = self.current_generation - (self.max_age-1) for looptoken in self.alive_loops.keys(): - if 0 <= looptoken.generation < max_generation: + if (0 <= looptoken.generation < max_generation or + looptoken.invalidated): del self.alive_loops[looptoken] newtotal = len(self.alive_loops) debug_print("Loop tokens freed: ", oldtotal - newtotal) diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -2,6 +2,7 @@ from pypy.rpython.lltypesystem import lltype, rclass from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.jit.metainterp.history import AbstractDescr +from pypy.rlib.objectmodel import we_are_translated def get_mutate_field_name(fieldname): @@ -50,13 +51,13 @@ class QuasiImmut(object): llopaque = True + compress_limit = 30 def __init__(self, cpu): self.cpu = cpu # list of weakrefs to the LoopTokens that must be invalidated if # this value ever changes self.looptokens_wrefs = [] - self.compress_limit = 30 def hide(self): qmut_ptr = self.cpu.ts.cast_instance_to_base_ref(self) @@ -73,8 +74,12 @@ self.looptokens_wrefs.append(wref_looptoken) def compress_looptokens_list(self): - self.looptokens_wrefs = [wref for wref in self.looptokens_wrefs - if wref() is not None] + newlist = [] + for wref in self.looptokens_wrefs: + looptoken = wref() + if looptoken is not None and not looptoken.invalidated: + newlist.append(wref) + self.looptokens_wrefs = newlist self.compress_limit = (len(self.looptokens_wrefs) + 15) * 2 def invalidate(self): @@ -85,8 +90,12 @@ self.looptokens_wrefs = [] for wref in wrefs: looptoken = wref() - if looptoken is not None: + if looptoken is not None and not looptoken.invalidated: + looptoken.invalidated = True self.cpu.invalidate_loop(looptoken) + if not we_are_translated(): + self.cpu.stats.invalidated_token_numbers.add( + looptoken.number) class QuasiImmutDescr(AbstractDescr): diff --git a/pypy/jit/metainterp/test/test_memmgr.py b/pypy/jit/metainterp/test/test_memmgr.py --- a/pypy/jit/metainterp/test/test_memmgr.py +++ b/pypy/jit/metainterp/test/test_memmgr.py @@ -18,6 +18,7 @@ class FakeLoopToken: generation = 0 + invalidated = False class _TestMemoryManager: diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -48,6 +48,13 @@ class QuasiImmutTests(object): + def setup_method(self, meth): + self.prev_compress_limit = QuasiImmut.compress_limit + QuasiImmut.compress_limit = 1 + + def teardown_method(self, meth): + QuasiImmut.compress_limit = self.prev_compress_limit + def test_simple_1(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) class Foo: @@ -289,7 +296,7 @@ return total res = self.meta_interp(main, []) - self.check_loop_count(9) + self.check_tree_loop_count(6) assert res == main() def test_change_during_running(self): @@ -317,7 +324,7 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, + self.check_loops(guard_not_invalidated=4, getfield_gc=0, call_may_force=0, guard_not_forced=0) def test_list_simple_1(self): @@ -453,10 +460,30 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, + self.check_loops(guard_not_invalidated=4, getfield_gc=0, getarrayitem_gc=0, getarrayitem_gc_pure=0, call_may_force=0, guard_not_forced=0) + def test_invalidated_loop_is_not_used_any_more_as_target(self): + myjitdriver = JitDriver(greens=['foo'], reds=['x']) + class Foo: + _immutable_fields_ = ['step?'] + @dont_look_inside + def residual(x, foo): + if x == 20: + foo.step = 1 + def f(x): + foo = Foo() + foo.step = 2 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x) + residual(x, foo) + x -= foo.step + return foo.step + res = self.meta_interp(f, [60]) + assert res == 1 + self.check_tree_loop_count(4) # at least not 2 like before + class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -178,7 +178,7 @@ if self.compiled_merge_points_wref is not None: for wref in self.compiled_merge_points_wref: looptoken = wref() - if looptoken is not None: + if looptoken is not None and not looptoken.invalidated: result.append(looptoken) return result diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -387,8 +387,8 @@ return '' text = str(py.code.Source(src).deindent().indent()) lines = text.splitlines(True) - if opindex is not None and 0 <= opindex < len(lines): - lines[opindex] = lines[opindex].rstrip() + '\t<=====\n' + if opindex is not None and 0 <= opindex <= len(lines): + lines.insert(opindex, '\n\t===== HERE =====\n') return ''.join(lines) # expected_src = self.preprocess_expected_src(expected_src) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -41,7 +41,7 @@ guard_true(i32, descr=...) i34 = int_add(i6, 1) --TICK-- - jump(p0, p1, p2, p3, p4, p5, i34, p7, p8, i9, i10, p11, i12, p13, descr=) + jump(p0, p1, p2, p3, p4, p5, i34, p7, p8, i9, i10, p11, i12, p13, descr=...) """) def test_long(self): @@ -106,7 +106,7 @@ i58 = int_add_ovf(i6, i57) guard_no_overflow(descr=...) --TICK-- - jump(p0, p1, p2, p3, p4, p5, i58, i7, descr=) + jump(p0, p1, p2, p3, p4, p5, i58, i7, descr=...) """) def test_str_mod(self): diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -1,5 +1,6 @@ """Support for Linux.""" +import sys from pypy.translator.platform.posix import BasePosix class BaseLinux(BasePosix): @@ -26,7 +27,11 @@ def library_dirs_for_libffi_a(self): # places where we need to look for libffi.a - return self.library_dirs_for_libffi() + ['/usr/lib'] + # XXX obscuuure! only look for libffi.a if run with translate.py + if 'translate' in sys.modules: + return self.library_dirs_for_libffi() + ['/usr/lib'] + else: + return [] class Linux(BaseLinux): From noreply at buildbot.pypy.org Tue Nov 8 12:08:48 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 12:08:48 +0100 (CET) Subject: [pypy-commit] pypy default: write a test that _get_interplevel_cls still works if we have multiple string Message-ID: <20111108110848.E872B820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48915:9d9d4b7af85f Date: 2011-11-08 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/9d9d4b7af85f/ Log: write a test that _get_interplevel_cls still works if we have multiple string implementations. just to avoid confusion I suppose diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + From noreply at buildbot.pypy.org Tue Nov 8 12:08:50 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 12:08:50 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20111108110850.1958C820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48916:cc101bad6f60 Date: 2011-11-08 12:08 +0100 http://bitbucket.org/pypy/pypy/changeset/cc101bad6f60/ Log: merge default From noreply at buildbot.pypy.org Tue Nov 8 13:21:19 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:19 +0100 (CET) Subject: [pypy-commit] pypy default: Move the import globally. Message-ID: <20111108122119.14152820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48917:daf1ca6435fb Date: 2011-11-08 12:24 +0100 http://bitbucket.org/pypy/pypy/changeset/daf1ca6435fb/ Log: Move the import globally. diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # From noreply at buildbot.pypy.org Tue Nov 8 13:21:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:20 +0100 (CET) Subject: [pypy-commit] pypy default: Add a passing test that rgc.ll_arraycopy cannot raise. Message-ID: <20111108122120.3F6E6820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48918:0f2d4f02b0e3 Date: 2011-11-08 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/0f2d4f02b0e3/ Log: Add a passing test that rgc.ll_arraycopy cannot raise. diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): From noreply at buildbot.pypy.org Tue Nov 8 13:21:21 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:21 +0100 (CET) Subject: [pypy-commit] pypy default: Add a test here. *Still* passing. Message-ID: <20111108122121.6CE2E820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48919:a800076e6e9c Date: 2011-11-08 12:33 +0100 http://bitbucket.org/pypy/pypy/changeset/a800076e6e9c/ Log: Add a test here. *Still* passing. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) From noreply at buildbot.pypy.org Tue Nov 8 13:21:22 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:22 +0100 (CET) Subject: [pypy-commit] pypy default: Yet another obscure attempt at catching the bug Message-ID: <20111108122122.97979820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48920:f03b9b71714c Date: 2011-11-08 12:47 +0100 http://bitbucket.org/pypy/pypy/changeset/f03b9b71714c/ Log: Yet another obscure attempt at catching the bug diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result From noreply at buildbot.pypy.org Tue Nov 8 13:21:23 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:23 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111108122123.C0F29820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48921:2e501c64546b Date: 2011-11-08 12:48 +0100 http://bitbucket.org/pypy/pypy/changeset/2e501c64546b/ Log: merge heads diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + From noreply at buildbot.pypy.org Tue Nov 8 13:21:24 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:24 +0100 (CET) Subject: [pypy-commit] pypy default: Don't use range() here, because it can raise MemoryError. Message-ID: <20111108122124.EBB92820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48922:b1d4b57f6170 Date: 2011-11-08 13:08 +0100 http://bitbucket.org/pypy/pypy/changeset/b1d4b57f6170/ Log: Don't use range() here, because it can raise MemoryError. diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) From noreply at buildbot.pypy.org Tue Nov 8 13:21:26 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 13:21:26 +0100 (CET) Subject: [pypy-commit] pypy default: No-op clean-up. Message-ID: <20111108122126.2A4C7820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48923:98df871c9c44 Date: 2011-11-08 13:08 +0100 http://bitbucket.org/pypy/pypy/changeset/98df871c9c44/ Log: No-op clean-up. diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -130,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): From noreply at buildbot.pypy.org Tue Nov 8 13:46:01 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 13:46:01 +0100 (CET) Subject: [pypy-commit] pypy release-1.7.x: Create a 1.7 branch. Bump version numbers Message-ID: <20111108124601.F069F820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.7.x Changeset: r48924:a01f9701efa7 Date: 2011-11-08 13:44 +0100 http://bitbucket.org/pypy/pypy/changeset/a01f9701efa7/ Log: Create a 1.7 branch. Bump version numbers diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.0" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 0, "final", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) From noreply at buildbot.pypy.org Tue Nov 8 13:46:03 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 13:46:03 +0100 (CET) Subject: [pypy-commit] pypy default: for completeness sake, bump numbers in the default as well Message-ID: <20111108124603.262C3820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r48925:d5cc360c2060 Date: 2011-11-08 13:44 +0100 http://bitbucket.org/pypy/pypy/changeset/d5cc360c2060/ Log: for completeness sake, bump numbers in the default as well diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) From noreply at buildbot.pypy.org Tue Nov 8 14:09:58 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:09:58 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111108130958.D1E47820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48926:340cdb94a9f5 Date: 2011-11-08 10:31 +0100 http://bitbucket.org/pypy/pypy/changeset/340cdb94a9f5/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_exception.py b/pypy/jit/metainterp/test/test_exception.py --- a/pypy/jit/metainterp/test/test_exception.py +++ b/pypy/jit/metainterp/test/test_exception.py @@ -35,7 +35,7 @@ return n res = self.meta_interp(f, [10]) assert res == 0 - self.check_resops({'jump': 2, 'guard_true': 2, + self.check_resops({'jump': 1, 'guard_true': 2, 'int_gt': 2, 'int_sub': 2}) def test_bridge_from_guard_exception(self): From noreply at buildbot.pypy.org Tue Nov 8 14:10:00 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:10:00 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: exception support Message-ID: <20111108131000.0EDCE820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48927:31885b3fa5e9 Date: 2011-11-08 10:43 +0100 http://bitbucket.org/pypy/pypy/changeset/31885b3fa5e9/ Log: exception support diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2127,20 +2127,19 @@ assert False # FIXME: kill TerminatingLoopToken? # FIXME: can we call compile_trace? - self.history.record(rop.FINISH, exits, None, descr=loop_tokens[0].finishdescr) + token = loop_tokens[0].finishdescr + self.history.record(rop.FINISH, exits, None, descr=token) target_token = compile.compile_trace(self, self.resumekey) - if not target_token: + if target_token is not token: compile.giveup() def compile_exit_frame_with_exception(self, valuebox): self.gen_store_back_in_virtualizable() - # temporarily put a JUMP to a pseudo-loop - self.history.record(rop.JUMP, [valuebox], None) sd = self.staticdata - loop_tokens = sd.loop_tokens_exit_frame_with_exception_ref - target_loop_token = compile.compile_new_bridge(self, loop_tokens, - self.resumekey) - if target_loop_token is not loop_tokens[0]: + token = sd.loop_tokens_exit_frame_with_exception_ref[0].finishdescr + self.history.record(rop.FINISH, [valuebox], None, descr=token) + target_token = compile.compile_trace(self, self.resumekey) + if target_token is not token: compile.giveup() @specialize.arg(1) From noreply at buildbot.pypy.org Tue Nov 8 14:10:01 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:10:01 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: dont crash if not inlining the same short preamble as is beeing produced Message-ID: <20111108131001.3ACE2820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48928:a6c88d02ef0e Date: 2011-11-08 11:07 +0100 http://bitbucket.org/pypy/pypy/changeset/a6c88d02ef0e/ Log: dont crash if not inlining the same short preamble as is beeing produced diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -224,6 +224,10 @@ def close_bridge(self, start_label): inputargs = self.inputargs short_jumpargs = inputargs[:] + + # We dont need to inline the short preamble we are creating as we are conneting + # the bridge to a different trace with a different short preamble + self.short_inliner = None newoperations = self.optimizer.get_newoperations() self.boxes_created_this_iteration = {} @@ -406,7 +410,7 @@ if op is None: return None if op.result is not None and op.result in self.short_seen: - if emit: + if emit and self.short_inliner: return self.short_inliner.inline_arg(op.result) else: return None @@ -425,7 +429,7 @@ self.short.append(op) self.short_seen[op.result] = True - if emit: + if emit and self.short_inliner: newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) else: @@ -535,7 +539,7 @@ retraced_count = cell_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit if retraced_count Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48929:27c73644ff5a Date: 2011-11-08 11:32 +0100 http://bitbucket.org/pypy/pypy/changeset/27c73644ff5a/ Log: conversion in progress diff --git a/pypy/jit/metainterp/test/test_string.py b/pypy/jit/metainterp/test/test_string.py --- a/pypy/jit/metainterp/test/test_string.py +++ b/pypy/jit/metainterp/test/test_string.py @@ -30,7 +30,7 @@ return i res = self.meta_interp(f, [10, True, _str('h')], listops=True) assert res == 5 - self.check_loops(**{self.CALL: 1, self.CALL_PURE: 0, 'everywhere': True}) + self.check_resops(**{self.CALL: 1, self.CALL_PURE: 0}) def test_eq_folded(self): _str = self._str @@ -50,7 +50,7 @@ return i res = self.meta_interp(f, [10, True, _str('h')], listops=True) assert res == 5 - self.check_loops(**{self.CALL: 0, self.CALL_PURE: 0}) + self.check_resops(**{self.CALL: 0, self.CALL_PURE: 0}) def test_newstr(self): _str, _chr = self._str, self._chr @@ -85,7 +85,7 @@ n -= 1 return 42 self.meta_interp(f, [6]) - self.check_loops(newstr=0, strsetitem=0, strlen=0, + self.check_resops(newstr=0, strsetitem=0, strlen=0, newunicode=0, unicodesetitem=0, unicodelen=0) def test_char2string_escape(self): @@ -126,7 +126,7 @@ return total res = self.meta_interp(f, [6]) assert res == 21 - self.check_loops(newstr=0, strgetitem=0, strsetitem=0, strlen=0, + self.check_resops(newstr=0, strgetitem=0, strsetitem=0, strlen=0, newunicode=0, unicodegetitem=0, unicodesetitem=0, unicodelen=0) @@ -147,7 +147,7 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(newstr=0, strsetitem=0, + self.check_resops(newstr=0, strsetitem=0, newunicode=0, unicodesetitem=0, call=0, call_pure=0) From noreply at buildbot.pypy.org Tue Nov 8 14:10:03 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:10:03 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: renaming, redefining and reeanbling the "loop" counters (in progress) Message-ID: <20111108131003.94FC8820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48930:9b50e8266be0 Date: 2011-11-08 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/9b50e8266be0/ Log: renaming, redefining and reeanbling the "loop" counters (in progress) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -161,34 +161,35 @@ def check_loops(self, expected=None, everywhere=False, **check): get_stats().check_loops(expected=expected, everywhere=everywhere, **check) - def check_loop_count(self, count): - """NB. This is a hack; use check_tree_loop_count() or - check_enter_count() for the real thing. - This counts as 1 every bridge in addition to every loop; and it does - not count at all the entry bridges from interpreter, although they - are TreeLoops as well.""" - return # FIXME - assert get_stats().compiled_count == count - def check_tree_loop_count(self, count): - return # FIXME + def check_trace_count(self, count): + # The number of traces compiled assert len(get_stats().loops) == count - def check_loop_count_at_most(self, count): - return # FIXME - assert get_stats().compiled_count <= count + def check_trace_count_at_most(self, count): + assert len(get_stats().loops) <= count + + def check_jitcell_token_count(self, count): + tokens = set() + for loop in get_stats().loops: + for op in loop.operations: + descr = op.getdescr() + if isinstance(descr, history.TargetToken): + descr = descr.targeting_jitcell_token + if isinstance(descr, history.JitCellToken): + tokens.add(descr) + assert len(tokens) == count + def check_enter_count(self, count): - return # FIXME assert get_stats().enter_count == count def check_enter_count_at_most(self, count): - return # FIXME assert get_stats().enter_count <= count + def check_jumps(self, maxcount): return # FIXME assert get_stats().exec_jumps <= maxcount + def check_aborted_count(self, count): - return # FIXME assert get_stats().aborted_count == count def check_aborted_count_at_least(self, count): - return # FIXME assert get_stats().aborted_count >= count def meta_interp(self, *args, **kwds): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -78,7 +78,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 42 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'guard_true': 2, 'int_sub': 2}) @@ -107,7 +107,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 1323 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(int_mul=3) def test_loop_variant_mul_ovf(self): @@ -124,7 +124,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 1323 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(int_mul_ovf=3) def test_loop_invariant_mul1(self): @@ -139,7 +139,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 252 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) @@ -157,7 +157,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 308 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops({'jump': 1, 'int_lshift': 2, 'int_gt': 2, 'int_mul_ovf': 1, 'int_add': 4, 'guard_true': 2, 'guard_no_overflow': 1, @@ -177,7 +177,7 @@ return res res = self.meta_interp(f, [6, 32]) assert res == 3427 - self.check_loop_count(3) + self.check_trace_count(3) def test_loop_invariant_mul_bridge_maintaining1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) @@ -193,7 +193,7 @@ return res res = self.meta_interp(f, [6, 32]) assert res == 1167 - self.check_loop_count(3) + self.check_trace_count(3) self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, 'guard_true': 3, 'int_sub': 4, 'jump': 2, 'int_mul': 2, 'guard_false': 2}) @@ -213,7 +213,7 @@ return res res = self.meta_interp(f, [6, 32]) assert res == 1692 - self.check_loop_count(3) + self.check_trace_count(3) self.check_resops({'int_lt': 3, 'int_gt': 2, 'int_add': 5, 'guard_true': 3, 'int_sub': 4, 'jump': 2, 'int_mul': 2, 'guard_false': 2}) @@ -233,7 +233,7 @@ return res res = self.meta_interp(f, [6, 32, 16]) assert res == 1692 - self.check_loop_count(3) + self.check_trace_count(3) self.check_resops({'int_lt': 2, 'int_gt': 4, 'guard_false': 2, 'guard_true': 4, 'int_sub': 4, 'jump': 3, 'int_mul': 3, 'int_add': 4}) @@ -256,7 +256,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 252 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'getfield_gc_pure': 1, 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) @@ -559,11 +559,11 @@ # res = self.meta_interp(f, [10, 84]) assert res == -6 - self.check_loop_count(0) + self.check_trace_count(0) # res = self.meta_interp(f, [3, 19]) assert res == -2 - self.check_loop_count(1) + self.check_trace_count(1) def test_can_never_inline(self): def can_never_inline(x): @@ -858,7 +858,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 42.0 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops({'jump': 1, 'float_gt': 2, 'float_add': 2, 'float_sub': 2, 'guard_true': 2}) @@ -874,7 +874,7 @@ res = self.meta_interp(f, [7]) assert res == 0 - def test_bridge_from_interpreter(self): + def test_bridge_from_interpreter_1(self): mydriver = JitDriver(reds = ['n'], greens = []) def f(n): @@ -884,7 +884,7 @@ n -= 1 self.meta_interp(f, [20], repeat=7) - self.check_tree_loop_count(2) # the loop and the entry path + self.check_jitcell_token_count(2) # the loop and the entry path # we get: # ENTER - compile the new loop and the entry bridge # ENTER - compile the leaving path @@ -1255,11 +1255,11 @@ res = self.meta_interp(f, [10, 3]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) res = self.meta_interp(f, [10, 13]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 - self.check_tree_loop_count(0) + self.check_jitcell_token_count(0) def test_dont_look_inside(self): @dont_look_inside @@ -1340,7 +1340,7 @@ return res res = self.meta_interp(f, [6, 7]) assert res == 42 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(call=2) def test_merge_guardclass_guardvalue(self): @@ -1635,7 +1635,7 @@ promote(a) x -= 1 self.meta_interp(f, [50]) - self.check_loop_count(1) + self.check_trace_count(1) # this checks that the logic triggered by make_a_counter_per_value() # works and prevents generating tons of bridges @@ -1730,7 +1730,7 @@ return a1.val + b1.val res = self.meta_interp(g, [6, 7]) assert res == 6*8 + 6**8 - self.check_loop_count(5) + self.check_trace_count(5) self.check_resops({'guard_class': 2, 'int_gt': 4, 'getfield_gc': 4, 'guard_true': 4, 'int_sub': 4, 'jump': 2, 'int_mul': 2, @@ -1774,7 +1774,7 @@ return a1.val + b1.val res = self.meta_interp(g, [6, 20]) assert res == g(6, 20) - self.check_loop_count(9) + self.check_trace_count(9) self.check_resops(getarrayitem_gc=10) def test_multiple_specialied_versions_bridge(self): @@ -1962,7 +1962,7 @@ return a1.val + b1.val res = self.meta_interp(g, [3, 23]) assert res == 7068153 - self.check_loop_count(7) + self.check_trace_count(7) self.check_resops(guard_true=6, guard_class=2, int_mul=3, int_add=3, guard_false=3) @@ -2048,7 +2048,7 @@ return n res = self.meta_interp(f, [sys.maxint-10]) assert res == 11 - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) def test_wrap_around_mul(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'n']) @@ -2064,7 +2064,7 @@ return n res = self.meta_interp(f, [sys.maxint>>10]) assert res == 11 - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) def test_wrap_around_sub(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'n']) @@ -2080,7 +2080,7 @@ return n res = self.meta_interp(f, [10-sys.maxint]) assert res == 12 - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) def test_caching_setfield(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'i', 'n', 'a', 'node']) @@ -2600,9 +2600,9 @@ i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) - self.check_tree_loop_count(4) + self.check_jitcell_token_count(4) assert self.meta_interp(f, [20, 3]) == f(20, 3) - self.check_tree_loop_count(5) + self.check_jitcell_token_count(5) def test_max_retrace_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) @@ -2619,9 +2619,9 @@ i += 1 return sa assert self.meta_interp(f, [20, 1]) == f(20, 1) - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) assert self.meta_interp(f, [20, 10]) == f(20, 10) - self.check_tree_loop_count(5) + self.check_jitcell_token_count(5) def test_retrace_limit_with_extra_guards(self): @@ -2642,9 +2642,9 @@ i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) - self.check_tree_loop_count(4) + self.check_jitcell_token_count(4) assert self.meta_interp(f, [20, 3]) == f(20, 3) - self.check_tree_loop_count(5) + self.check_jitcell_token_count(5) def test_retrace_ending_up_retrazing_another_loop(self): @@ -2692,7 +2692,7 @@ # Thus we end up with: # 1 preamble and 1 specialized version of first loop # 1 preamble and 2 specialized version of second loop - self.check_tree_loop_count(2 + 3) + self.check_jitcell_token_count(2 + 3) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. @@ -2743,14 +2743,14 @@ res = self.meta_interp(f, [10, 7]) assert res == f(10, 7) - self.check_tree_loop_count(4) + self.check_jitcell_token_count(4) def g(n): return f(n, 2) + f(n, 3) res = self.meta_interp(g, [10]) assert res == g(10) - self.check_tree_loop_count(6) + self.check_jitcell_token_count(6) def g(n): @@ -2758,7 +2758,7 @@ res = self.meta_interp(g, [10]) assert res == g(10) - self.check_tree_loop_count(8) + self.check_jitcell_token_count(8) def test_frame_finished_during_retrace(self): class Base(object): @@ -2887,7 +2887,7 @@ # Thus we end up with: # 1 preamble and 1 specialized version of first loop # 1 preamble and 2 specialized version of second loop - self.check_tree_loop_count(2 + 3) + self.check_jitcell_token_count(2 + 3) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. @@ -2899,7 +2899,7 @@ res = self.meta_interp(g, [10]) assert res == g(10) # 1 preamble and 6 speciealized versions of each loop - self.check_tree_loop_count(2*(1 + 6)) + self.check_jitcell_token_count(2*(1 + 6)) def test_continue_tracing_with_boxes_in_start_snapshot_replaced_by_optimizer(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'n', 'a', 'b']) @@ -3148,7 +3148,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_tree_loop_count(3) + self.check_jitcell_token_count(3) def test_two_loopinvariant_arrays2(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3171,7 +3171,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_tree_loop_count(3) + self.check_jitcell_token_count(3) def test_two_loopinvariant_arrays3(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3195,7 +3195,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_tree_loop_count(2) + self.check_jitcell_token_count(2) def test_two_loopinvariant_arrays_boxed(self): class A(object): From noreply at buildbot.pypy.org Tue Nov 8 14:10:04 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:10:04 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: make sure all jitcell tokens and traces are actually counted Message-ID: <20111108131004.CE5B4820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48931:12fef84a6bd0 Date: 2011-11-08 12:51 +0100 http://bitbucket.org/pypy/pypy/changeset/12fef84a6bd0/ Log: make sure all jitcell tokens and traces are actually counted diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -270,10 +270,7 @@ metainterp_sd.profiler.end_backend() metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): - if type != "entry bridge": - metainterp_sd.stats.compiled() - else: - loop._ignore_during_counting = True + metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) @@ -679,6 +676,7 @@ # send the new_loop to warmspot.py, to be called directly the next time jitdriver_sd.warmstate.attach_procedure_to_interp( self.original_greenkey, jitcell_token) + metainterp_sd.stats.add_jitcell_token(jitcell_token) def reset_counter_from_failure(self): pass diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -963,6 +963,9 @@ def clear(self): pass + def add_jitcell_token(self, token): + pass + class Stats(object): """For tests.""" @@ -976,6 +979,7 @@ self.locations = [] self.aborted_keys = [] self.invalidated_token_numbers = set() + self.jitcell_tokens = set() def clear(self): del self.loops[:] @@ -986,6 +990,9 @@ self.enter_count = 0 self.aborted_count = 0 + def add_jitcell_token(self, token): + self.jitcell_tokens.add(token) + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2040,6 +2040,8 @@ start_resumedescr) if target_token is not None: self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.targeting_jitcell_token) + self.staticdata.stats.add_jitcell_token(target_token.targeting_jitcell_token) + if target_token is not None: # raise if it *worked* correctly self.history.inputargs = None diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -161,22 +161,15 @@ def check_loops(self, expected=None, everywhere=False, **check): get_stats().check_loops(expected=expected, everywhere=everywhere, **check) + def check_trace_count(self, count): # The number of traces compiled - assert len(get_stats().loops) == count + assert get_stats().compiled_count == count def check_trace_count_at_most(self, count): - assert len(get_stats().loops) <= count + assert get_stats().compiled_count <= count def check_jitcell_token_count(self, count): - tokens = set() - for loop in get_stats().loops: - for op in loop.operations: - descr = op.getdescr() - if isinstance(descr, history.TargetToken): - descr = descr.targeting_jitcell_token - if isinstance(descr, history.JitCellToken): - tokens.add(descr) - assert len(tokens) == count + assert len(get_stats().jitcell_tokens) == count def check_enter_count(self, count): assert get_stats().enter_count == count diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -164,18 +164,18 @@ 'int_sub': 2}) def test_loop_invariant_mul_bridge1(self): - myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) - def f(x, y): + myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x', 'n']) + def f(x, y, n): res = 0 while y > 0: - myjitdriver.can_enter_jit(x=x, y=y, res=res) - myjitdriver.jit_merge_point(x=x, y=y, res=res) + myjitdriver.can_enter_jit(x=x, y=y, n=n, res=res) + myjitdriver.jit_merge_point(x=x, y=y, n=n, res=res) res += x * x - if y<16: + if y Author: Hakan Ardo Branch: jit-targets Changeset: r48932:1515bd7380ed Date: 2011-11-08 13:02 +0100 http://bitbucket.org/pypy/pypy/changeset/1515bd7380ed/ Log: fix test to not retrace when the guard is created and only to count the number of int_mul which is what the test is about diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -180,43 +180,39 @@ self.check_trace_count(3) def test_loop_invariant_mul_bridge_maintaining1(self): - myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) - def f(x, y): + myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x', 'n']) + def f(x, y, n): res = 0 while y > 0: - myjitdriver.can_enter_jit(x=x, y=y, res=res) - myjitdriver.jit_merge_point(x=x, y=y, res=res) + myjitdriver.can_enter_jit(x=x, y=y, res=res, n=n) + myjitdriver.jit_merge_point(x=x, y=y, res=res, n=n) res += x * x - if y<16: + if y 0: - myjitdriver.can_enter_jit(x=x, y=y, res=res) - myjitdriver.jit_merge_point(x=x, y=y, res=res) + myjitdriver.can_enter_jit(x=x, y=y, res=res, n=n) + myjitdriver.jit_merge_point(x=x, y=y, res=res, n=n) z = x * x res += z - if y<16: + if y Author: Hakan Ardo Branch: jit-targets Changeset: r48933:7d1b9a847447 Date: 2011-11-08 13:27 +0100 http://bitbucket.org/pypy/pypy/changeset/7d1b9a847447/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -880,7 +880,9 @@ n -= 1 self.meta_interp(f, [20], repeat=7) - self.check_jitcell_token_count(2) # the loop and the entry path + # the loop and the entry path as a single trace + self.check_jitcell_token_count(1) + # we get: # ENTER - compile the new loop and the entry bridge # ENTER - compile the leaving path @@ -1251,7 +1253,7 @@ res = self.meta_interp(f, [10, 3]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 - self.check_jitcell_token_count(2) + self.check_jitcell_token_count(1) res = self.meta_interp(f, [10, 13]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 @@ -1726,7 +1728,7 @@ return a1.val + b1.val res = self.meta_interp(g, [6, 7]) assert res == 6*8 + 6**8 - self.check_trace_count(5) + self.check_trace_count(4) self.check_resops({'guard_class': 2, 'int_gt': 4, 'getfield_gc': 4, 'guard_true': 4, 'int_sub': 4, 'jump': 2, 'int_mul': 2, @@ -1770,7 +1772,7 @@ return a1.val + b1.val res = self.meta_interp(g, [6, 20]) assert res == g(6, 20) - self.check_trace_count(9) + self.check_trace_count(8) self.check_resops(getarrayitem_gc=10) def test_multiple_specialied_versions_bridge(self): @@ -1958,7 +1960,7 @@ return a1.val + b1.val res = self.meta_interp(g, [3, 23]) assert res == 7068153 - self.check_trace_count(7) + self.check_trace_count(6) self.check_resops(guard_true=6, guard_class=2, int_mul=3, int_add=3, guard_false=3) @@ -2044,7 +2046,7 @@ return n res = self.meta_interp(f, [sys.maxint-10]) assert res == 11 - self.check_jitcell_token_count(2) + self.check_jitcell_token_count(1) def test_wrap_around_mul(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'n']) @@ -2060,7 +2062,7 @@ return n res = self.meta_interp(f, [sys.maxint>>10]) assert res == 11 - self.check_jitcell_token_count(2) + self.check_jitcell_token_count(1) def test_wrap_around_sub(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'n']) @@ -2076,7 +2078,7 @@ return n res = self.meta_interp(f, [10-sys.maxint]) assert res == 12 - self.check_jitcell_token_count(2) + self.check_jitcell_token_count(1) def test_caching_setfield(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'i', 'n', 'a', 'node']) @@ -2596,9 +2598,9 @@ i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) + self.check_jitcell_token_count(3) + assert self.meta_interp(f, [20, 3]) == f(20, 3) self.check_jitcell_token_count(4) - assert self.meta_interp(f, [20, 3]) == f(20, 3) - self.check_jitcell_token_count(5) def test_max_retrace_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) From noreply at buildbot.pypy.org Tue Nov 8 14:10:08 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 14:10:08 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: respect retrace limit Message-ID: <20111108131008.60885820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48934:8f5285d28ef9 Date: 2011-11-08 13:38 +0100 http://bitbucket.org/pypy/pypy/changeset/8f5285d28ef9/ Log: respect retrace limit diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -506,42 +506,48 @@ pass target.virtual_state.debug_print(debugmsg, bad) - if ok: - debug_stop('jit-log-virtualstate') + if ok: + debug_stop('jit-log-virtualstate') - values = [self.getvalue(arg) - for arg in jumpop.getarglist()] - args = target.virtual_state.make_inputargs(values, self.optimizer, - keyboxes=True) - short_inputargs = target.short_preamble[0].getarglist() - inliner = Inliner(short_inputargs, args) + values = [self.getvalue(arg) + for arg in jumpop.getarglist()] + args = target.virtual_state.make_inputargs(values, self.optimizer, + keyboxes=True) + short_inputargs = target.short_preamble[0].getarglist() + inliner = Inliner(short_inputargs, args) - for guard in extra_guards: - if guard.is_guard(): - descr = target.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(descr) - guard.setdescr(descr) - self.optimizer.send_extra_operation(guard) + for guard in extra_guards: + if guard.is_guard(): + descr = target.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(descr) + guard.setdescr(descr) + self.optimizer.send_extra_operation(guard) - try: - for shop in target.short_preamble[1:]: - newop = inliner.inline_op(shop) - self.optimizer.send_extra_operation(newop) - except InvalidLoop: - debug_print("Inlining failed unexpectedly", - "jumping to preamble instead") - assert cell_token.target_tokens[0].virtual_state is None - jumpop.setdescr(cell_token.target_tokens[0]) - self.optimizer.send_extra_operation(jumpop) - return True + try: + for shop in target.short_preamble[1:]: + newop = inliner.inline_op(shop) + self.optimizer.send_extra_operation(newop) + except InvalidLoop: + debug_print("Inlining failed unexpectedly", + "jumping to preamble instead") + assert cell_token.target_tokens[0].virtual_state is None + jumpop.setdescr(cell_token.target_tokens[0]) + self.optimizer.send_extra_operation(jumpop) + return True debug_stop('jit-log-virtualstate') - retraced_count = cell_token.retraced_count limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit - if retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48935:d200a90155ef Date: 2011-11-08 13:51 +0100 http://bitbucket.org/pypy/pypy/changeset/d200a90155ef/ Log: fix test to actually count the number of specialized versions of the loop diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -536,6 +536,8 @@ return True debug_stop('jit-log-virtualstate') + if self.did_import: + return False limit = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.retrace_limit if cell_token.retraced_count Author: Hakan Ardo Branch: jit-targets Changeset: r48936:5a5c19100cf4 Date: 2011-11-08 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/5a5c19100cf4/ Log: support max_retrace_guards diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -322,6 +322,10 @@ raise InvalidLoop debug_stop('jit-log-virtualstate') + maxguards = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.max_retrace_guards + if self.optimizer.emitted_guards > maxguards: + jumpop.getdescr().targeting_jitcell_token.retraced_count = sys.maxint + def finilize_short_preamble(self, start_label): short = self.short assert short[-1].getopnum() == rop.JUMP diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2599,10 +2599,10 @@ return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) self.check_jitcell_token_count(1) - assert len(get_stats().jitcell_tokens.pop().target_tokens) == 4 + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 4 assert self.meta_interp(f, [20, 3]) == f(20, 3) self.check_jitcell_token_count(1) - assert len(get_stats().jitcell_tokens.pop().target_tokens) == 5 + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 def test_max_retrace_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) @@ -2619,10 +2619,11 @@ i += 1 return sa assert self.meta_interp(f, [20, 1]) == f(20, 1) - self.check_jitcell_token_count(2) + self.check_jitcell_token_count(1) + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 2 assert self.meta_interp(f, [20, 10]) == f(20, 10) - self.check_jitcell_token_count(5) - + self.check_jitcell_token_count(1) + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', From noreply at buildbot.pypy.org Tue Nov 8 15:35:28 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 8 Nov 2011 15:35:28 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: PPC64 support for _save_managed_regs Message-ID: <20111108143528.A46C5820C4@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r48937:2634db7ce5b0 Date: 2011-11-08 09:35 -0500 http://bitbucket.org/pypy/pypy/changeset/2634db7ce5b0/ Log: PPC64 support for _save_managed_regs diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -338,7 +338,7 @@ if IS_PPC_32: mc.stw(reg.value, r.SP.value, -(len(r.MANAGED_REGS) - i) * WORD) else: - assert 0, "not implemented yet" + mc.std(reg.value, r.SP.value, -(len(r.MANAGED_REGS) - i) * WORD) def gen_bootstrap_code(self, nonfloatlocs, inputargs): for i in range(len(nonfloatlocs)): From noreply at buildbot.pypy.org Tue Nov 8 16:44:10 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 16:44:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: a nicely passing test and a test we want to work on Message-ID: <20111108154410.9438B820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48938:8e3ada08df4f Date: 2011-11-08 16:43 +0100 http://bitbucket.org/pypy/pypy/changeset/8e3ada08df4f/ Log: a nicely passing test and a test we want to work on diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -183,3 +183,11 @@ a -> 0 -> 1 """) assert interp.results[0].value.val == 2 + + def test_multidim_getitem_2(self): + interp = self.run(""" + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = a + a + b -> 1 -> 1 + """) + assert interp.results[0].value.val == 8 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -737,6 +737,11 @@ a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == array([[1+1, 2+2], [3+3, 4+4], [5+5, 6+6]])).all() + def test_getitem_add(self): + from numpy import array + a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) + assert (a + a)[1, 1] == 8 + class AppTestSupport(object): def setup_class(cls): import struct diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -243,6 +243,35 @@ 'setarrayitem_raw': 1, 'int_add': 3, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + def define_multidim(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = a + a + b -> 1 -> 1 + """ + + def test_multidim(self): + result = self.run('multidim') + assert result == 8 + self.check_loops({'float_add': 1, 'getarrayitem_raw': 2, + 'guard_true': 1, 'int_add': 1, 'int_lt': 1, + 'jump': 1, 'setarrayitem_raw': 1}) + + def define_multidim_slice(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]] + b = a -> ::2 + c = b + b + c -> 1 -> 1 + """ + + def test_multidim_slice(self): + result = self.run('multidim_slice') + assert result == 12 + py.test.skip("improve") + self.check_loops({}) + + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") From noreply at buildbot.pypy.org Tue Nov 8 16:51:12 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 16:51:12 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: Merge with default Message-ID: <20111108155112.90D52820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48939:64b52e0ecfcc Date: 2011-11-07 17:26 +0100 http://bitbucket.org/pypy/pypy/changeset/64b52e0ecfcc/ Log: Merge with default diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' Author: Christian Tismer Branch: win64_gborg Changeset: r48940:0ea921260824 Date: 2011-11-08 00:41 +0100 http://bitbucket.org/pypy/pypy/changeset/0ea921260824/ Log: removed the last bug from test_typed.py ehich is not related to rwin32.py buggyness diff --git a/pypy/rlib/rdtoa.py b/pypy/rlib/rdtoa.py --- a/pypy/rlib/rdtoa.py +++ b/pypy/rlib/rdtoa.py @@ -244,8 +244,8 @@ # The only failure mode is no memory raise MemoryError try: - buflen = (rffi.cast(rffi.LONG, end_ptr[0]) - - rffi.cast(rffi.LONG, digits)) + buflen = (rffi.cast(lltype.Signed, end_ptr[0]) - + rffi.cast(lltype.Signed, digits)) sign = rffi.cast(lltype.Signed, sign_ptr[0]) # Handle nan and inf From noreply at buildbot.pypy.org Tue Nov 8 16:51:15 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 16:51:15 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: Merge with default Message-ID: <20111108155115.35B0C820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48941:8c667375eed2 Date: 2011-11-08 00:43 +0100 http://bitbucket.org/pypy/pypy/changeset/8c667375eed2/ Log: Merge with default diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -249,6 +249,8 @@ CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -260,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -327,13 +330,13 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -341,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -363,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -497,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -681,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -6281,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6296,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -529,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -543,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] From noreply at buildbot.pypy.org Tue Nov 8 16:51:16 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 16:51:16 +0100 (CET) Subject: [pypy-commit] pypy default: added sys.maxint to the compilation hash, to avoid obscure errors on windows Message-ID: <20111108155116.6602A820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r48942:905df0d6d47e Date: 2011-11-08 16:37 +0100 http://bitbucket.org/pypy/pypy/changeset/905df0d6d47e/ Log: added sys.maxint to the compilation hash, to avoid obscure errors on windows diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough From noreply at buildbot.pypy.org Tue Nov 8 16:51:17 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 16:51:17 +0100 (CET) Subject: [pypy-commit] pypy default: Merge Message-ID: <20111108155117.A8A76820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r48943:7c7c46d6a78d Date: 2011-11-08 16:41 +0100 http://bitbucket.org/pypy/pypy/changeset/7c7c46d6a78d/ Log: Merge diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): From noreply at buildbot.pypy.org Tue Nov 8 17:03:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:26 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111108160326.033F0820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48944:ac7073eb2075 Date: 2011-11-08 15:09 +0100 http://bitbucket.org/pypy/pypy/changeset/ac7073eb2075/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2643,9 +2643,11 @@ i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) - self.check_jitcell_token_count(4) + self.check_jitcell_token_count(1) + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 4 assert self.meta_interp(f, [20, 3]) == f(20, 3) - self.check_jitcell_token_count(5) + self.check_jitcell_token_count(1) + assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 def test_retrace_ending_up_retrazing_another_loop(self): @@ -2688,12 +2690,8 @@ # The attempts of retracing first loop will end up retracing the # second and thus fail 5 times, saturating the retrace_count. Instead a - # bridge back to the preamble of the first loop is produced. A guard in - # this bridge is later traced resulting in a retrace of the second loop. - # Thus we end up with: - # 1 preamble and 1 specialized version of first loop - # 1 preamble and 2 specialized version of second loop - self.check_jitcell_token_count(2 + 3) + # bridge back to the preamble of the first loop is produced. + self.check_trace_count(6) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. @@ -2704,10 +2702,12 @@ res = self.meta_interp(g, [10]) assert res == g(10) - # 1 preamble and 6 speciealized versions of each loop - for loop in get_stats().loops: - assert len(loop.operations[0].getdescr().targeting_jitcell_token.target_tokens) <= 7 - + + self.check_jitcell_token_count(2) + for cell in get_stats().jitcell_tokens: + # Initialal trace with two labels and 5 retraces + assert len(cell.target_tokens) <= 7 + def test_nested_retrace(self): myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) From noreply at buildbot.pypy.org Tue Nov 8 17:03:27 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:27 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: indent Message-ID: <20111108160327.2E9A0820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48945:7d76cfa50b41 Date: 2011-11-08 15:14 +0100 http://bitbucket.org/pypy/pypy/changeset/7d76cfa50b41/ Log: indent diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -510,34 +510,34 @@ pass target.virtual_state.debug_print(debugmsg, bad) - if ok: - debug_stop('jit-log-virtualstate') + if ok: + debug_stop('jit-log-virtualstate') - values = [self.getvalue(arg) - for arg in jumpop.getarglist()] - args = target.virtual_state.make_inputargs(values, self.optimizer, - keyboxes=True) - short_inputargs = target.short_preamble[0].getarglist() - inliner = Inliner(short_inputargs, args) + values = [self.getvalue(arg) + for arg in jumpop.getarglist()] + args = target.virtual_state.make_inputargs(values, self.optimizer, + keyboxes=True) + short_inputargs = target.short_preamble[0].getarglist() + inliner = Inliner(short_inputargs, args) - for guard in extra_guards: - if guard.is_guard(): - descr = target.start_resumedescr.clone_if_mutable() - inliner.inline_descr_inplace(descr) - guard.setdescr(descr) - self.optimizer.send_extra_operation(guard) + for guard in extra_guards: + if guard.is_guard(): + descr = target.start_resumedescr.clone_if_mutable() + inliner.inline_descr_inplace(descr) + guard.setdescr(descr) + self.optimizer.send_extra_operation(guard) - try: - for shop in target.short_preamble[1:]: - newop = inliner.inline_op(shop) - self.optimizer.send_extra_operation(newop) - except InvalidLoop: - debug_print("Inlining failed unexpectedly", - "jumping to preamble instead") - assert cell_token.target_tokens[0].virtual_state is None - jumpop.setdescr(cell_token.target_tokens[0]) - self.optimizer.send_extra_operation(jumpop) - return True + try: + for shop in target.short_preamble[1:]: + newop = inliner.inline_op(shop) + self.optimizer.send_extra_operation(newop) + except InvalidLoop: + debug_print("Inlining failed unexpectedly", + "jumping to preamble instead") + assert cell_token.target_tokens[0].virtual_state is None + jumpop.setdescr(cell_token.target_tokens[0]) + self.optimizer.send_extra_operation(jumpop) + return True debug_stop('jit-log-virtualstate') if self.did_import: From noreply at buildbot.pypy.org Tue Nov 8 17:03:28 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:28 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111108160328.5C8FA820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r48946:b449ace83c77 Date: 2011-11-08 15:25 +0100 http://bitbucket.org/pypy/pypy/changeset/b449ace83c77/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3149,7 +3149,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_jitcell_token_count(3) + self.check_trace_count(2) def test_two_loopinvariant_arrays2(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3172,7 +3172,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_jitcell_token_count(3) + self.check_trace_count(2) def test_two_loopinvariant_arrays3(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi @@ -3196,7 +3196,7 @@ return sa res = self.meta_interp(f, [32]) assert res == f(32) - self.check_jitcell_token_count(2) + self.check_trace_count(3) def test_two_loopinvariant_arrays_boxed(self): class A(object): From noreply at buildbot.pypy.org Tue Nov 8 17:03:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:29 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: converted test Message-ID: <20111108160329.8812E820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48947:9021a2a814b1 Date: 2011-11-08 15:40 +0100 http://bitbucket.org/pypy/pypy/changeset/9021a2a814b1/ Log: converted test diff --git a/pypy/jit/metainterp/test/test_string.py b/pypy/jit/metainterp/test/test_string.py --- a/pypy/jit/metainterp/test/test_string.py +++ b/pypy/jit/metainterp/test/test_string.py @@ -168,12 +168,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=0, copystrcontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=4, + strsetitem=0, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=0, - copyunicodecontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=0, call=2, + copyunicodecontent=4, newunicode=2) def test_strconcat_escape_str_char(self): _str, _chr = self._str, self._chr @@ -192,12 +191,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=2, strsetitem=2, + call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=2, newunicode=2) def test_strconcat_escape_char_str(self): _str, _chr = self._str, self._chr @@ -216,12 +214,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=2, + strsetitem=2, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=2, newunicode=2) def test_strconcat_escape_char_char(self): _str, _chr = self._str, self._chr @@ -239,12 +236,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=2, copystrcontent=0, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=0, + strsetitem=4, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=2, - copyunicodecontent=0, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=4, call=2, + copyunicodecontent=0, newunicode=2) def test_strconcat_escape_str_char_str(self): _str, _chr = self._str, self._chr @@ -263,12 +259,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=4, strsetitem=2, + call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=4, newunicode=2) def test_strconcat_guard_fail(self): _str = self._str @@ -325,7 +320,7 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(newstr=0, newunicode=0) + self.check_resops(newunicode=0, newstr=0) def test_str_slice_len_surviving(self): _str = self._str @@ -504,9 +499,9 @@ sys.defaultencoding = _str('utf-8') return sa assert self.meta_interp(f, [8]) == f(8) - self.check_loops({'int_add': 1, 'guard_true': 1, 'int_sub': 1, - 'jump': 1, 'int_is_true': 1, - 'guard_not_invalidated': 1}) + self.check_resops({'jump': 2, 'int_is_true': 2, 'int_add': 2, + 'guard_true': 2, 'guard_not_invalidated': 2, + 'int_sub': 2}) def test_promote_string(self): driver = JitDriver(greens = [], reds = ['n']) @@ -519,7 +514,7 @@ return 0 self.meta_interp(f, [0]) - self.check_loops(call=3 + 1) # one for int2str + self.check_resops(call=7) #class TestOOtype(StringTests, OOJitMixin): # CALL = "oosend" @@ -552,9 +547,8 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(call=1, # escape() - newunicode=1, unicodegetitem=0, - unicodesetitem=1, copyunicodecontent=1) + self.check_resops(unicodesetitem=2, newunicode=2, call=4, + copyunicodecontent=2, unicodegetitem=0) def test_str2unicode_fold(self): _str = self._str @@ -572,9 +566,9 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(call_pure=0, call=1, - newunicode=0, unicodegetitem=0, - unicodesetitem=0, copyunicodecontent=0) + self.check_resops(call_pure=0, unicodesetitem=0, call=2, + newunicode=0, unicodegetitem=0, + copyunicodecontent=0) def test_join_chars(self): jitdriver = JitDriver(reds=['a', 'b', 'c', 'i'], greens=[]) @@ -596,9 +590,8 @@ # The "".join should be unrolled, since the length of x is known since # it is virtual, ensure there are no calls to ll_join_chars, or # allocations. - self.check_loops({ - "guard_true": 5, "int_is_true": 3, "int_lt": 2, "int_add": 2, "jump": 2, - }, everywhere=True) + self.check_resops({'jump': 2, 'guard_true': 5, 'int_lt': 2, + 'int_add': 2, 'int_is_true': 3}) def test_virtual_copystringcontent(self): jitdriver = JitDriver(reds=['n', 'result'], greens=[]) From noreply at buildbot.pypy.org Tue Nov 8 17:03:30 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:30 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: converted test Message-ID: <20111108160330.B3BD9820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48948:280c132cc8a4 Date: 2011-11-08 15:52 +0100 http://bitbucket.org/pypy/pypy/changeset/280c132cc8a4/ Log: converted test diff --git a/pypy/jit/metainterp/test/test_virtualizable.py b/pypy/jit/metainterp/test/test_virtualizable.py --- a/pypy/jit/metainterp/test/test_virtualizable.py +++ b/pypy/jit/metainterp/test/test_virtualizable.py @@ -77,7 +77,7 @@ return xy.inst_x res = self.meta_interp(f, [20]) assert res == 30 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_preexisting_access_2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -102,7 +102,7 @@ assert f(5) == 185 res = self.meta_interp(f, [5]) assert res == 185 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_two_paths_access(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -124,7 +124,7 @@ return xy.inst_x res = self.meta_interp(f, [18]) assert res == 10118 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_synchronize_in_return(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -146,7 +146,7 @@ return xy.inst_x res = self.meta_interp(f, [18]) assert res == 10180 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_virtualizable_and_greens(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n', 'xy'], @@ -174,7 +174,7 @@ return res res = self.meta_interp(f, [40]) assert res == 50 * 4 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_double_frame(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy', 'other'], @@ -197,8 +197,7 @@ return xy.inst_x res = self.meta_interp(f, [20]) assert res == 134 - self.check_loops(getfield_gc=0, setfield_gc=1) - self.check_loops(getfield_gc=1, setfield_gc=2, everywhere=True) + self.check_resops(setfield_gc=2, getfield_gc=1) # ------------------------------ @@ -248,8 +247,8 @@ return xy2.inst_l1[2] res = self.meta_interp(f, [16]) assert res == 3001 + 16 * 80 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, setarrayitem_gc=0) + self.check_resops(setarrayitem_gc=0, setfield_gc=0, + getarrayitem_gc=0, getfield_gc=0) def test_synchronize_arrays_in_return(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -279,8 +278,7 @@ assert f(18) == 10360 res = self.meta_interp(f, [18]) assert res == 10360 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, getfield_gc=0) def test_array_length(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -306,8 +304,8 @@ return xy2.inst_l1[1] res = self.meta_interp(f, [18]) assert res == 2941309 + 18 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, arraylen_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, + arraylen_gc=0, getfield_gc=0) def test_residual_function(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -340,8 +338,8 @@ return xy2.inst_l1[1] res = self.meta_interp(f, [18]) assert res == 2941309 + 18 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, arraylen_gc=1, call=1) + self.check_resops(call=2, setfield_gc=0, getarrayitem_gc=0, + arraylen_gc=2, getfield_gc=0) def test_double_frame_array(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2', 'other'], @@ -377,8 +375,8 @@ expected = f(20) res = self.meta_interp(f, [20], enable_opts='') assert res == expected - self.check_loops(getfield_gc=1, setfield_gc=0, - arraylen_gc=1, getarrayitem_gc=1, setarrayitem_gc=1) + self.check_resops(setarrayitem_gc=1, setfield_gc=0, + getarrayitem_gc=1, arraylen_gc=1, getfield_gc=1) # ------------------------------ @@ -425,8 +423,7 @@ assert f(18) == 10360 res = self.meta_interp(f, [18]) assert res == 10360 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, getfield_gc=0) # ------------------------------ @@ -460,8 +457,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) - + self.check_resops(setfield_gc=0, getfield_gc=0) def test_virtualizable_with_array(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'x', 'frame'], @@ -495,8 +491,7 @@ res = self.meta_interp(f, [10, 1], listops=True) assert res == f(10, 1) - self.check_loops(getarrayitem_gc=0) - + self.check_resops(getarrayitem_gc=0) def test_subclass_of_virtualizable(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -524,8 +519,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) - + self.check_resops(setfield_gc=0, getfield_gc=0) def test_external_pass(self): jitdriver = JitDriver(greens = [], reds = ['n', 'z', 'frame'], @@ -1011,8 +1005,8 @@ res = self.meta_interp(f, [70], listops=True) assert res == intmask(42 ** 70) - self.check_loops(int_add=0, - int_sub=1) # for 'n -= 1' only + self.check_resops(int_add=0, + int_sub=2) # for 'n -= 1' only def test_simple_access_directly(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1043,7 +1037,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) from pypy.jit.backend.test.support import BaseCompiledMixin if isinstance(self, BaseCompiledMixin): @@ -1098,7 +1092,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_check_for_nonstandardness_only_once(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1132,7 +1126,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(new_with_vtable=0, ptr_eq=1, everywhere=True) + self.check_resops(new_with_vtable=0, ptr_eq=1) self.check_history(ptr_eq=2) def test_virtual_child_frame_with_arrays(self): @@ -1165,7 +1159,7 @@ res = self.meta_interp(f, [10], listops=True) assert res == 55 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_blackhole_should_not_pay_attention(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1203,7 +1197,7 @@ res = self.meta_interp(f, [10]) assert res == 155 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_blackhole_should_synchronize(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1239,7 +1233,7 @@ res = self.meta_interp(f, [10]) assert res == 155 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_blackhole_should_not_reenter(self): if not self.basic: From noreply at buildbot.pypy.org Tue Nov 8 17:03:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:31 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: fix indentation Message-ID: <20111108160331.E1699820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48949:444442eb7a0e Date: 2011-11-08 15:55 +0100 http://bitbucket.org/pypy/pypy/changeset/444442eb7a0e/ Log: fix indentation diff --git a/pypy/jit/metainterp/test/test_virtualizable.py b/pypy/jit/metainterp/test/test_virtualizable.py --- a/pypy/jit/metainterp/test/test_virtualizable.py +++ b/pypy/jit/metainterp/test/test_virtualizable.py @@ -1095,39 +1095,39 @@ self.check_resops(new_with_vtable=0) def test_check_for_nonstandardness_only_once(self): - myjitdriver = JitDriver(greens = [], reds = ['frame'], - virtualizables = ['frame']) + myjitdriver = JitDriver(greens = [], reds = ['frame'], + virtualizables = ['frame']) - class Frame(object): - _virtualizable2_ = ['x', 'y', 'z'] + class Frame(object): + _virtualizable2_ = ['x', 'y', 'z'] - def __init__(self, x, y, z=1): - self = hint(self, access_directly=True) - self.x = x - self.y = y - self.z = z + def __init__(self, x, y, z=1): + self = hint(self, access_directly=True) + self.x = x + self.y = y + self.z = z - class SomewhereElse: - pass - somewhere_else = SomewhereElse() + class SomewhereElse: + pass + somewhere_else = SomewhereElse() - def f(n): - frame = Frame(n, 0) - somewhere_else.top_frame = frame # escapes - frame = hint(frame, access_directly=True) - while frame.x > 0: - myjitdriver.can_enter_jit(frame=frame) - myjitdriver.jit_merge_point(frame=frame) - top_frame = somewhere_else.top_frame - child_frame = Frame(frame.x, top_frame.z, 17) - frame.y += child_frame.x - frame.x -= top_frame.z - return somewhere_else.top_frame.y - - res = self.meta_interp(f, [10]) - assert res == 55 - self.check_resops(new_with_vtable=0, ptr_eq=1) - self.check_history(ptr_eq=2) + def f(n): + frame = Frame(n, 0) + somewhere_else.top_frame = frame # escapes + frame = hint(frame, access_directly=True) + while frame.x > 0: + myjitdriver.can_enter_jit(frame=frame) + myjitdriver.jit_merge_point(frame=frame) + top_frame = somewhere_else.top_frame + child_frame = Frame(frame.x, top_frame.z, 17) + frame.y += child_frame.x + frame.x -= top_frame.z + return somewhere_else.top_frame.y + + res = self.meta_interp(f, [10]) + assert res == 55 + self.check_resops(new_with_vtable=0, ptr_eq=1) + self.check_history(ptr_eq=2) def test_virtual_child_frame_with_arrays(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], From noreply at buildbot.pypy.org Tue Nov 8 17:03:33 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:33 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: converted test Message-ID: <20111108160333.180C5820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48950:4f2ecb448124 Date: 2011-11-08 16:03 +0100 http://bitbucket.org/pypy/pypy/changeset/4f2ecb448124/ Log: converted test diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -73,8 +73,7 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -103,7 +102,7 @@ assert f(100, 7) == 721 res = self.meta_interp(f, [100, 7]) assert res == 721 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=3) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -134,8 +133,7 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0) def test_change_during_tracing_1(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -160,7 +158,7 @@ assert f(100, 7) == 721 res = self.meta_interp(f, [100, 7]) assert res == 721 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=2) def test_change_during_tracing_2(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -186,7 +184,7 @@ assert f(100, 7) == 700 res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=2) def test_change_invalidate_reentering(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -212,7 +210,7 @@ assert g(100, 7) == 700707 res = self.meta_interp(g, [100, 7]) assert res == 700707 - self.check_loops(guard_not_invalidated=2, getfield_gc=0) + self.check_resops(guard_not_invalidated=4, getfield_gc=0) def test_invalidate_while_running(self): jitdriver = JitDriver(greens=['foo'], reds=['i', 'total']) @@ -324,8 +322,8 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=4, getfield_gc=0, - call_may_force=0, guard_not_forced=0) + self.check_resops(guard_not_invalidated=8, guard_not_forced=0, + call_may_force=0, getfield_gc=0) def test_list_simple_1(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -347,9 +345,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - everywhere=True) + self.check_resops(getarrayitem_gc_pure=0, guard_not_invalidated=2, + getarrayitem_gc=0, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -385,9 +382,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 714 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - arraylen_gc=0, everywhere=True) + self.check_resops(getarrayitem_gc_pure=0, guard_not_invalidated=2, + arraylen_gc=0, getarrayitem_gc=0, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -421,9 +417,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0, + getarrayitem_gc=0, getarrayitem_gc_pure=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -460,9 +455,9 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=4, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - call_may_force=0, guard_not_forced=0) + self.check_resops(call_may_force=0, getfield_gc=0, + getarrayitem_gc_pure=0, guard_not_forced=0, + getarrayitem_gc=0, guard_not_invalidated=8) def test_invalidated_loop_is_not_used_any_more_as_target(self): myjitdriver = JitDriver(greens=['foo'], reds=['x']) From noreply at buildbot.pypy.org Tue Nov 8 17:03:34 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:34 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: converted test Message-ID: <20111108160334.44485820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48951:88ca4d9cb01f Date: 2011-11-08 16:07 +0100 http://bitbucket.org/pypy/pypy/changeset/88ca4d9cb01f/ Log: converted test diff --git a/pypy/jit/metainterp/test/test_virtualref.py b/pypy/jit/metainterp/test/test_virtualref.py --- a/pypy/jit/metainterp/test/test_virtualref.py +++ b/pypy/jit/metainterp/test/test_virtualref.py @@ -171,7 +171,7 @@ return 1 # self.meta_interp(f, [10]) - self.check_loops(new_with_vtable=1) # the vref + self.check_resops(new_with_vtable=2) # the vref self.check_aborted_count(0) def test_simple_all_removed(self): @@ -205,8 +205,7 @@ virtual_ref_finish(vref, xy) # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=0, # all virtualized - new_array=0) + self.check_resops(new_with_vtable=0, new_array=0) self.check_aborted_count(0) def test_simple_no_access(self): @@ -242,7 +241,7 @@ virtual_ref_finish(vref, xy) # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=1, # the vref: xy doesn't need to be forced + self.check_resops(new_with_vtable=2, # the vref: xy doesn't need to be forced new_array=0) # and neither xy.next1/2/3 self.check_aborted_count(0) @@ -280,8 +279,8 @@ exctx.topframeref = vref_None # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=2, # XY(), the vref - new_array=3) # next1/2/3 + self.check_resops(new_with_vtable=4, # XY(), the vref + new_array=6) # next1/2/3 self.check_aborted_count(0) def test_simple_force_sometimes(self): @@ -320,8 +319,8 @@ # res = self.meta_interp(f, [30]) assert res == 13 - self.check_loops(new_with_vtable=1, # the vref, but not XY() - new_array=0) # and neither next1/2/3 + self.check_resops(new_with_vtable=2, # the vref, but not XY() + new_array=0) # and neither next1/2/3 self.check_loop_count(1) self.check_aborted_count(0) @@ -362,7 +361,7 @@ # res = self.meta_interp(f, [30]) assert res == 13 - self.check_loops(new_with_vtable=0, # all virtualized in the n!=13 loop + self.check_resops(new_with_vtable=0, # all virtualized in the n!=13 loop new_array=0) self.check_loop_count(1) self.check_aborted_count(0) @@ -412,7 +411,7 @@ res = self.meta_interp(f, [72]) assert res == 6 self.check_loop_count(2) # the loop and the bridge - self.check_loops(new_with_vtable=2, # loop: nothing; bridge: vref, xy + self.check_resops(new_with_vtable=2, # loop: nothing; bridge: vref, xy new_array=2) # bridge: next4, next5 self.check_aborted_count(0) @@ -442,8 +441,8 @@ # res = self.meta_interp(f, [15]) assert res == 1 - self.check_loops(new_with_vtable=2, # vref, xy - new_array=1) # next1 + self.check_resops(new_with_vtable=4, # vref, xy + new_array=2) # next1 self.check_aborted_count(0) def test_recursive_call_1(self): @@ -543,7 +542,7 @@ # res = self.meta_interp(f, [15]) assert res == 1 - self.check_loops(new_with_vtable=2) # vref, xy + self.check_resops(new_with_vtable=4) # vref, xy def test_cannot_use_invalid_virtualref(self): myjitdriver = JitDriver(greens = [], reds = ['n']) From noreply at buildbot.pypy.org Tue Nov 8 17:03:35 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 8 Nov 2011 17:03:35 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: convreted tests Message-ID: <20111108160335.75F16820C4@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r48952:5b50039bad35 Date: 2011-11-08 16:39 +0100 http://bitbucket.org/pypy/pypy/changeset/5b50039bad35/ Log: convreted tests diff --git a/pypy/jit/metainterp/test/test_del.py b/pypy/jit/metainterp/test/test_del.py --- a/pypy/jit/metainterp/test/test_del.py +++ b/pypy/jit/metainterp/test/test_del.py @@ -20,12 +20,12 @@ n -= 1 return 42 self.meta_interp(f, [20]) - self.check_loops({'call': 2, # calls to a helper function - 'guard_no_exception': 2, # follows the calls - 'int_sub': 1, - 'int_gt': 1, - 'guard_true': 1, - 'jump': 1}) + self.check_resops({'call': 4, # calls to a helper function + 'guard_no_exception': 4, # follows the calls + 'int_sub': 2, + 'int_gt': 2, + 'guard_true': 2, + 'jump': 2}) def test_class_of_allocated(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'x']) @@ -78,7 +78,7 @@ return 1 res = self.meta_interp(f, [20], enable_opts='') assert res == 1 - self.check_loops(call=1) # for the case B(), but not for the case A() + self.check_resops(call=1) # for the case B(), but not for the case A() class TestLLtype(DelTests, LLJitMixin): @@ -103,7 +103,7 @@ break return 42 self.meta_interp(f, [20]) - self.check_loops(getfield_raw=1, setfield_raw=1, call=0, call_pure=0) + self.check_resops(call_pure=0, setfield_raw=2, call=0, getfield_raw=2) class TestOOtype(DelTests, OOJitMixin): def setup_class(cls): diff --git a/pypy/jit/metainterp/test/test_dict.py b/pypy/jit/metainterp/test/test_dict.py --- a/pypy/jit/metainterp/test/test_dict.py +++ b/pypy/jit/metainterp/test/test_dict.py @@ -91,7 +91,7 @@ res1 = f(100) res2 = self.meta_interp(f, [100], listops=True) assert res1 == res2 - self.check_loops(int_mod=1) # the hash was traced and eq, but cached + self.check_resops(int_mod=2) # the hash was traced and eq, but cached def test_dict_setdefault(self): myjitdriver = JitDriver(greens = [], reds = ['total', 'dct']) @@ -107,7 +107,7 @@ assert f(100) == 50 res = self.meta_interp(f, [100], listops=True) assert res == 50 - self.check_loops(new=0, new_with_vtable=0) + self.check_resops(new=0, new_with_vtable=0) def test_dict_as_counter(self): myjitdriver = JitDriver(greens = [], reds = ['total', 'dct']) @@ -128,7 +128,7 @@ assert f(100) == 50 res = self.meta_interp(f, [100], listops=True) assert res == 50 - self.check_loops(int_mod=1) # key + eq, but cached + self.check_resops(int_mod=2) # key + eq, but cached def test_repeated_lookup(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'd']) @@ -153,12 +153,13 @@ res = self.meta_interp(f, [100], listops=True) assert res == f(50) - self.check_loops({"call": 5, "getfield_gc": 1, "getinteriorfield_gc": 1, - "guard_false": 1, "guard_no_exception": 4, - "guard_true": 1, "int_and": 1, "int_gt": 1, - "int_is_true": 1, "int_sub": 1, "jump": 1, - "new_with_vtable": 1, "new": 1, "new_array": 1, - "setfield_gc": 3, }) + self.check_resops({'new_array': 2, 'getfield_gc': 2, + 'guard_true': 2, 'jump': 2, + 'new_with_vtable': 2, 'getinteriorfield_gc': 2, + 'setfield_gc': 6, 'int_gt': 2, 'int_sub': 2, + 'call': 10, 'int_and': 2, + 'guard_no_exception': 8, 'new': 2, + 'guard_false': 2, 'int_is_true': 2}) class TestOOtype(DictTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -68,23 +68,23 @@ 'byval': False} supported = all(d[check] for check in jitif) if supported: - self.check_loops( - call_release_gil=1, # a CALL_RELEASE_GIL, and no other CALLs + self.check_resops( + call_release_gil=2, # a CALL_RELEASE_GIL, and no other CALLs call=0, call_may_force=0, - guard_no_exception=1, - guard_not_forced=1, - int_add=1, - int_lt=1, - guard_true=1, - jump=1) + guard_no_exception=2, + guard_not_forced=2, + int_add=2, + int_lt=2, + guard_true=2, + jump=2) else: - self.check_loops( + self.check_resops( call_release_gil=0, # no CALL_RELEASE_GIL - int_add=1, - int_lt=1, - guard_true=1, - jump=1) + int_add=2, + int_lt=2, + guard_true=2, + jump=2) return res def test_byval_result(self): diff --git a/pypy/jit/metainterp/test/test_greenfield.py b/pypy/jit/metainterp/test/test_greenfield.py --- a/pypy/jit/metainterp/test/test_greenfield.py +++ b/pypy/jit/metainterp/test/test_greenfield.py @@ -25,7 +25,7 @@ res = self.meta_interp(g, [7]) assert res == -2 self.check_loop_count(2) - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) def test_green_field_2(self): myjitdriver = JitDriver(greens=['ctx.x'], reds=['ctx']) @@ -50,7 +50,7 @@ res = self.meta_interp(g, [7]) assert res == -22 self.check_loop_count(6) - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) class TestLLtypeGreenFieldsTests(GreenFieldsTests, LLJitMixin): diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -88,7 +88,7 @@ assert res == loop2(4, 40) # we expect only one int_sub, corresponding to the single # compiled instance of loop1() - self.check_loops(int_sub=1) + self.check_resops(int_sub=2) # the following numbers are not really expectations of the test # itself, but just the numbers that we got after looking carefully # at the generated machine code @@ -154,7 +154,7 @@ res = self.meta_interp(loop2, [4, 40], repeat=7, inline=True) assert res == loop2(4, 40) # we expect no int_sub, but a residual call - self.check_loops(int_sub=0, call=1) + self.check_resops(call=2, int_sub=0) def test_multiple_jits_trace_too_long(self): myjitdriver1 = JitDriver(greens=["n"], reds=["i", "box"]) diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -6,8 +6,8 @@ class ListTests: def check_all_virtualized(self): - self.check_loops(new_array=0, setarrayitem_gc=0, getarrayitem_gc=0, - arraylen_gc=0) + self.check_resops(setarrayitem_gc=0, new_array=0, arraylen_gc=0, + getarrayitem_gc=0) def test_simple_array(self): jitdriver = JitDriver(greens = [], reds = ['n']) @@ -20,7 +20,7 @@ return n res = self.meta_interp(f, [10], listops=True) assert res == 0 - self.check_loops(int_sub=1) + self.check_resops(int_sub=2) self.check_all_virtualized() def test_list_pass_around(self): @@ -56,7 +56,8 @@ res = self.meta_interp(f, [10], listops=True) assert res == f(10) # one setitem should be gone by now - self.check_loops(call=1, setarrayitem_gc=2, getarrayitem_gc=1) + self.check_resops(setarrayitem_gc=4, getarrayitem_gc=2, call=2) + def test_ll_fixed_setitem_fast(self): jitdriver = JitDriver(greens = [], reds = ['n', 'l']) @@ -93,7 +94,7 @@ res = self.meta_interp(f, [10], listops=True) assert res == f(10) - self.check_loops(setarrayitem_gc=0, getarrayitem_gc=0, call=0) + self.check_resops(setarrayitem_gc=0, call=0, getarrayitem_gc=0) def test_vlist_alloc_and_set(self): # the check_loops fails, because [non-null] * n is not supported yet @@ -141,7 +142,7 @@ res = self.meta_interp(f, [5], listops=True) assert res == 7 - self.check_loops(call=0) + self.check_resops(call=0) def test_fold_getitem_1(self): jitdriver = JitDriver(greens = ['pc', 'n', 'l'], reds = ['total']) @@ -161,7 +162,7 @@ res = self.meta_interp(f, [4], listops=True) assert res == f(4) - self.check_loops(call=0) + self.check_resops(call=0) def test_fold_getitem_2(self): jitdriver = JitDriver(greens = ['pc', 'n', 'l'], reds = ['total', 'x']) @@ -186,7 +187,7 @@ res = self.meta_interp(f, [4], listops=True) assert res == f(4) - self.check_loops(call=0, getfield_gc=0) + self.check_resops(call=0, getfield_gc=0) def test_fold_indexerror(self): jitdriver = JitDriver(greens = [], reds = ['total', 'n', 'lst']) @@ -206,7 +207,7 @@ res = self.meta_interp(f, [15], listops=True) assert res == f(15) - self.check_loops(guard_exception=0) + self.check_resops(guard_exception=0) def test_virtual_resize(self): jitdriver = JitDriver(greens = [], reds = ['n', 's']) @@ -224,9 +225,8 @@ return s res = self.meta_interp(f, [15], listops=True) assert res == f(15) - self.check_loops({"int_add": 1, "int_sub": 1, "int_gt": 1, - "guard_true": 1, "jump": 1}) - + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'guard_true': 2, 'int_sub': 2}) class TestOOtype(ListTests, OOJitMixin): pass @@ -258,4 +258,4 @@ assert res == f(37) # There is the one actual field on a, plus several fields on the list # itself - self.check_loops(getfield_gc=10, everywhere=True) + self.check_resops(getfield_gc=10) diff --git a/pypy/jit/metainterp/test/test_slist.py b/pypy/jit/metainterp/test/test_slist.py --- a/pypy/jit/metainterp/test/test_slist.py +++ b/pypy/jit/metainterp/test/test_slist.py @@ -76,7 +76,7 @@ return lst[i] res = self.meta_interp(f, [21], listops=True) assert res == f(21) - self.check_loops(call=0) + self.check_resops(call=0) def test_getitem_neg(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'n']) @@ -92,7 +92,7 @@ return x res = self.meta_interp(f, [-2], listops=True) assert res == 41 - self.check_loops(call=0, guard_value=0) + self.check_resops(call=0, guard_value=0) # we don't support resizable lists on ootype #class TestOOtype(ListTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_tl.py b/pypy/jit/metainterp/test/test_tl.py --- a/pypy/jit/metainterp/test/test_tl.py +++ b/pypy/jit/metainterp/test/test_tl.py @@ -72,16 +72,16 @@ res = self.meta_interp(main, [0, 6], listops=True, backendopt=True) assert res == 5040 - self.check_loops({'int_mul':1, 'jump':1, - 'int_sub':1, 'int_le':1, 'guard_false':1}) + self.check_resops({'jump': 2, 'int_le': 2, 'guard_value': 1, + 'int_mul': 2, 'guard_false': 2, 'int_sub': 2}) def test_tl_2(self): main = self._get_main() res = self.meta_interp(main, [1, 10], listops=True, backendopt=True) assert res == main(1, 10) - self.check_loops({'int_sub':1, 'int_le':1, - 'guard_false':1, 'jump':1}) + self.check_resops({'int_le': 2, 'int_sub': 2, 'jump': 2, + 'guard_false': 2, 'guard_value': 1}) def test_tl_call(self, listops=True, policy=None): from pypy.jit.tl.tl import interp diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -103,12 +103,12 @@ # check that the set_param will override the default res = self.meta_interp(f, [10, llstr('')]) assert res == 0 - self.check_loops(new_with_vtable=1) + self.check_resops(new_with_vtable=1) res = self.meta_interp(f, [10, llstr(ALL_OPTS_NAMES)], enable_opts='') assert res == 0 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_unwanted_loops(self): mydriver = JitDriver(reds = ['n', 'total', 'm'], greens = []) @@ -163,7 +163,7 @@ return n self.meta_interp(f, [50], backendopt=True) self.check_enter_count_at_most(2) - self.check_loops(call=0) + self.check_resops(call=0) def test_loop_header(self): # artificial test: we enter into the JIT only when can_enter_jit() @@ -187,7 +187,7 @@ assert f(15) == 1 res = self.meta_interp(f, [15], backendopt=True) assert res == 1 - self.check_loops(int_add=1) # I get 13 without the loop_header() + self.check_resops(int_add=2) # I get 13 without the loop_header() def test_omit_can_enter_jit(self): # Simple test comparing the effects of always giving a can_enter_jit(), @@ -249,8 +249,8 @@ m = m - 1 self.meta_interp(f1, [8]) self.check_loop_count(1) - self.check_loops({'int_sub': 1, 'int_gt': 1, 'guard_true': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) def test_void_red_variable(self): mydriver = JitDriver(greens=[], reds=['a', 'm']) From noreply at buildbot.pypy.org Tue Nov 8 17:26:48 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 8 Nov 2011 17:26:48 +0100 (CET) Subject: [pypy-commit] pypy py3k: Convert from __nonzero__ to __bool__. Message-ID: <20111108162648.50B85820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48953:442dd206f22d Date: 2011-11-08 11:25 -0500 http://bitbucket.org/pypy/pypy/changeset/442dd206f22d/ Log: Convert from __nonzero__ to __bool__. diff --git a/lib_pypy/_ctypes/function.py b/lib_pypy/_ctypes/function.py --- a/lib_pypy/_ctypes/function.py +++ b/lib_pypy/_ctypes/function.py @@ -157,11 +157,11 @@ callable(restype)): raise TypeError("restype must be a type, a callable, or None") self._restype_ = restype - + def _delrestype(self): self._ptr = None del self._restype_ - + restype = property(_getrestype, _setrestype, _delrestype) def _geterrcheck(self): @@ -221,7 +221,7 @@ self._check_argtypes_for_fastpath() return - + # A callback into python if callable(argument) and not argsl: self.callable = argument @@ -274,7 +274,7 @@ for argtype, arg in zip(argtypes, args)] return to_call(*args) return f - + def __call__(self, *args, **kwargs): argtypes = self._argtypes_ if self.callable is not None: @@ -405,7 +405,7 @@ ffiargs = [argtype.get_ffi_argtype() for argtype in argtypes] ffires = restype.get_ffi_argtype() return _ffi.FuncPtr.fromaddr(ptr, '', ffiargs, ffires) - + cdll = self.dll._handle try: ffi_argtypes = [argtype.get_ffi_argtype() for argtype in argtypes] @@ -439,7 +439,7 @@ if isinstance(argtype, _CDataMeta): cobj, ffiparam = argtype.get_ffi_param(arg) return cobj, ffiparam, argtype - + if argtype is not None: arg = argtype.from_param(arg) if hasattr(arg, '_as_parameter_'): @@ -570,7 +570,7 @@ @staticmethod def _is_primitive(argtype): return argtype.__bases__[0] is _SimpleCData - + def _wrap_result(self, restype, result): """ Convert from low-level repr of the result to the high-level python @@ -630,7 +630,7 @@ return retval - def __nonzero__(self): + def __bool__(self): return self._com_index is not None or bool(self._buffer[0]) def __del__(self): diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -111,7 +111,7 @@ store_reference(self, index, cobj._objects) self._subarray(index)[0] = cobj._get_buffer_value() - def __nonzero__(self): + def __bool__(self): return self._buffer[0] != 0 contents = property(getcontents, setcontents) diff --git a/lib_pypy/_ctypes/primitive.py b/lib_pypy/_ctypes/primitive.py --- a/lib_pypy/_ctypes/primitive.py +++ b/lib_pypy/_ctypes/primitive.py @@ -186,8 +186,8 @@ elif value is None: value = 0 self._buffer[0] = value - result.value = property(_getvalue, _setvalue) - + result.value = property(_getvalue, _setvalue) + elif tp == 'u': def _setvalue(self, val): if isinstance(val, str): @@ -264,7 +264,7 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) result._as_ffi_pointer_ = _as_ffi_pointer_ - + return result from_address = cdata_from_address @@ -272,7 +272,7 @@ def from_param(self, value): if isinstance(value, self): return value - + from_param_f = FROM_PARAM_BY_TYPE.get(self._type_) if from_param_f: res = from_param_f(self, value) @@ -291,7 +291,7 @@ if self.__bases__[0] is _SimpleCData: return output.value return output - + def _sizeofinstances(self): return _rawffi.sizeof(self._type_) @@ -338,7 +338,7 @@ return "<%s object at 0x%x>" % (type(self).__name__, id(self)) - def __nonzero__(self): + def __bool__(self): return self._buffer[0] not in (0, '\x00') from _ctypes.function import CFuncPtr diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -87,7 +87,7 @@ else: return args - def __nonzero__(self): + def __bool__(self): return self.__main or _continulet.is_pending(self) @property diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1483,7 +1483,7 @@ ('trunc', 'trunc', 1, ['__trunc__']), ('pos', 'pos', 1, ['__pos__']), ('neg', 'neg', 1, ['__neg__']), - ('nonzero', 'truth', 1, ['__nonzero__']), + ('nonzero', 'truth', 1, ['__bool__']), ('abs' , 'abs', 1, ['__abs__']), ('hex', 'hex', 1, ['__hex__']), ('oct', 'oct', 1, ['__oct__']), diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -189,7 +189,7 @@ def test_all(self): class TestFailingBool(object): - def __nonzero__(self): + def __bool__(self): raise RuntimeError class TestFailingIter(object): def __iter__(self): @@ -211,7 +211,7 @@ def test_any(self): class TestFailingBool(object): - def __nonzero__(self): + def __bool__(self): raise RuntimeError class TestFailingIter(object): def __iter__(self): diff --git a/pypy/module/_winreg/interp_winreg.py b/pypy/module/_winreg/interp_winreg.py --- a/pypy/module/_winreg/interp_winreg.py +++ b/pypy/module/_winreg/interp_winreg.py @@ -23,7 +23,7 @@ def as_int(self): return rffi.cast(rffi.SIZE_T, self.hkey) - def descr_nonzero(self, space): + def descr_bool(self, space): return space.wrap(self.as_int() != 0) def descr_handle_get(self, space): @@ -87,14 +87,14 @@ handle - The integer Win32 handle. Operations: -__nonzero__ - Handles with an open object return true, otherwise false. +__bool__ - Handles with an open object return true, otherwise false. __int__ - Converting a handle to an integer returns the Win32 handle. __cmp__ - Handle objects are compared using the handle value.""", __new__ = descr_HKEY_new, __del__ = interp2app(W_HKEY.descr_del), __repr__ = interp2app(W_HKEY.descr_repr), __int__ = interp2app(W_HKEY.descr_int), - __nonzero__ = interp2app(W_HKEY.descr_nonzero), + __bool__ = interp2app(W_HKEY.descr_bool), __enter__ = interp2app(W_HKEY.descr__enter__), __exit__ = interp2app(W_HKEY.descr__exit__), handle = GetSetProperty(W_HKEY.descr_handle_get), diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -473,7 +473,7 @@ UNSLOT("__pos__", nb_positive, slot_nb_positive, wrap_unaryfunc, "+x"), UNSLOT("__abs__", nb_absolute, slot_nb_absolute, wrap_unaryfunc, "abs(x)"), - UNSLOT("__nonzero__", nb_nonzero, slot_nb_nonzero, wrap_inquirypred, + UNSLOT("__bool__", nb_bool, slot_nb_bool, wrap_inquirypred, "x != 0"), UNSLOT("__invert__", nb_invert, slot_nb_invert, wrap_unaryfunc, "~x"), BINSLOT("__lshift__", nb_lshift, slot_nb_lshift, "<<"), diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -20,7 +20,7 @@ def test_exception(self, space, api): class C: - def __nonzero__(self): + def __bool__(self): raise ValueError assert api.PyObject_IsTrue(space.wrap(C())) == -1 @@ -90,27 +90,27 @@ def test_size(self, space, api): assert api.PyObject_Size(space.newlist([space.w_None])) == 1 - + def test_repr(self, space, api): w_list = space.newlist([space.w_None, space.wrap(42)]) assert space.str_w(api.PyObject_Repr(w_list)) == "[None, 42]" assert space.str_w(api.PyObject_Repr(space.wrap("a"))) == "'a'" - + w_list = space.newlist([space.w_None, space.wrap(42)]) assert space.str_w(api.PyObject_Str(w_list)) == "[None, 42]" assert space.str_w(api.PyObject_Str(space.wrap("a"))) == "a" - + def test_RichCompare(self, space, api): def compare(w_o1, w_o2, opid): res = api.PyObject_RichCompareBool(w_o1, w_o2, opid) w_res = api.PyObject_RichCompare(w_o1, w_o2, opid) assert space.is_true(w_res) == res return res - + def test_compare(o1, o2): w_o1 = space.wrap(o1) w_o2 = space.wrap(o2) - + for opid, expected in [ (Py_LT, o1 < o2), (Py_LE, o1 <= o2), (Py_NE, o1 != o2), (Py_EQ, o1 == o2), @@ -120,12 +120,12 @@ test_compare(1, 2) test_compare(2, 2) test_compare('2', '1') - + w_i = space.wrap(1) assert api.PyObject_RichCompareBool(w_i, w_i, 123456) == -1 assert api.PyErr_Occurred() is space.w_SystemError api.PyErr_Clear() - + def test_IsInstance(self, space, api): assert api.PyObject_IsInstance(space.wrap(1), space.w_int) == 1 assert api.PyObject_IsInstance(space.wrap(1), space.w_float) == 0 @@ -158,7 +158,7 @@ return File""") w_f = space.call_function(w_File) assert api.PyObject_AsFileDescriptor(w_f) == 42 - + def test_hash(self, space, api): assert api.PyObject_Hash(space.wrap(72)) == 72 assert api.PyObject_Hash(space.wrap(-1)) == -1 diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -222,7 +222,7 @@ return space.get_and_call_function(w_descr, w_obj, w_name) def is_true(space, w_obj): - method = "__nonzero__" + method = "__bool__" w_descr = space.lookup(w_obj, method) if w_descr is None: method = "__len__" diff --git a/pypy/objspace/std/builtinshortcut.py b/pypy/objspace/std/builtinshortcut.py --- a/pypy/objspace/std/builtinshortcut.py +++ b/pypy/objspace/std/builtinshortcut.py @@ -50,7 +50,7 @@ def filter_out_conversions(typeorder): res = {} - for cls, order in typeorder.iteritems(): + for cls, order in typeorder.iteritems(): res[cls] = [(target_type, converter) for (target_type, converter) in order if converter is None] return res @@ -113,7 +113,7 @@ except FailedToImplement: pass else: - # the __nonzero__ method of built-in objects should + # the __bool__ method of built-in objects should # always directly return a Bool; however, the __len__ method # of built-in objects typically returns an unwrappable integer if isinstance(w_res, W_BoolObject): diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -228,7 +228,7 @@ class myint(int): pass class X(object): - def __nonzero__(self): + def __bool__(self): return myint(1) raises(TypeError, "not X()") @@ -640,9 +640,9 @@ def test_truth_of_long(self): class X(object): def __len__(self): return 1L - __nonzero__ = __len__ + __bool__ = __len__ assert X() - del X.__nonzero__ + del X.__bool__ assert X() def test_len_overflow(self): From noreply at buildbot.pypy.org Tue Nov 8 17:26:49 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 8 Nov 2011 17:26:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: merged upstream Message-ID: <20111108162649.8888F820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48954:5c7c60f852c7 Date: 2011-11-08 11:26 -0500 http://bitbucket.org/pypy/pypy/changeset/5c7c60f852c7/ Log: merged upstream diff --git a/pypy/module/_io/test/test_textio.py b/pypy/module/_io/test/test_textio.py --- a/pypy/module/_io/test/test_textio.py +++ b/pypy/module/_io/test/test_textio.py @@ -215,6 +215,26 @@ # that subprocess.Popen() can have the required unbuffered # semantics with universal_newlines=True. import _io + raw = self.get_MockRawIO()([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n') + # Reads + assert txt.read(4) == 'abcd' + assert txt.readline() == 'efghi\n' + assert list(txt) == ['jkl\n', 'opq\n'] + + def test_rawio_write_through(self): + # Issue #12591: with write_through=True, writes don't need a flush + import _io + raw = self.get_MockRawIO()([b'abc', b'def', b'ghi\njkl\nopq\n']) + txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n', + write_through=True) + txt.write('1') + txt.write('23\n4') + txt.write('5') + assert b''.join(raw._write_stack) == b'123\n45' + + def w_get_MockRawIO(self): + import _io class MockRawIO(_io._RawIOBase): def __init__(self, read_stack=()): self._read_stack = list(read_stack) @@ -275,24 +295,7 @@ except: self._extraneous_reads += 1 return b"" - - raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) - txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n') - # Reads - assert txt.read(4) == 'abcd' - assert txt.readline() == 'efghi\n' - assert list(txt) == ['jkl\n', 'opq\n'] -# -# def test_rawio_write_through(self): -# # Issue #12591: with write_through=True, writes don't need a flush -# import _io - raw = MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n']) - txt = _io.TextIOWrapper(raw, encoding='ascii', newline='\n', - write_through=True) - txt.write('1') - txt.write('23\n4') - txt.write('5') - assert b''.join(raw._write_stack) == b'123\n45' + return MockRawIO class AppTestIncrementalNewlineDecoder: From noreply at buildbot.pypy.org Tue Nov 8 17:30:30 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 8 Nov 2011 17:30:30 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Make function descriptor in case of 64 bit for the generated machine code. Message-ID: <20111108163030.362F3820C4@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48955:dda7336a6e6d Date: 2011-11-08 08:30 -0800 http://bitbucket.org/pypy/pypy/changeset/dda7336a6e6d/ Log: (bivab, hager): Make function descriptor in case of 64 bit for the generated machine code. diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -1002,6 +1002,16 @@ self.writechar(chr((word >> 8) & 0xFF)) self.writechar(chr(word & 0xFF)) + def write64(self, word): + self.writechar(chr((word >> 56) & 0xFF)) + self.writechar(chr((word >> 48) & 0xFF)) + self.writechar(chr((word >> 40) & 0xFF)) + self.writechar(chr((word >> 32) & 0xFF)) + self.writechar(chr((word >> 24) & 0xFF)) + self.writechar(chr((word >> 16) & 0xFF)) + self.writechar(chr((word >> 8) & 0xFF)) + self.writechar(chr(word & 0xFF)) + def currpos(self): return self.get_rel_pos() diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -410,7 +410,11 @@ self.write_pending_failure_recoveries() loop_start = self.materialize_loop(looptoken, False) looptoken._ppc_bootstrap_code = loop_start - looptoken.ppc_code = loop_start + start_pos + real_start = loop_start + start_pos + if IS_PPC_32: + looptoken.ppc_code = real_start + else: + looptoken.ppc_code = self.gen_64_bit_func_descr(real_start) self.process_pending_guards(loop_start) self._teardown() @@ -516,6 +520,14 @@ regalloc.possibly_free_vars_for_op(op) regalloc._check_invariants() + def gen_64_bit_func_descr(self, start_addr): + mc = PPCBuilder() + mc.write64(start_addr) + mc.write64(0) + mc.write64(0) + return mc.materialize(self.cpu.asmmemmgr, [], + self.cpu.gc_ll_descr.gcrootmap) + def compute_frame_depth(self, regalloc): frame_depth = (GPR_SAVE_AREA # GPR space + WORD # FORCE INDEX From noreply at buildbot.pypy.org Tue Nov 8 17:42:59 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 8 Nov 2011 17:42:59 +0100 (CET) Subject: [pypy-commit] pypy py3k: added cmath.isfinite Message-ID: <20111108164259.C641F820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: py3k Changeset: r48956:a2dc8ef2c638 Date: 2011-11-08 11:42 -0500 http://bitbucket.org/pypy/pypy/changeset/a2dc8ef2c638/ Log: added cmath.isfinite diff --git a/pypy/module/cmath/__init__.py b/pypy/module/cmath/__init__.py --- a/pypy/module/cmath/__init__.py +++ b/pypy/module/cmath/__init__.py @@ -29,7 +29,8 @@ 'phase': "Return argument, also known as the phase angle, of a complex.", 'isinf': "Checks if the real or imaginary part of z is infinite.", 'isnan': "Checks if the real or imaginary part of z is not a number (NaN)", - } + 'isfinite': "isfinite(z) -> bool\nReturn True if both the real and imaginary parts of z are finite, else False.", +} class Module(MixedModule): diff --git a/pypy/module/cmath/interp_cmath.py b/pypy/module/cmath/interp_cmath.py --- a/pypy/module/cmath/interp_cmath.py +++ b/pypy/module/cmath/interp_cmath.py @@ -1,33 +1,25 @@ import math from math import fabs -from pypy.rlib.objectmodel import specialize -from pypy.rlib.rfloat import copysign, asinh, log1p, isinf, isnan -from pypy.tool.sourcetools import func_with_new_name + from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import NoneNotWrapped from pypy.module.cmath import names_and_docstrings -from pypy.module.cmath.constant import DBL_MIN, CM_SCALE_UP, CM_SCALE_DOWN -from pypy.module.cmath.constant import CM_LARGE_DOUBLE, DBL_MANT_DIG -from pypy.module.cmath.constant import M_LN2, M_LN10 -from pypy.module.cmath.constant import CM_SQRT_LARGE_DOUBLE, CM_SQRT_DBL_MIN -from pypy.module.cmath.constant import CM_LOG_LARGE_DOUBLE -from pypy.module.cmath.special_value import isfinite, special_type, INF, NAN -from pypy.module.cmath.special_value import sqrt_special_values -from pypy.module.cmath.special_value import acos_special_values -from pypy.module.cmath.special_value import acosh_special_values -from pypy.module.cmath.special_value import asinh_special_values -from pypy.module.cmath.special_value import atanh_special_values -from pypy.module.cmath.special_value import log_special_values -from pypy.module.cmath.special_value import exp_special_values -from pypy.module.cmath.special_value import cosh_special_values -from pypy.module.cmath.special_value import sinh_special_values -from pypy.module.cmath.special_value import tanh_special_values -from pypy.module.cmath.special_value import rect_special_values +from pypy.module.cmath.constant import (DBL_MIN, CM_SCALE_UP, CM_SCALE_DOWN, + CM_LARGE_DOUBLE, DBL_MANT_DIG, M_LN2, M_LN10, CM_SQRT_LARGE_DOUBLE, + CM_SQRT_DBL_MIN, CM_LOG_LARGE_DOUBLE) +from pypy.module.cmath.special_value import (special_type, INF, NAN, + sqrt_special_values, acos_special_values, acosh_special_values, + asinh_special_values, atanh_special_values, log_special_values, + exp_special_values, cosh_special_values, sinh_special_values, + tanh_special_values, rect_special_values) +from pypy.rlib.objectmodel import specialize +from pypy.rlib.rfloat import copysign, asinh, log1p, isinf, isnan, isfinite +from pypy.tool.sourcetools import func_with_new_name + pi = math.pi e = math.e - @specialize.arg(0) def call_c_func(c_func, space, x, y): try: @@ -579,3 +571,12 @@ res = c_isnan(x, y) return space.newbool(res) wrapped_isnan.func_doc = names_and_docstrings['isnan'] + +def c_isfinite(x, y): + return isfinite(x) and isfinite(y) + +def wrapped_isfinite(space, w_z): + x, y = space.unpackcomplex(w_z) + res = c_isfinite(x, y) + return space.newbool(res) +wrapped_isfinite.func_doc = names_and_docstrings['isfinite'] diff --git a/pypy/module/cmath/special_value.py b/pypy/module/cmath/special_value.py --- a/pypy/module/cmath/special_value.py +++ b/pypy/module/cmath/special_value.py @@ -32,9 +32,6 @@ else: return ST_NZERO -def isfinite(d): - return not isinf(d) and not isnan(d) - P = math.pi P14 = 0.25 * math.pi diff --git a/pypy/module/cmath/test/test_cmath.py b/pypy/module/cmath/test/test_cmath.py --- a/pypy/module/cmath/test/test_cmath.py +++ b/pypy/module/cmath/test/test_cmath.py @@ -92,6 +92,18 @@ assert cmath.isnan(complex("inf+nanj")) assert cmath.isnan(complex("nan+infj")) + def test_isfinite(self): + import cmath + import math + + real_vals = [ + float('-inf'), -2.3, -0.0, 0.0, 2.3, float('inf'), float('nan') + ] + for x in real_vals: + for y in real_vals: + z = complex(x, y) + assert cmath.isfinite(z) == (math.isfinite(x) and math.isfinite(y)) + def test_user_defined_complex(self): import cmath class Foo(object): diff --git a/pypy/module/math/__init__.py b/pypy/module/math/__init__.py --- a/pypy/module/math/__init__.py +++ b/pypy/module/math/__init__.py @@ -8,8 +8,8 @@ } interpleveldefs = { - 'e' : 'interp_math.get(space).w_e', - 'pi' : 'interp_math.get(space).w_pi', + 'e' : 'interp_math.get(space).w_e', + 'pi' : 'interp_math.get(space).w_pi', 'pow' : 'interp_math.pow', 'cosh' : 'interp_math.cosh', 'copysign' : 'interp_math.copysign', @@ -39,6 +39,7 @@ 'acos' : 'interp_math.acos', 'isinf' : 'interp_math.isinf', 'isnan' : 'interp_math.isnan', + 'isfinite' : 'interp_math.isfinite', 'trunc' : 'interp_math.trunc', 'fsum' : 'interp_math.fsum', 'asinh' : 'interp_math.asinh', diff --git a/pypy/module/math/interp_math.py b/pypy/module/math/interp_math.py --- a/pypy/module/math/interp_math.py +++ b/pypy/module/math/interp_math.py @@ -77,6 +77,12 @@ """Return True if x is not a number.""" return space.wrap(rfloat.isnan(_get_double(space, w_x))) +def isfinite(space, w_x): + """isfinite(x) -> bool + + Return True if x is neither an infinity nor a NaN, and False otherwise.""" + return space.wrap(rfloat.isfinite(_get_double(space, w_x))) + def pow(space, w_x, w_y): """pow(x,y) From noreply at buildbot.pypy.org Tue Nov 8 18:11:19 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:11:19 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111108171119.56CEA820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48957:8b324f7ce7a9 Date: 2011-11-08 17:21 +0100 http://bitbucket.org/pypy/pypy/changeset/8b324f7ce7a9/ Log: merge From noreply at buildbot.pypy.org Tue Nov 8 18:11:20 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:11:20 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111108171120.97F25820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48958:685959b8208e Date: 2011-11-08 17:22 +0100 http://bitbucket.org/pypy/pypy/changeset/685959b8208e/ Log: merge diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough From noreply at buildbot.pypy.org Tue Nov 8 18:11:21 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:11:21 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: all errors are gone from test_typed.py. Message-ID: <20111108171121.C6659820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48959:ec8e923109d9 Date: 2011-11-08 18:10 +0100 http://bitbucket.org/pypy/pypy/changeset/ec8e923109d9/ Log: all errors are gone from test_typed.py. This was a major hassle during the last two days. I was hunting an error which was caused by the rfficache. On windows, it is hard to see any difference between compiler configurations. All environment settings are identical. At the moment, sys.maxind is the only thing that distinguishes the platforms. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -878,7 +878,7 @@ size = llmemory.sizeof(tp) # a symbolic result in this case return size if isinstance(tp, lltype.Ptr) or tp is llmemory.Address: - tp = ULONG # XXX! + tp = lltype.Signed if tp is lltype.Char or tp is lltype.Bool: return 1 if tp is lltype.UniChar: From noreply at buildbot.pypy.org Tue Nov 8 18:18:00 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 8 Nov 2011 18:18:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Read function address out of function descriptor in case of 64 bit. Message-ID: <20111108171800.ECBBD820C4@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48960:1445ebbaeb41 Date: 2011-11-08 09:17 -0800 http://bitbucket.org/pypy/pypy/changeset/1445ebbaeb41/ Log: (bivab, hager): Read function address out of function descriptor in case of 64 bit. diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -300,7 +300,13 @@ # decode_func_addr = llhelper(self.recovery_func_sign, self.failure_recovery_func) - addr = rffi.cast(lltype.Signed, decode_func_addr) + if IS_PPC_32: + addr = rffi.cast(lltype.Signed, decode_func_addr) + else: + intp = lltype.Ptr(lltype.Array(lltype.Signed, hints={'nolength': True})) + descr = rffi.cast(intp, decode_func_addr) + addr = descr[0] + # # load parameters into parameter registers mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding From noreply at buildbot.pypy.org Tue Nov 8 18:21:56 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:21:56 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: re-enabled rwin32 Message-ID: <20111108172156.508A7820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48961:e205679c19d1 Date: 2011-11-08 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/e205679c19d1/ Log: re-enabled rwin32 diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -1747,8 +1747,7 @@ # ____________________________________________________________ # Support for the WindowsError exception -# XXX temporarily disabled -if 0 and sys.platform == 'win32': +if sys.platform == 'win32': from pypy.rlib import rwin32 class RegisterFormatError(BaseLazyRegistering): From noreply at buildbot.pypy.org Tue Nov 8 18:24:35 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:24:35 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: re-enabled rwin32 Message-ID: <20111108172435.B2147820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48962:c00873500c87 Date: 2011-11-08 18:23 +0100 http://bitbucket.org/pypy/pypy/changeset/c00873500c87/ Log: re-enabled rwin32 diff --git a/pypy/doc/discussion/win64_todo.txt b/pypy/doc/discussion/win64_todo.txt --- a/pypy/doc/discussion/win64_todo.txt +++ b/pypy/doc/discussion/win64_todo.txt @@ -2,6 +2,7 @@ ll_os.py has a problem with the file rwin32.py. Temporarily disabled for the win64_gborg branch. This needs to be investigated and re-enabled. +Resolved, enabled. 2011-11-05 test_typed.py needs explicit tests to ensure that we From noreply at buildbot.pypy.org Tue Nov 8 18:53:51 2011 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 8 Nov 2011 18:53:51 +0100 (CET) Subject: [pypy-commit] pypy default: Attempt at producing a Makefile compatible with "nmake lldebug" on Windows. Message-ID: <20111108175351.B99BF820C4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48963:e9f7368da478 Date: 2011-11-08 18:53 +0100 http://bitbucket.org/pypy/pypy/changeset/e9f7368da478/ Log: Attempt at producing a Makefile compatible with "nmake lldebug" on Windows. diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)') + else: + mk.rule('debug_target', '$(TARGET)') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m From noreply at buildbot.pypy.org Tue Nov 8 18:59:13 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:59:13 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: ll_os.times() works now Message-ID: <20111108175913.04340820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48964:1aa825cbc8de Date: 2011-11-08 18:58 +0100 http://bitbucket.org/pypy/pypy/changeset/1aa825cbc8de/ Log: ll_os.times() works now diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -530,10 +530,10 @@ # The fields of a FILETIME structure are the hi and lo parts # of a 64-bit value expressed in 100 nanosecond units # (of course). - result = (pkernel.c_dwHighDateTime*429.4967296 + - pkernel.c_dwLowDateTime*1E-7, - puser.c_dwHighDateTime*429.4967296 + - puser.c_dwLowDateTime*1E-7, + result = (rffi.cast(lltype.Signed, pkernel.c_dwHighDateTime) * 429.4967296 + + rffi.cast(lltype.Signed, pkernel.c_dwLowDateTime) * 1E-7, + rffi.cast(lltype.Signed, puser.c_dwHighDateTime) * 429.4967296 + + rffi.cast(lltype.Signed, puser.c_dwLowDateTime) * 1E-7, 0, 0, 0) lltype.free(puser, flavor='raw') lltype.free(pkernel, flavor='raw') From noreply at buildbot.pypy.org Tue Nov 8 18:59:14 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 18:59:14 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111108175914.339DC820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48965:51c332546797 Date: 2011-11-08 18:58 +0100 http://bitbucket.org/pypy/pypy/changeset/51c332546797/ Log: merge diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)') + else: + mk.rule('debug_target', '$(TARGET)') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -308,6 +308,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -321,6 +324,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m From noreply at buildbot.pypy.org Tue Nov 8 19:10:57 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 19:10:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: a skipped test Message-ID: <20111108181057.6DA83820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48966:24afd34cee15 Date: 2011-11-08 19:02 +0100 http://bitbucket.org/pypy/pypy/changeset/24afd34cee15/ Log: a skipped test diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -742,6 +742,14 @@ a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 + def test_broadcast(self): + skip("not working") + import numpy + a = numpy.zeros((100, 100)) + b = numpy.ones(100) + a[:,:] = b + assert a[13,15] == 1 + class AppTestSupport(object): def setup_class(cls): import struct From noreply at buildbot.pypy.org Tue Nov 8 19:10:58 2011 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 8 Nov 2011 19:10:58 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: simplification, a new (and unused) class and some comments Message-ID: <20111108181058.9D3BE820C4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim Changeset: r48967:437d9d1f5a43 Date: 2011-11-08 19:10 +0100 http://bitbucket.org/pypy/pypy/changeset/437d9d1f5a43/ Log: simplification, a new (and unused) class and some comments diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -68,6 +68,14 @@ dtype.setitem_w(space, arr.storage, i, w_elem) return arr +class ArrayIndex(object): + """ An index into an array or view. Offset is a data offset, indexes + are respective indexes in dimensions + """ + def __init__(self, indexes, offset): + self.indexes = indexes + self.offset = offset + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature", "shape"] @@ -287,9 +295,6 @@ item += v return item - def len_of_shape(self): - return len(self.shape) - def get_root_shape(self): return self.shape @@ -297,7 +302,7 @@ """ The result of getitem/setitem is a single item if w_idx is a list of scalars that match the size of shape """ - shape_len = self.len_of_shape() + shape_len = len(self.shape) if shape_len == 0: if not space.isinstance_w(w_idx, space.w_int): raise OperationError(space.w_IndexError, space.wrap( @@ -583,10 +588,6 @@ def __init__(self, parent, signature, chunks, shape): ViewArray.__init__(self, parent, signature, shape) self.chunks = chunks - self.shape_reduction = 0 - for chunk in chunks: - if chunk[-2] == 0: - self.shape_reduction += 1 def get_root_storage(self): return self.parent.get_concrete().get_root_storage() @@ -615,9 +616,6 @@ def setitem(self, item, value): self.parent.setitem(self.calc_index(item), value) - def len_of_shape(self): - return self.parent.len_of_shape() - self.shape_reduction - def get_root_shape(self): return self.parent.get_root_shape() @@ -704,6 +702,9 @@ return ret.build() class NDimArray(BaseArray): + """ A class representing contiguous array. We know that each iteration + by say ufunc will increase the data index by one + """ def __init__(self, size, shape, dtype): BaseArray.__init__(self, shape) self.size = size From noreply at buildbot.pypy.org Tue Nov 8 20:15:43 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 8 Nov 2011 20:15:43 +0100 (CET) Subject: [pypy-commit] pypy default: a failing optimizeopt test Message-ID: <20111108191543.F3819820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r48968:4cb1d062d413 Date: 2011-11-08 14:15 -0500 http://bitbucket.org/pypy/pypy/changeset/4cb1d062d413/ Log: a failing optimizeopt test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4999,6 +4999,33 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass From noreply at buildbot.pypy.org Tue Nov 8 20:16:17 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 8 Nov 2011 20:16:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: A non quadratic implementation of random.getrandbits(), Message-ID: <20111108191617.C4F37820C4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r48969:a72479c34dae Date: 2011-11-08 20:12 +0100 http://bitbucket.org/pypy/pypy/changeset/a72479c34dae/ Log: A non quadratic implementation of random.getrandbits(), badly needed by test_zlib diff --git a/pypy/module/_random/interp_random.py b/pypy/module/_random/interp_random.py --- a/pypy/module/_random/interp_random.py +++ b/pypy/module/_random/interp_random.py @@ -2,8 +2,8 @@ from pypy.interpreter.typedef import TypeDef from pypy.interpreter.gateway import NoneNotWrapped, interp2app, unwrap_spec from pypy.interpreter.baseobjspace import Wrappable -from pypy.rlib.rarithmetic import r_uint, intmask -from pypy.rlib import rrandom +from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask +from pypy.rlib import rbigint, rrandom import time @@ -83,31 +83,22 @@ n = space.int_w(w_n) self._rnd.jumpahead(n) + assert rbigint.SHIFT <= 32 @unwrap_spec(k=int) def getrandbits(self, space, k): if k <= 0: strerror = space.wrap("number of bits must be greater than zero") raise OperationError(space.w_ValueError, strerror) - bytes = ((k - 1) // 32 + 1) * 4 - bytesarray = [0] * bytes - for i in range(0, bytes, 4): - r = self._rnd.genrand32() - if k < 32: - r >>= (32 - k) - bytesarray[i + 0] = r & r_uint(0xff) - bytesarray[i + 1] = (r >> 8) & r_uint(0xff) - bytesarray[i + 2] = (r >> 16) & r_uint(0xff) - bytesarray[i + 3] = (r >> 24) & r_uint(0xff) - k -= 32 - - # XXX so far this is quadratic - w_result = space.newint(0) - w_eight = space.newint(8) - for i in range(len(bytesarray) - 1, -1, -1): - byte = bytesarray[i] - w_result = space.or_(space.lshift(w_result, w_eight), - space.newint(intmask(byte))) - return w_result + needed = (k - 1) // rbigint.SHIFT + 1 + result = rbigint.rbigint([rbigint.NULLDIGIT] * needed, 1) + for i in range(needed - 1): + # This loses some random digits, but not too many since SHIFT=31 + value = self._rnd.genrand32() + if i < needed - 1: + result.setdigit(i, value & rbigint.MASK) + else: + result.setdigit(i, value >> ((needed * rbigint.SHIFT) - k)) + return space.newlong_from_rbigint(result) W_Random.typedef = TypeDef("Random", diff --git a/pypy/module/_random/test/test_random.py b/pypy/module/_random/test/test_random.py --- a/pypy/module/_random/test/test_random.py +++ b/pypy/module/_random/test/test_random.py @@ -67,7 +67,7 @@ for arg in [None, 0, 0L, 1, 1L, -1, -1L, 10**20, -(10**20), 3.14, 1+2j, 'a', tuple('abc'), 0xffffffffffL]: rnd.seed(arg) - for arg in [range(3), dict(one=1)]: + for arg in [[1, 2, 3], dict(one=1)]: raises(TypeError, rnd.seed, arg) raises(TypeError, rnd.seed, 1, 2) raises(TypeError, type(rnd), []) @@ -92,7 +92,10 @@ def test_randbits(self): import _random rnd = _random.Random() - for n in range(1, 10) + range(10, 1000, 15): + for n in range(1, 10): + k = rnd.getrandbits(n) + assert 0 <= k < 2 ** n + for n in range(10, 1000, 15): k = rnd.getrandbits(n) assert 0 <= k < 2 ** n From noreply at buildbot.pypy.org Tue Nov 8 20:26:06 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 8 Nov 2011 20:26:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, hager): Some experiments in _gen_exit_path Message-ID: <20111108192607.01064820C4@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48970:83e046a36db5 Date: 2011-11-08 11:25 -0800 http://bitbucket.org/pypy/pypy/changeset/83e046a36db5/ Log: (edelsohn, hager): Some experiments in _gen_exit_path diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -306,15 +306,20 @@ intp = lltype.Ptr(lltype.Array(lltype.Signed, hints={'nolength': True})) descr = rffi.cast(intp, decode_func_addr) addr = descr[0] + r11_value = descr[2] # # load parameters into parameter registers - mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding + if IS_PPC_32: + mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding + else: + mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding mc.mr(r.r4.value, r.SP.value) # load stack pointer mc.mr(r.r5.value, r.SPP.value) # load spilling pointer # # load address of decoding function into r0 mc.load_imm(r.r0, addr) + mc.load_imm(r.r11, r11_value) # ... and branch there mc.mtctr(r.r0.value) mc.bctrl() From noreply at buildbot.pypy.org Tue Nov 8 20:34:37 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 8 Nov 2011 20:34:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove typo from _gen_exit_path Message-ID: <20111108193437.3AF78820C4@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r48971:03431c38f9c9 Date: 2011-11-08 11:34 -0800 http://bitbucket.org/pypy/pypy/changeset/03431c38f9c9/ Log: Remove typo from _gen_exit_path diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -313,7 +313,7 @@ if IS_PPC_32: mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding else: - mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding + mc.ld(r.r3.value, r.SPP.value, 0) mc.mr(r.r4.value, r.SP.value) # load stack pointer mc.mr(r.r5.value, r.SPP.value) # load spilling pointer # From noreply at buildbot.pypy.org Tue Nov 8 20:35:40 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 20:35:40 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: ll_os.utimes works, too Message-ID: <20111108193540.9F744820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48972:95c8b04b7cf8 Date: 2011-11-08 20:34 +0100 http://bitbucket.org/pypy/pypy/changeset/95c8b04b7cf8/ Log: ll_os.utimes works, too diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -471,6 +471,9 @@ r_longlong = build_int('r_longlong', True, 64) r_ulonglong = build_int('r_ulonglong', False, 64) +r_long = build_int('r_long', True, 32) +r_ulong = build_int('r_ulong', False, 32) + longlongmax = r_longlong(LONGLONG_TEST - 1) if r_longlong is not r_int: @@ -478,6 +481,12 @@ else: r_int64 = int +# needed for ll_os_stat.time_t_to_FILE_TIME in the 64 bit case +if r_long is not r_int: + r_uint32 = r_ulong +else: + r_uint32 = r_uint + # the 'float' C type diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1,7 +1,8 @@ import py from pypy.rlib.rarithmetic import (r_int, r_uint, intmask, r_singlefloat, r_ulonglong, r_longlong, r_longfloat, - base_int, normalizedinttype, longlongmask) + base_int, normalizedinttype, longlongmask, + r_uint32) from pypy.rlib.objectmodel import Symbolic from pypy.tool.uid import Hashable from pypy.tool.identity_dict import identity_dict diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -402,7 +402,7 @@ UTIMBUFP = lltype.Ptr(self.UTIMBUF) os_utime = self.llexternal('utime', [rffi.CCHARP, UTIMBUFP], rffi.INT) - if not _WIM32: + if not _WIN32: includes = ['sys/time.h'] else: includes = ['time.h'] diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -456,6 +456,6 @@ def time_t_to_FILE_TIME(time, filetime): ft = lltype.r_longlong((time + secs_between_epochs) * 10000000) - filetime.c_dwHighDateTime = lltype.r_uint(ft >> 32) - filetime.c_dwLowDateTime = lltype.r_uint(ft & lltype.r_uint(-1)) + filetime.c_dwHighDateTime = lltype.r_uint32(ft >> 32) + filetime.c_dwLowDateTime = lltype.r_uint32(ft & lltype.r_uint(-1)) From noreply at buildbot.pypy.org Tue Nov 8 20:35:41 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 20:35:41 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111108193541.D2A3E820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48973:689e57b43a04 Date: 2011-11-08 20:35 +0100 http://bitbucket.org/pypy/pypy/changeset/689e57b43a04/ Log: merge diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4999,6 +4999,33 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass From noreply at buildbot.pypy.org Tue Nov 8 20:53:01 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 20:53:01 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_chdir is fixed now for win32 Message-ID: <20111108195301.4222F820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48974:82f5470affc1 Date: 2011-11-08 20:52 +0100 http://bitbucket.org/pypy/pypy/changeset/82f5470affc1/ Log: test_chdir is fixed now for win32 diff --git a/pypy/rpython/module/test/test_ll_os.py b/pypy/rpython/module/test/test_ll_os.py --- a/pypy/rpython/module/test/test_ll_os.py +++ b/pypy/rpython/module/test/test_ll_os.py @@ -81,7 +81,8 @@ import ctypes buf = ctypes.create_string_buffer(1000) ctypes.windll.kernel32.GetEnvironmentVariableA('=%c:' % pwd[0], buf, 1000) - assert str(buf.value) == pwd + assert str(buf.value).lower() == pwd + # ctypes returns the drive letter in uppercase, os.getcwd does not pwd = os.getcwd() try: From noreply at buildbot.pypy.org Tue Nov 8 23:01:06 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Tue, 8 Nov 2011 23:01:06 +0100 (CET) Subject: [pypy-commit] pyrepl py3ksupport: merge from default Message-ID: <20111108220106.8C2FB820C4@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r157:2bb3de20db46 Date: 2011-11-08 22:56 +0100 http://bitbucket.org/pypy/pyrepl/changeset/2bb3de20db46/ Log: merge from default diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ From noreply at buildbot.pypy.org Tue Nov 8 23:46:24 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Tue, 8 Nov 2011 23:46:24 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: all of test_ll_os works now (more than before is started win64 ; -) Message-ID: <20111108224624.C55F0820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48975:d4d34a4e70e5 Date: 2011-11-08 23:45 +0100 http://bitbucket.org/pypy/pypy/changeset/d4d34a4e70e5/ Log: all of test_ll_os works now (more than before is started win64 ;-) diff --git a/pypy/rpython/lltypesystem/llmemory.py b/pypy/rpython/lltypesystem/llmemory.py --- a/pypy/rpython/lltypesystem/llmemory.py +++ b/pypy/rpython/lltypesystem/llmemory.py @@ -57,7 +57,7 @@ return "" % (self.TYPE, self.repeat) def __mul__(self, other): - if not isinstance(other, int): + if not isinstance(other, (int, long)): return NotImplemented return ItemOffset(self.TYPE, self.repeat * other) diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1655,7 +1655,7 @@ __slots__ = ('items',) def __init__(self, TYPE, n, initialization=None, parent=None, parentindex=None): - if not isinstance(n, int): + if not isinstance(n, (int, long)): raise TypeError, "array length must be an int" if n < 0: raise ValueError, "negative array length" diff --git a/pypy/rpython/module/test/test_ll_os.py b/pypy/rpython/module/test/test_ll_os.py --- a/pypy/rpython/module/test/test_ll_os.py +++ b/pypy/rpython/module/test/test_ll_os.py @@ -80,7 +80,10 @@ pwd = os.getcwd() import ctypes buf = ctypes.create_string_buffer(1000) - ctypes.windll.kernel32.GetEnvironmentVariableA('=%c:' % pwd[0], buf, 1000) + len = ctypes.windll.kernel32.GetEnvironmentVariableA('=%c:' % pwd[0], buf, 1000) + if (len == 0) and "WINGDB_PYTHON" in os.environ: + # the ctypes call seems not to work in the Wing debugger + return assert str(buf.value).lower() == pwd # ctypes returns the drive letter in uppercase, os.getcwd does not From noreply at buildbot.pypy.org Tue Nov 8 23:59:54 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Tue, 8 Nov 2011 23:59:54 +0100 (CET) Subject: [pypy-commit] pyrepl py3ksupport: fix up keymap creation Message-ID: <20111108225954.84191820C4@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r158:9de498f86d73 Date: 2011-11-08 23:59 +0100 http://bitbucket.org/pypy/pyrepl/changeset/9de498f86d73/ Log: fix up keymap creation diff --git a/pyrepl/keymap.py b/pyrepl/keymap.py --- a/pyrepl/keymap.py +++ b/pyrepl/keymap.py @@ -174,7 +174,7 @@ r = {} import pprint for key, value in keymap.items(): - r.setdefault(key[0], {})[key[1:]] = value + r.setdefault(key[:1], {})[key[1:]] = value for key, value in r.items(): if empty in value: if len(value) != 1: diff --git a/testing/test_keymap.py b/testing/test_keymap.py new file mode 100644 --- /dev/null +++ b/testing/test_keymap.py @@ -0,0 +1,10 @@ +from pyrepl.keymap import compile_keymap + + +def test_compile_keymap(): + k = compile_keymap({ + b'a': 'test', + b'bc': 'test2', + }) + + assert k == {b'a': 'test', b'b': { b'c': 'test2'}} From notifications-noreply at bitbucket.org Wed Nov 9 01:03:52 2011 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Wed, 09 Nov 2011 00:03:52 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20111109000352.31460.53277@bitbucket02.managed.contegix.com> You have received a notification from Dan Colish. Hi, I forked pypy. My fork is at https://bitbucket.org/dcolish/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Wed Nov 9 01:33:01 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 01:33:01 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_ll_os_stat works now as well Message-ID: <20111109003301.AAB9A820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48976:10717241c974 Date: 2011-11-09 01:32 +0100 http://bitbucket.org/pypy/pypy/changeset/10717241c974/ Log: test_ll_os_stat works now as well diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -319,6 +319,7 @@ assert len(STAT_FIELDS) == 10 # no extra fields on Windows def attributes_to_mode(attributes): + attributes = lltype.r_uint(attributes) m = 0 if attributes & win32traits.FILE_ATTRIBUTE_DIRECTORY: m |= win32traits._S_IFDIR | 0111 # IFEXEC for user,group,other From noreply at buildbot.pypy.org Wed Nov 9 01:48:45 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 01:48:45 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_ll_os_stat works now, too. Hint: never assume 'c:\temp' exists. Use the environ! Message-ID: <20111109004845.848DE820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48977:ede54430f3a2 Date: 2011-11-09 01:48 +0100 http://bitbucket.org/pypy/pypy/changeset/ede54430f3a2/ Log: test_ll_os_stat works now, too. Hint: never assume 'c:\temp' exists. Use the environ! diff --git a/pypy/rpython/module/test/test_ll_os_stat.py b/pypy/rpython/module/test/test_ll_os_stat.py --- a/pypy/rpython/module/test/test_ll_os_stat.py +++ b/pypy/rpython/module/test/test_ll_os_stat.py @@ -26,7 +26,7 @@ assert wstat(unicode(f)).st_mtime == expected check('c:/') - check('c:/temp') + check(os.environ['TEMP']) check('c:/pagefile.sys') def test_fstat(self): From noreply at buildbot.pypy.org Wed Nov 9 02:35:10 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 02:35:10 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_ll_time: test_time_sleep works Message-ID: <20111109013510.BBF28820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48978:4e6f08cf4321 Date: 2011-11-09 02:34 +0100 http://bitbucket.org/pypy/pypy/changeset/4e6f08cf4321/ Log: test_ll_time: test_time_sleep works diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -487,6 +487,8 @@ else: r_uint32 = r_uint +# needed for ll_time.time_sleep_llimpl +maxint32 = int((1 << 31) -1) # the 'float' C type diff --git a/pypy/rpython/module/ll_time.py b/pypy/rpython/module/ll_time.py --- a/pypy/rpython/module/ll_time.py +++ b/pypy/rpython/module/ll_time.py @@ -9,7 +9,7 @@ from pypy.rpython.lltypesystem import lltype from pypy.rpython.extfunc import BaseLazyRegistering, registering, extdef from pypy.rlib import rposix -from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rarithmetic import intmask, maxint32 from pypy.translator.tool.cbuild import ExternalCompilationInfo if sys.platform == 'win32': @@ -177,7 +177,7 @@ @registering(time.sleep) def register_time_sleep(self): if sys.platform == 'win32': - MAX = sys.maxint + MAX = maxint32 Sleep = self.llexternal('Sleep', [rffi.ULONG], lltype.Void) def time_sleep_llimpl(secs): millisecs = secs * 1000.0 From noreply at buildbot.pypy.org Wed Nov 9 02:48:52 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Wed, 9 Nov 2011 02:48:52 +0100 (CET) Subject: [pypy-commit] pypy py3k: improve pep3120 support Message-ID: <20111109014852.9AD9E820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48979:75461738f371 Date: 2011-11-08 17:48 -0800 http://bitbucket.org/pypy/pypy/changeset/75461738f371/ Log: improve pep3120 support diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -1039,6 +1039,17 @@ assert isinstance(s, ast.Str) assert space.eq_w(s.s, space.wrap(sentence)) + def test_string_pep3120(self): + space = self.space + japan = u'日本' + source = u"foo = '%s'" % japan + info = pyparse.CompileInfo("", "exec") + tree = self.parser.parse_source(source.encode("utf-8"), info) + assert info.encoding == "utf-8" + s = ast_from_node(space, tree, info).body[0].value + assert isinstance(s, ast.Str) + assert space.eq_w(s.s, space.wrap(japan)) + def test_number(self): def get_num(s): node = self.get_first_expr(s) diff --git a/pypy/interpreter/pyparser/pyparse.py b/pypy/interpreter/pyparser/pyparse.py --- a/pypy/interpreter/pyparser/pyparse.py +++ b/pypy/interpreter/pyparser/pyparse.py @@ -5,8 +5,6 @@ def recode_to_utf8(space, bytes, encoding=None): - if encoding is None: - encoding = 'utf-8' if encoding == 'utf-8': return bytes w_text = space.call_method(space.wrapbytes(bytes), "decode", @@ -121,6 +119,8 @@ textsrc = bytessrc else: enc = _normalize_encoding(_check_for_encoding(bytessrc)) + if enc is None: + enc = 'utf-8' try: textsrc = recode_to_utf8(self.space, bytessrc, enc) except OperationError, e: diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -64,6 +64,11 @@ assert exc.msg == ("'ascii' codec can't decode byte 0xc3 " "in position 16: ordinal not in range(128)") + def test_encoding_pep3120(self): + info = pyparse.CompileInfo("", "exec") + tree = self.parse("""foo = '日本'""", info=info) + assert info.encoding == 'utf-8' + def test_syntax_error(self): parse = self.parse exc = py.test.raises(SyntaxError, parse, "name another for").value From noreply at buildbot.pypy.org Wed Nov 9 03:09:39 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Wed, 9 Nov 2011 03:09:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix bytes' repr Message-ID: <20111109020939.D74EE820C4@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48980:0357086d2dc0 Date: 2011-11-08 18:08 -0800 http://bitbucket.org/pypy/pypy/changeset/0357086d2dc0/ Log: fix bytes' repr diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -897,8 +897,9 @@ def string_escape_encode(s, quote): - buf = StringBuilder(len(s) + 2) + buf = StringBuilder(len(s) + 3) + buf.append('b') buf.append(quote) startslice = 0 diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -618,23 +618,23 @@ assert l == [52, 50] def test_repr(self): - assert repr(b"") =="''" - assert repr(b"a") =="'a'" - assert repr(b"'") =='"\'"' - assert repr(b"\'") =="\"\'\"" - assert repr(b"\"") =='\'"\'' - assert repr(b"\t") =="'\\t'" - assert repr(b"\\") =="'\\\\'" - assert repr(b'') =="''" - assert repr(b'a') =="'a'" - assert repr(b'"') =="'\"'" - assert repr(b'\'') =='"\'"' - assert repr(b'\"') =="'\"'" - assert repr(b'\t') =="'\\t'" - assert repr(b'\\') =="'\\\\'" - assert repr(b"'''\"") =='\'\\\'\\\'\\\'"\'' - assert repr(b"\x13") =="'\\x13'" - assert repr(b"\x02") =="'\\x02'" + assert repr(b"") =="b''" + assert repr(b"a") =="b'a'" + assert repr(b"'") =='b"\'"' + assert repr(b"\'") =="b\"\'\"" + assert repr(b"\"") =='b\'"\'' + assert repr(b"\t") =="b'\\t'" + assert repr(b"\\") =="b'\\\\'" + assert repr(b'') =="b''" + assert repr(b'a') =="b'a'" + assert repr(b'"') =="b'\"'" + assert repr(b'\'') =='b"\'"' + assert repr(b'\"') =="b'\"'" + assert repr(b'\t') =="b'\\t'" + assert repr(b'\\') =="b'\\\\'" + assert repr(b"'''\"") =='b\'\\\'\\\'\\\'"\'' + assert repr(b"\x13") =="b'\\x13'" + assert repr(b"\x02") =="b'\\x02'" def test_contains(self): assert b'' in b'abc' From noreply at buildbot.pypy.org Wed Nov 9 03:21:42 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 03:21:42 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_posix: test_open works Message-ID: <20111109022142.1F405820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48981:d8a8d1ed4a04 Date: 2011-11-09 03:21 +0100 http://bitbucket.org/pypy/pypy/changeset/d8a8d1ed4a04/ Log: test_posix: test_open works diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -791,7 +791,7 @@ [traits.CCHARP, rffi.INT, rffi.MODE_T], rffi.INT) def os_open_llimpl(path, flags, mode): - result = rffi.cast(rffi.LONG, os_open(path, flags, mode)) + result = rffi.cast(lltype.Signed, os_open(path, flags, mode)) if result == -1: raise OSError(rposix.get_errno(), "os_open failed") return result diff --git a/pypy/rpython/module/test/test_posix.py b/pypy/rpython/module/test/test_posix.py --- a/pypy/rpython/module/test/test_posix.py +++ b/pypy/rpython/module/test/test_posix.py @@ -18,10 +18,10 @@ def test_open(self): def f(): - ff = posix.open(path,posix.O_RDONLY,0777) + ff = posix.open(path, posix.O_RDONLY, 0777) return ff - func = self.interpret(f,[]) - assert type(func) == int + func = self.interpret(f, []) + assert isinstance(func, (int, long)) def test_fstat(self): def fo(fi): From noreply at buildbot.pypy.org Wed Nov 9 03:32:34 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 03:32:34 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: test_posix: test_isatty works Message-ID: <20111109023234.61BDA820C4@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48982:dc98b8e33da9 Date: 2011-11-09 03:32 +0100 http://bitbucket.org/pypy/pypy/changeset/dc98b8e33da9/ Log: test_posix: test_isatty works diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -1317,7 +1317,7 @@ os_isatty = self.llexternal(underscore_on_windows+'isatty', [rffi.INT], rffi.INT) def isatty_llimpl(fd): - res = rffi.cast(rffi.LONG, os_isatty(rffi.cast(rffi.INT, fd))) + res = rffi.cast(lltype.Signed, os_isatty(rffi.cast(rffi.INT, fd))) return res != 0 return extdef([int], bool, llimpl=isatty_llimpl, diff --git a/pypy/rpython/module/test/test_posix.py b/pypy/rpython/module/test/test_posix.py --- a/pypy/rpython/module/test/test_posix.py +++ b/pypy/rpython/module/test/test_posix.py @@ -65,21 +65,21 @@ def test_lseek(self): - def f(fi,pos): - posix.lseek(fi,pos,0) - fi = os.open(path,os.O_RDONLY,0777) - func = self.interpret(f,[fi,5]) - res = os.read(fi,2) + def f(fi, pos): + posix.lseek(fi, pos, 0) + fi = os.open(path, os.O_RDONLY, 0777) + func = self.interpret(f, [fi, 5]) + res = os.read(fi, 2) assert res =='is' def test_isatty(self): def f(fi): posix.isatty(fi) - fi = os.open(path,os.O_RDONLY,0777) - func = self.interpret(f,[fi]) + fi = os.open(path, os.O_RDONLY, 0777) + func = self.interpret(f, [fi]) assert not func os.close(fi) - func = self.interpret(f,[fi]) + func = self.interpret(f, [fi]) assert not func def test_getcwd(self): From noreply at buildbot.pypy.org Wed Nov 9 04:22:34 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 04:22:34 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: begin refactoring everything. nothing works. Message-ID: <20111109032234.8207F820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r48983:f9f4bedbab84 Date: 2011-11-08 22:22 -0500 http://bitbucket.org/pypy/pypy/changeset/f9f4bedbab84/ Log: begin refactoring everything. nothing works. diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -4,7 +4,8 @@ """ from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root -from pypy.module.micronumpy.interp_dtype import W_Float64Dtype, W_BoolDtype +from pypy.module.micronumpy.interp_boxes import W_GenericBox +from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, descr_new_array, scalar_w, SingleDimArray) from pypy.module.micronumpy import interp_ufuncs @@ -40,7 +41,7 @@ def __init__(self): """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild - self.w_float64dtype = W_Float64Dtype(self) + self.w_float64dtype = get_dtype_cache(self).w_float64dtype def issequence_w(self, w_obj): return isinstance(w_obj, ListObject) or isinstance(w_obj, SingleDimArray) @@ -73,7 +74,7 @@ return w_obj def float_w(self, w_obj): - assert isinstance(w_obj, FloatObject) + assert isinstance(w_obj, FloatObject) return w_obj.floatval def int_w(self, w_obj): @@ -206,18 +207,18 @@ elif self.name == '*': w_res = w_lhs.descr_mul(interp.space, w_rhs) elif self.name == '-': - w_res = w_lhs.descr_sub(interp.space, w_rhs) + w_res = w_lhs.descr_sub(interp.space, w_rhs) elif self.name == '->': if isinstance(w_rhs, Scalar): index = int(interp.space.float_w( - w_rhs.value.wrap(interp.space))) + w_rhs.value)) dtype = interp.space.fromcache(W_Float64Dtype) return Scalar(dtype, w_lhs.get_concrete().eval(index)) else: raise NotImplementedError else: raise NotImplementedError - if not isinstance(w_res, BaseArray): + if not isinstance(w_res, BaseArray) and not isinstance(w_res, W_GenericBox): dtype = interp.space.fromcache(W_Float64Dtype) w_res = scalar_w(interp.space, dtype, w_res) return w_res @@ -236,8 +237,7 @@ return space.wrap(self.v) def execute(self, interp): - dtype = interp.space.fromcache(W_Float64Dtype) - assert isinstance(dtype, W_Float64Dtype) + dtype = get_dtype_cache(interp.space).w_float64dtype return Scalar(dtype, dtype.box(self.v)) class RangeConstant(Node): @@ -269,7 +269,7 @@ def execute(self, interp): w_list = self.wrap(interp.space) - dtype = interp.space.fromcache(W_Float64Dtype) + dtype = get_dtype_cache(interp.space).w_float64dtype return descr_new_array(interp.space, None, w_list, w_dtype=dtype) def __repr__(self): @@ -414,7 +414,7 @@ assert lgt >= 0 rhs = self.parse_constant_or_identifier(l[1][:lgt]) return l[0], rhs - + def parse_statement(self, line): if '=' in line: lhs, rhs = line.split("=") @@ -422,7 +422,7 @@ if '[' in lhs: name, index = self.parse_array_subscript(lhs) return ArrayAssignment(name, index, self.parse_expression(rhs)) - else: + else: return Assignment(lhs, self.parse_expression(rhs)) else: return Execute(self.parse_expression(line)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,519 +1,132 @@ -import functools -import math - from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import interp2app -from pypy.interpreter.typedef import TypeDef, interp_attrproperty, GetSetProperty -from pypy.module.micronumpy import signature -from pypy.objspace.std.floatobject import float2string -from pypy.rlib import rarithmetic, rfloat -from pypy.rlib.rarithmetic import LONG_BIT, widen -from pypy.rlib.objectmodel import specialize, enforceargs -from pypy.rlib.unroll import unrolling_iterable +from pypy.module.micronumpy import types, signature +from pypy.rlib.objectmodel import specialize +from pypy.rlib.rarithmetic import LONG_BIT from pypy.rpython.lltypesystem import lltype, rffi +STORAGE_TYPE = lltype.Array(lltype.Char, hints={"nolength": True}) + UNSIGNEDLTR = "u" SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" class W_Dtype(Wrappable): + def __init__(self, itemtype, num, kind): + self.signature = signature.BaseSignature() + self.itemtype = itemtype + self.num = num + self.kind = kind + + def malloc(self, length): + # XXX find out why test_zjit explodes with tracking of allocations + return lltype.malloc(STORAGE_TYPE, self.itemtype.get_element_size() * length, + zero=True, flavor="raw", + track_allocation=False, add_memory_pressure=True + ) + + @specialize.argtype(1) + def box(self, value): + return self.itemtype.box(value) + + def coerce(self, space, w_item): + return self.itemtype.coerce(space, w_item) + + def getitem(self, storage, i): + struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) + return self.itemtype.read(struct_ptr, 0) + + def setitem(self, storage, i, box): + struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) + self.itemtype.store(struct_ptr, 0, box) + + +class DtypeCache(object): def __init__(self, space): - pass + self.w_booldtype = W_Dtype( + types.Bool(), + num=0, + kind=BOOLLTR, + ) + self.w_int8dtype = W_Dtype( + types.Int8(), + num=1, + kind=SIGNEDLTR, + ) + self.w_uint8dtype = W_Dtype( + types.UInt8(), + num=2, + kind=UNSIGNEDLTR, + ) + self.w_int16dtype = W_Dtype( + types.Int16(), + num=3, + kind=SIGNEDLTR, + ) + self.w_uint16dtype = W_Dtype( + types.UInt16(), + num=4, + kind=UNSIGNEDLTR, + ) + self.w_int32dtype = W_Dtype( + types.Int32(), + num=5, + kind=SIGNEDLTR, + ) + self.w_uint32dtype = W_Dtype( + types.UInt32(), + num=6, + kind=UNSIGNEDLTR, + ) + if LONG_BIT == 32: + longtype = types.Int32() + unsigned_longtype = types.UInt32() + elif LONG_BIT == 64: + longtype = types.Int64() + unsigned_longtype = types.UInt64() + self.w_longdtype = W_Dtype( + longtype, + num=7, + kind=SIGNEDLTR, + ) + self.w_ulongdtype = W_Dtype( + unsigned_longtype, + num=8, + kind=UNSIGNEDLTR, + ) + self.w_int64dtype = W_Dtype( + types.Int64(), + num=9, + kind=SIGNEDLTR, + ) + self.w_uint64dtype = W_Dtype( + types.UInt64(), + num=10, + kind=UNSIGNEDLTR, + ) + self.w_float32dtype = W_Dtype( + types.Float32(), + num=11, + kind=FLOATINGLTR, + ) + self.w_float64dtype = W_Dtype( + types.Float64(), + num=12, + kind=FLOATINGLTR, + ) - def descr__new__(space, w_subtype, w_dtype): - if space.is_w(w_dtype, space.w_None): - return space.fromcache(W_Float64Dtype) - elif space.isinstance_w(w_dtype, space.w_str): - dtype = space.str_w(w_dtype) - for alias, dtype_class in dtypes_by_alias: - if alias == dtype: - return space.fromcache(dtype_class) - elif isinstance(space.interpclass_w(w_dtype), W_Dtype): - return w_dtype - elif space.isinstance_w(w_dtype, space.w_type): - for typename, dtype_class in dtypes_by_apptype: - if space.is_w(getattr(space, "w_%s" % typename), w_dtype): - return space.fromcache(dtype_class) - raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + self.builtin_dtypes = [ + self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, + self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, + self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, + self.w_int64dtype, self.w_uint64dtype, self.w_float32dtype, + self.w_float64dtype + ] + self.dtypes_by_num_bytes = sorted( + (dtype.itemtype.get_element_size(), dtype) + for dtype in self.builtin_dtypes + ) - def descr_repr(self, space): - return space.wrap("dtype('%s')" % self.name) - - def descr_str(self, space): - return space.wrap(self.name) - - def descr_get_shape(self, space): - return space.newtuple([]) - - -class BaseBox(object): - pass - -VOID_TP = lltype.Ptr(lltype.Array(lltype.Void, hints={'nolength': True, "uncast_on_llgraph": True})) - -def create_low_level_dtype(num, kind, name, aliases, applevel_types, T, valtype, - expected_size=None): - - class Box(BaseBox): - def __init__(self, val): - self.val = val - - def wrap(self, space): - val = self.val - if valtype is rarithmetic.r_singlefloat: - val = float(val) - return space.wrap(val) - - def convert_to(self, dtype): - return dtype.adapt_val(self.val) - Box.__name__ = "%sBox" % T._name - - TP = lltype.Ptr(lltype.Array(T, hints={'nolength': True})) - class W_LowLevelDtype(W_Dtype): - signature = signature.BaseSignature() - - def erase(self, storage): - return rffi.cast(VOID_TP, storage) - - def unerase(self, storage): - return rffi.cast(TP, storage) - - @enforceargs(None, valtype) - def box(self, value): - return Box(value) - - def unbox(self, box): - assert isinstance(box, Box) - return box.val - - def unwrap(self, space, w_item): - raise NotImplementedError - - def malloc(self, size): - # XXX find out why test_zjit explodes with tracking of allocations - return self.erase(lltype.malloc(TP.TO, size, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True - )) - - def getitem(self, storage, i): - return Box(self.unerase(storage)[i]) - - def setitem(self, storage, i, item): - self.unerase(storage)[i] = self.unbox(item) - - def setitem_w(self, space, storage, i, w_item): - self.setitem(storage, i, self.unwrap(space, w_item)) - - def fill(self, storage, item, start, stop): - storage = self.unerase(storage) - item = self.unbox(item) - for i in xrange(start, stop): - storage[i] = item - - @specialize.argtype(1) - def adapt_val(self, val): - return self.box(rffi.cast(TP.TO.OF, val)) - - W_LowLevelDtype.__name__ = "W_%sDtype" % name.capitalize() - W_LowLevelDtype.num = num - W_LowLevelDtype.kind = kind - W_LowLevelDtype.name = name - W_LowLevelDtype.aliases = aliases - W_LowLevelDtype.applevel_types = applevel_types - W_LowLevelDtype.num_bytes = rffi.sizeof(T) - if expected_size is not None: - assert W_LowLevelDtype.num_bytes == expected_size - return W_LowLevelDtype - - -def binop(func): - @functools.wraps(func) - def impl(self, v1, v2): - return self.adapt_val(func(self, - self.for_computation(self.unbox(v1)), - self.for_computation(self.unbox(v2)), - )) - return impl - -def raw_binop(func): - # Returns the result unwrapped. - @functools.wraps(func) - def impl(self, v1, v2): - return func(self, - self.for_computation(self.unbox(v1)), - self.for_computation(self.unbox(v2)) - ) - return impl - -def unaryop(func): - @functools.wraps(func) - def impl(self, v): - return self.adapt_val(func(self, self.for_computation(self.unbox(v)))) - return impl - -class ArithmeticTypeMixin(object): - _mixin_ = True - - @binop - def add(self, v1, v2): - return v1 + v2 - @binop - def sub(self, v1, v2): - return v1 - v2 - @binop - def mul(self, v1, v2): - return v1 * v2 - - @unaryop - def pos(self, v): - return +v - @unaryop - def neg(self, v): - return -v - @unaryop - def abs(self, v): - return abs(v) - - @binop - def max(self, v1, v2): - return max(v1, v2) - @binop - def min(self, v1, v2): - return min(v1, v2) - - def bool(self, v): - return bool(self.for_computation(self.unbox(v))) - @raw_binop - def eq(self, v1, v2): - return v1 == v2 - @raw_binop - def ne(self, v1, v2): - return v1 != v2 - @raw_binop - def lt(self, v1, v2): - return v1 < v2 - @raw_binop - def le(self, v1, v2): - return v1 <= v2 - @raw_binop - def gt(self, v1, v2): - return v1 > v2 - @raw_binop - def ge(self, v1, v2): - return v1 >= v2 - - -class FloatArithmeticDtype(ArithmeticTypeMixin): - _mixin_ = True - - def unwrap(self, space, w_item): - return self.adapt_val(space.float_w(space.float(w_item))) - - def for_computation(self, v): - return float(v) - - def str_format(self, item): - return float2string(self.for_computation(self.unbox(item)), 'g', rfloat.DTSF_STR_PRECISION) - - @binop - def div(self, v1, v2): - try: - return v1 / v2 - except ZeroDivisionError: - if v1 == v2 == 0.0: - return rfloat.NAN - return rfloat.copysign(rfloat.INFINITY, v1 * v2) - @binop - def mod(self, v1, v2): - return math.fmod(v1, v2) - @binop - def pow(self, v1, v2): - return math.pow(v1, v2) - - @unaryop - def sign(self, v): - if v == 0.0: - return 0.0 - return rfloat.copysign(1.0, v) - @unaryop - def reciprocal(self, v): - if v == 0.0: - return rfloat.copysign(rfloat.INFINITY, v) - return 1.0 / v - @unaryop - def fabs(self, v): - return math.fabs(v) - @unaryop - def floor(self, v): - return math.floor(v) - - @binop - def copysign(self, v1, v2): - return math.copysign(v1, v2) - @unaryop - def exp(self, v): - try: - return math.exp(v) - except OverflowError: - return rfloat.INFINITY - @unaryop - def sin(self, v): - return math.sin(v) - @unaryop - def cos(self, v): - return math.cos(v) - @unaryop - def tan(self, v): - return math.tan(v) - @unaryop - def arcsin(self, v): - if not -1.0 <= v <= 1.0: - return rfloat.NAN - return math.asin(v) - @unaryop - def arccos(self, v): - if not -1.0 <= v <= 1.0: - return rfloat.NAN - return math.acos(v) - @unaryop - def arctan(self, v): - return math.atan(v) - @unaryop - def arcsinh(self, v): - return math.asinh(v) - @unaryop - def arctanh(self, v): - if v == 1.0 or v == -1.0: - return math.copysign(rfloat.INFINITY, v) - if not -1.0 < v < 1.0: - return rfloat.NAN - return math.atanh(v) - -class IntegerArithmeticDtype(ArithmeticTypeMixin): - _mixin_ = True - - def unwrap(self, space, w_item): - return self.adapt_val(space.int_w(space.int(w_item))) - - def for_computation(self, v): - return widen(v) - - def str_format(self, item): - return str(widen(self.unbox(item))) - - @binop - def div(self, v1, v2): - if v2 == 0: - return 0 - return v1 / v2 - @binop - def mod(self, v1, v2): - return v1 % v2 - -class SignedIntegerArithmeticDtype(IntegerArithmeticDtype): - _mixin_ = True - - @unaryop - def sign(self, v): - if v > 0: - return 1 - elif v < 0: - return -1 - else: - assert v == 0 - return 0 - -class UnsignedIntegerArithmeticDtype(IntegerArithmeticDtype): - _mixin_ = True - - @unaryop - def sign(self, v): - return int(v != 0) - - -W_BoolDtype = create_low_level_dtype( - num = 0, kind = BOOLLTR, name = "bool", - aliases = ["?", "bool", "bool8"], - applevel_types = ["bool"], - T = lltype.Bool, - valtype = bool, -) -class W_BoolDtype(SignedIntegerArithmeticDtype, W_BoolDtype): - def unwrap(self, space, w_item): - return self.adapt_val(space.is_true(w_item)) - - def str_format(self, item): - v = self.unbox(item) - return "True" if v else "False" - - def for_computation(self, v): - return int(v) - -W_Int8Dtype = create_low_level_dtype( - num = 1, kind = SIGNEDLTR, name = "int8", - aliases = ["b", "int8", "i1"], - applevel_types = [], - T = rffi.SIGNEDCHAR, - valtype = rffi.SIGNEDCHAR._type, - expected_size = 1, -) -class W_Int8Dtype(SignedIntegerArithmeticDtype, W_Int8Dtype): - pass - -W_UInt8Dtype = create_low_level_dtype( - num = 2, kind = UNSIGNEDLTR, name = "uint8", - aliases = ["B", "uint8", "I1"], - applevel_types = [], - T = rffi.UCHAR, - valtype = rffi.UCHAR._type, - expected_size = 1, -) -class W_UInt8Dtype(UnsignedIntegerArithmeticDtype, W_UInt8Dtype): - pass - -W_Int16Dtype = create_low_level_dtype( - num = 3, kind = SIGNEDLTR, name = "int16", - aliases = ["h", "int16", "i2"], - applevel_types = [], - T = rffi.SHORT, - valtype = rffi.SHORT._type, - expected_size = 2, -) -class W_Int16Dtype(SignedIntegerArithmeticDtype, W_Int16Dtype): - pass - -W_UInt16Dtype = create_low_level_dtype( - num = 4, kind = UNSIGNEDLTR, name = "uint16", - aliases = ["H", "uint16", "I2"], - applevel_types = [], - T = rffi.USHORT, - valtype = rffi.USHORT._type, - expected_size = 2, -) -class W_UInt16Dtype(UnsignedIntegerArithmeticDtype, W_UInt16Dtype): - pass - -W_Int32Dtype = create_low_level_dtype( - num = 5, kind = SIGNEDLTR, name = "int32", - aliases = ["i", "int32", "i4"], - applevel_types = [], - T = rffi.INT, - valtype = rffi.INT._type, - expected_size = 4, -) -class W_Int32Dtype(SignedIntegerArithmeticDtype, W_Int32Dtype): - pass - -W_UInt32Dtype = create_low_level_dtype( - num = 6, kind = UNSIGNEDLTR, name = "uint32", - aliases = ["I", "uint32", "I4"], - applevel_types = [], - T = rffi.UINT, - valtype = rffi.UINT._type, - expected_size = 4, -) -class W_UInt32Dtype(UnsignedIntegerArithmeticDtype, W_UInt32Dtype): - pass - -W_Int64Dtype = create_low_level_dtype( - num = 9, kind = SIGNEDLTR, name = "int64", - aliases = ["q", "int64", "i8"], - applevel_types = ["long"], - T = rffi.LONGLONG, - valtype = rffi.LONGLONG._type, - expected_size = 8, -) -class W_Int64Dtype(SignedIntegerArithmeticDtype, W_Int64Dtype): - pass - -W_UInt64Dtype = create_low_level_dtype( - num = 10, kind = UNSIGNEDLTR, name = "uint64", - aliases = ["Q", "uint64", "I8"], - applevel_types = [], - T = rffi.ULONGLONG, - valtype = rffi.ULONGLONG._type, - expected_size = 8, -) -class W_UInt64Dtype(UnsignedIntegerArithmeticDtype, W_UInt64Dtype): - pass - -if LONG_BIT == 32: - long_dtype = W_Int32Dtype - ulong_dtype = W_UInt32Dtype -elif LONG_BIT == 64: - long_dtype = W_Int64Dtype - ulong_dtype = W_UInt64Dtype -else: - assert False - -class W_LongDtype(long_dtype): - num = 7 - aliases = ["l"] - applevel_types = ["int"] - -class W_ULongDtype(ulong_dtype): - num = 8 - aliases = ["L"] - -W_Float32Dtype = create_low_level_dtype( - num = 11, kind = FLOATINGLTR, name = "float32", - aliases = ["f", "float32", "f4"], - applevel_types = [], - T = lltype.SingleFloat, - valtype = rarithmetic.r_singlefloat, - expected_size = 4, -) -class W_Float32Dtype(FloatArithmeticDtype, W_Float32Dtype): - pass - -W_Float64Dtype = create_low_level_dtype( - num = 12, kind = FLOATINGLTR, name = "float64", - aliases = ["d", "float64", "f8"], - applevel_types = ["float"], - T = lltype.Float, - valtype = float, - expected_size = 8, -) -class W_Float64Dtype(FloatArithmeticDtype, W_Float64Dtype): - pass - -ALL_DTYPES = [ - W_BoolDtype, - W_Int8Dtype, W_UInt8Dtype, W_Int16Dtype, W_UInt16Dtype, - W_Int32Dtype, W_UInt32Dtype, W_LongDtype, W_ULongDtype, - W_Int64Dtype, W_UInt64Dtype, - W_Float32Dtype, W_Float64Dtype, -] - -dtypes_by_alias = unrolling_iterable([ - (alias, dtype) - for dtype in ALL_DTYPES - for alias in dtype.aliases -]) -dtypes_by_apptype = unrolling_iterable([ - (apptype, dtype) - for dtype in ALL_DTYPES - for apptype in dtype.applevel_types -]) -dtypes_by_num_bytes = unrolling_iterable(sorted([ - (dtype.num_bytes, dtype) - for dtype in ALL_DTYPES -])) - -W_Dtype.typedef = TypeDef("dtype", - __module__ = "numpy", - __new__ = interp2app(W_Dtype.descr__new__.im_func), - - __repr__ = interp2app(W_Dtype.descr_repr), - __str__ = interp2app(W_Dtype.descr_str), - - num = interp_attrproperty("num", cls=W_Dtype), - kind = interp_attrproperty("kind", cls=W_Dtype), - itemsize = interp_attrproperty("num_bytes", cls=W_Dtype), - shape = GetSetProperty(W_Dtype.descr_get_shape), -) -W_Dtype.typedef.acceptable_as_base_class = False +def get_dtype_cache(space): + return space.fromcache(DtypeCache) \ No newline at end of file diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -31,7 +31,7 @@ arr = SingleDimArray(len(l), dtype=dtype) i = 0 for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) + dtype.setitem(arr.storage, i, dtype.coerce(space, w_elem)) i += 1 return arr @@ -187,7 +187,7 @@ ]) else: nums = [ - dtype.str_format(self.eval(index)) + dtype.itemtype.str_format(self.eval(index)) for index in range(self.find_size()) ] return nums diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,7 +2,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_dtype, signature +from pypy.module.micronumpy import interp_dtype, signature, types from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -148,7 +148,7 @@ return self.func(calc_dtype, w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) - ).wrap(space) + ) new_sig = signature.Signature.find_sig([ self.signature, w_lhs.signature, w_rhs.signature @@ -178,7 +178,7 @@ dt1, dt2 = dt2, dt1 # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): - return space.fromcache(interp_dtype.W_Int8Dtype) + return interp_dtype.get_dtype_cache(space).w_int8dtype if promote_to_float: return find_unaryop_result_dtype(space, dt2, promote_to_float=True) # If they're the same kind, choose the greater one. @@ -221,15 +221,16 @@ def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): - return space.fromcache(interp_dtype.W_Int8Dtype) + return interp_dtype.get_dtype_cache(space).w_int8dtype if promote_to_float: if dt.kind == interp_dtype.FLOATINGLTR: return dt if dt.num >= 5: - return space.fromcache(interp_dtype.W_Float64Dtype) - for bytes, dtype in interp_dtype.dtypes_by_num_bytes: - if dtype.kind == interp_dtype.FLOATINGLTR and dtype.num_bytes > dt.num_bytes: - return space.fromcache(dtype) + return interp_dtype.get_dtype_cache(space).w_float64dtype + for bytes, dtype in interp_dtype.get_dtype_cache(space).dtypes_by_num_bytes: + if (dtype.kind == interp_dtype.FLOATINGLTR and + dtype.itemtype.get_element_size() > dt.itemtype.get_element_size()): + return dtype if promote_to_largest: if dt.kind == interp_dtype.BOOLLTR or dt.kind == interp_dtype.SIGNEDLTR: return space.fromcache(interp_dtype.W_Int64Dtype) @@ -264,12 +265,13 @@ def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func): + assert hasattr(types.BaseType, op_name) if argcount == 1: def impl(res_dtype, value): - return getattr(res_dtype, op_name)(value) + return getattr(res_dtype.itemtype, op_name)(value) elif argcount == 2: def impl(res_dtype, lvalue, rvalue): - res = getattr(res_dtype, op_name)(lvalue, rvalue) + res = getattr(res_dtype.itemtype, op_name)(lvalue, rvalue) if comparison_func: booldtype = space.fromcache(interp_dtype.W_BoolDtype) assert isinstance(booldtype, interp_dtype.W_BoolDtype) @@ -327,7 +329,7 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = space.fromcache(interp_dtype.W_LongDtype).adapt_val(identity) + identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -1,5 +1,5 @@ from pypy.conftest import gettestobjspace -from pypy.module.micronumpy import interp_dtype +from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import SingleDimArray, Scalar from pypy.module.micronumpy.interp_ufuncs import (find_binop_result_dtype, find_unaryop_result_dtype) @@ -11,7 +11,8 @@ class TestSignature(object): def test_binop_signature(self, space): - float64_dtype = space.fromcache(interp_dtype.W_Float64Dtype) + float64_dtype = get_dtype_cache(space).w_float64dtype + bool_dtype = get_dtype_cache(space).w_booldtype ar = SingleDimArray(10, dtype=float64_dtype) v1 = ar.descr_add(space, ar) @@ -22,7 +23,7 @@ v4 = ar.descr_add(space, ar) assert v1.signature is v4.signature - bool_ar = SingleDimArray(10, dtype=space.fromcache(interp_dtype.W_BoolDtype)) + bool_ar = SingleDimArray(10, dtype=bool_dtype) v5 = ar.descr_add(space, bool_ar) assert v5.signature is not v1.signature assert v5.signature is not v2.signature @@ -30,7 +31,9 @@ assert v5.signature is v6.signature def test_slice_signature(self, space): - ar = SingleDimArray(10, dtype=space.fromcache(interp_dtype.W_Float64Dtype)) + float64_dtype = get_dtype_cache(space).w_float64dtype + + ar = SingleDimArray(10, dtype=float64_dtype) v1 = ar.descr_getitem(space, space.wrap(slice(1, 5, 1))) v2 = ar.descr_getitem(space, space.wrap(slice(4, 6, 1))) assert v1.signature is v2.signature @@ -41,10 +44,10 @@ class TestUfuncCoerscion(object): def test_binops(self, space): - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) - int8_dtype = space.fromcache(interp_dtype.W_Int8Dtype) - int32_dtype = space.fromcache(interp_dtype.W_Int32Dtype) - float64_dtype = space.fromcache(interp_dtype.W_Float64Dtype) + bool_dtype = get_dtype_cache(space).w_booldtype + int8_dtype = get_dtype_cache(space).w_int8dtype + int32_dtype = get_dtype_cache(space).w_int32dtype + float64_dtype = get_dtype_cache(space).w_float64dtype # Basic pairing assert find_binop_result_dtype(space, bool_dtype, bool_dtype) is bool_dtype @@ -62,19 +65,19 @@ assert find_binop_result_dtype(space, bool_dtype, float64_dtype, promote_to_float=True) is float64_dtype def test_unaryops(self, space): - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) - int8_dtype = space.fromcache(interp_dtype.W_Int8Dtype) - uint8_dtype = space.fromcache(interp_dtype.W_UInt8Dtype) - int16_dtype = space.fromcache(interp_dtype.W_Int16Dtype) - uint16_dtype = space.fromcache(interp_dtype.W_UInt16Dtype) - int32_dtype = space.fromcache(interp_dtype.W_Int32Dtype) - uint32_dtype = space.fromcache(interp_dtype.W_UInt32Dtype) - long_dtype = space.fromcache(interp_dtype.W_LongDtype) - ulong_dtype = space.fromcache(interp_dtype.W_ULongDtype) - int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - uint64_dtype = space.fromcache(interp_dtype.W_UInt64Dtype) - float32_dtype = space.fromcache(interp_dtype.W_Float32Dtype) - float64_dtype = space.fromcache(interp_dtype.W_Float64Dtype) + bool_dtype = get_dtype_cache(space).w_booldtype + int8_dtype = get_dtype_cache(space).w_int8dtype + uint8_dtype = get_dtype_cache(space).w_uint8dtype + int16_dtype = get_dtype_cache(space).w_int16dtype + uint16_dtype = get_dtype_cache(space).w_uint16dtype + int32_dtype = get_dtype_cache(space).w_int32dtype + uint32_dtype = get_dtype_cache(space).w_uint32dtype + long_dtype = get_dtype_cache(space).w_longdtype + ulong_dtype = get_dtype_cache(space).w_ulongdtype + int64_dtype = get_dtype_cache(space).w_int64dtype + uint64_dtype = get_dtype_cache(space).w_uint64dtype + float32_dtype = get_dtype_cache(space).w_float32dtype + float64_dtype = get_dtype_cache(space).w_float64dtype # Normal rules, everything returns itself assert find_unaryop_result_dtype(space, bool_dtype) is bool_dtype From noreply at buildbot.pypy.org Wed Nov 9 05:26:25 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 05:26:25 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: more updates to code and tests Message-ID: <20111109042625.21C1D820C4@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r48984:b055942a4830 Date: 2011-11-08 23:26 -0500 http://bitbucket.org/pypy/pypy/changeset/b055942a4830/ Log: more updates to code and tests diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -4,7 +4,7 @@ """ from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root -from pypy.module.micronumpy.interp_boxes import W_GenericBox +from pypy.module.micronumpy import interp_boxes from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, descr_new_array, scalar_w, SingleDimArray) @@ -70,8 +70,10 @@ return obj.items def float(self, w_obj): - assert isinstance(w_obj, FloatObject) - return w_obj + if isinstance(w_obj, FloatObject): + return w_obj + assert isinstance(w_obj, interp_boxes.W_FloatingBox) + return FloatObject(w_obj.value) def float_w(self, w_obj): assert isinstance(w_obj, FloatObject) @@ -172,8 +174,8 @@ def execute(self, interp): arr = interp.variables[self.name] - w_index = self.index.execute(interp).eval(0).wrap(interp.space) - w_val = self.expr.execute(interp).eval(0).wrap(interp.space) + w_index = self.index.execute(interp).eval(0) + w_val = self.expr.execute(interp).eval(0) arr.descr_setitem(interp.space, w_index, w_val) def __repr__(self): @@ -210,15 +212,15 @@ w_res = w_lhs.descr_sub(interp.space, w_rhs) elif self.name == '->': if isinstance(w_rhs, Scalar): - index = int(interp.space.float_w( - w_rhs.value)) - dtype = interp.space.fromcache(W_Float64Dtype) + index = int(interp.space.float_w(interp.space.float(w_rhs.value))) + dtype = get_dtype_cache(interp.space).w_float64dtype return Scalar(dtype, w_lhs.get_concrete().eval(index)) else: raise NotImplementedError else: raise NotImplementedError - if not isinstance(w_res, BaseArray) and not isinstance(w_res, W_GenericBox): + if (not isinstance(w_res, BaseArray) and + not isinstance(w_res, interp_boxes.W_GenericBox)): dtype = interp.space.fromcache(W_Float64Dtype) w_res = scalar_w(interp.space, dtype, w_res) return w_res @@ -246,8 +248,9 @@ def execute(self, interp): w_list = interp.space.newlist( - [interp.space.wrap(float(i)) for i in range(self.v)]) - dtype = interp.space.fromcache(W_Float64Dtype) + [interp.space.wrap(float(i)) for i in range(self.v)] + ) + dtype = get_dtype_cache(interp.space).w_float64dtype return descr_new_array(interp.space, None, w_list, w_dtype=dtype) def __repr__(self): @@ -331,6 +334,8 @@ dtype = interp.space.fromcache(W_Float64Dtype) elif isinstance(w_res, BoolObject): dtype = interp.space.fromcache(W_BoolDtype) + elif isinstance(w_res, interp_boxes.W_GenericBox): + dtype = w_res.descr_get_dtype(interp.space) else: dtype = None return scalar_w(interp.space, dtype, w_res) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -309,7 +309,7 @@ return scalar_w(space, dtype, w_obj) def scalar_w(space, dtype, w_obj): - return Scalar(dtype, dtype.unwrap(space, w_obj)) + return Scalar(dtype, dtype.coerce(space, w_obj)) class Scalar(BaseArray): """ diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -74,7 +74,7 @@ new_sig = signature.Signature.find_sig([ self.reduce_signature, obj.signature ]) - return self.reduce(new_sig, start, value, obj, dtype, size).wrap(space) + return self.reduce(new_sig, start, value, obj, dtype, size) def reduce(self, signature, start, value, obj, dtype, size): i = start @@ -235,7 +235,7 @@ if dt.kind == interp_dtype.BOOLLTR or dt.kind == interp_dtype.SIGNEDLTR: return space.fromcache(interp_dtype.W_Int64Dtype) elif dt.kind == interp_dtype.FLOATINGLTR: - return space.fromcache(interp_dtype.W_Float64Dtype) + return interp_dtype.get_dtype_cache(space).w_float64dtype elif dt.kind == interp_dtype.UNSIGNEDLTR: return space.fromcache(interp_dtype.W_UInt64Dtype) else: diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -5,7 +5,7 @@ class TestCompiler(object): def compile(self, code): return numpy_compile(code) - + def test_vars(self): code = """ a = 2 @@ -25,7 +25,7 @@ st = interp.code.statements[0] assert st.expr.items == [FloatConstant(1), FloatConstant(2), FloatConstant(3)] - + def test_array_literal2(self): code = "a = [[1],[2],[3]]" interp = self.compile(code) @@ -114,15 +114,15 @@ a + b -> 3 """ interp = self.run(code) - assert interp.results[0].value.val == 3 + 6 - + assert interp.results[0].value.value == 3 + 6 + def test_range_getitem(self): code = """ r = |20| + 3 r -> 3 """ interp = self.run(code) - assert interp.results[0].value.val == 6 + assert interp.results[0].value.value == 6 def test_sum(self): code = """ @@ -131,7 +131,7 @@ r """ interp = self.run(code) - assert interp.results[0].value.val == 15 + assert interp.results[0].value.value == 15 def test_array_write(self): code = """ From noreply at buildbot.pypy.org Wed Nov 9 09:48:45 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Wed, 9 Nov 2011 09:48:45 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix test_escape_encode (thanks amaury) Message-ID: <20111109084845.9B71D8292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r48985:7275e7c2a49a Date: 2011-11-09 00:46 -0800 http://bitbucket.org/pypy/pypy/changeset/7275e7c2a49a/ Log: fix test_escape_encode (thanks amaury) diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py --- a/pypy/module/_codecs/interp_codecs.py +++ b/pypy/module/_codecs/interp_codecs.py @@ -733,12 +733,8 @@ @unwrap_spec(data="bufferstr", errors='str_or_None') def escape_encode(space, data, errors='strict'): from pypy.objspace.std.stringobject import string_escape_encode - result = string_escape_encode(data, quote="'") - start = 1 - end = len(result) - 1 - assert end >= 0 - w_result = space.wrapbytes(result[start:end]) - return space.newtuple([w_result, space.wrap(len(data))]) + result = string_escape_encode(data, False) + return space.newtuple([space.wrapbytes(result), space.wrap(len(data))]) @unwrap_spec(data="bufferstr", errors='str_or_None') def escape_decode(space, data, errors='strict'): diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -887,20 +887,19 @@ return space.newtuple([wrapstr(space, w_str._value)]) def repr__String(space, w_str): - s = w_str._value + return space.wrap(string_escape_encode(w_str._value, True)) + +def string_escape_encode(s, quotes): + buf = StringBuilder(len(s) + 3 if quotes else 0) quote = "'" - if quote in s and '"' not in s: - quote = '"' + if quotes: + if quote in s and '"' not in s: + quote = '"' + buf.append('b"') + else: + buf.append("b'") - return space.wrap(string_escape_encode(s, quote)) - -def string_escape_encode(s, quote): - - buf = StringBuilder(len(s) + 3) - - buf.append('b') - buf.append(quote) startslice = 0 for i in range(len(s)): @@ -938,7 +937,8 @@ if len(s) != startslice: buf.append_slice(s, startslice, len(s)) - buf.append(quote) + if quotes: + buf.append(quote) return buf.build() From noreply at buildbot.pypy.org Wed Nov 9 10:52:35 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 10:52:35 +0100 (CET) Subject: [pypy-commit] pypy default: skip the test in progress Message-ID: <20111109095235.B4F1E8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48986:f4506e827118 Date: 2011-11-09 10:52 +0100 http://bitbucket.org/pypy/pypy/changeset/f4506e827118/ Log: skip the test in progress diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5000,6 +5000,7 @@ self.optimize_loop(ops, expected) def test_known_equal_ints(self): + py.test.skip("in-progress") ops = """ [i0, i1, i2, p0] i3 = int_eq(i0, i1) From noreply at buildbot.pypy.org Wed Nov 9 10:54:48 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 10:54:48 +0100 (CET) Subject: [pypy-commit] pypy default: oups. Message-ID: <20111109095448.3CE978292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48987:8ca9a7426505 Date: 2011-11-09 10:54 +0100 http://bitbucket.org/pypy/pypy/changeset/8ca9a7426505/ Log: oups. diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -622,9 +622,9 @@ else: mk.definition('DEBUGFLAGS', '-O1 -g') if sys.platform == 'win32': - mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)') + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') else: - mk.rule('debug_target', '$(TARGET)') + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , From noreply at buildbot.pypy.org Wed Nov 9 11:48:41 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 11:48:41 +0100 (CET) Subject: [pypy-commit] pypy default: (antocuni, arigo) Message-ID: <20111109104841.06B058292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48988:c7616f7a871d Date: 2011-11-09 11:48 +0100 http://bitbucket.org/pypy/pypy/changeset/c7616f7a871d/ Log: (antocuni, arigo) Tentatively fix on Windows: the default calling convention for the JIT should be CDECL (=1), not STDCALL (=0). Kill default arguments: explicit is better than implicit (and usually wrong). diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -445,7 +449,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() From noreply at buildbot.pypy.org Wed Nov 9 12:31:22 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 9 Nov 2011 12:31:22 +0100 (CET) Subject: [pypy-commit] pyrepl default: adapt encopyright to hg Message-ID: <20111109113122.D58528292E@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r159:74eb359b4292 Date: 2011-11-09 12:30 +0100 http://bitbucket.org/pypy/pyrepl/changeset/74eb359b4292/ Log: adapt encopyright to hg diff --git a/encopyright.py b/encopyright.py --- a/encopyright.py +++ b/encopyright.py @@ -20,11 +20,10 @@ # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. import os, time, sys -import bzrlib.branch -import bzrlib.log +import py header_template = """\ -# Copyright 2000-%s Michael Hudson-Doyle %s +# Copyright 2000-%(lastyear)s Michael Hudson-Doyle %(others)s # # All Rights Reserved # @@ -46,64 +45,69 @@ author_template = "\n#%s%%s"%(' '*(header_template.index("Michael")+1),) -branch, path = bzrlib.branch.Branch.open_containing(sys.argv[0]) -rev_tree = branch.basis_tree() -branch.lock_read() -def process(thing): - if os.path.isdir(thing): - for subthing in os.listdir(thing): - process(os.path.join(thing, subthing)) - elif os.path.isfile(thing): - if thing[-3:] == '.py': - process_file(thing) - else: - print "W `%s' not file or directory"%(thing,) author_map = { u'mwh': None, + u'micahel': None, u'Michael Hudson ': None, u'arigo': u"Armin Rigo", u'antocuni': u'Antonio Cuni', + u'anto': u'Antonio Cuni', u'bob': u'Bob Ippolito', u'fijal': u'Maciek Fijalkowski', u'agaynor': u'Alex Gaynor', u'hpk': u'Holger Krekel', + u'Ronny': u'Ronny Pfannschmidt', + u'amauryfa': u"Amaury Forgeot d'Arc", } -def process_file(file): - ilines = open(file).readlines() - file_id = rev_tree.path2id(file) - rev_ids = [rev_id for (revno, rev_id, what) - in bzrlib.log.find_touching_revisions(branch, file_id)] - revs = branch.repository.get_revisions(rev_ids) - revs = sorted(revs, key=lambda x:x.timestamp) - modified_year = None - for rev in reversed(revs): - if 'encopyright' not in rev.message: - modified_year = time.gmtime(rev.timestamp)[0] - break + +def author_revs(path): + proc = py.std.subprocess.Popen([ + 'hg','log', str(path), + '--template', '{author|user} {date}\n', + '-r', 'not keyword("encopyright")', + ], stdout=py.std.subprocess.PIPE) + output, _ = proc.communicate() + lines = output.splitlines() + for line in lines: + try: + name, date = line.split(None, 1) + except ValueError: + pass + else: + if '-' in date: + date = date.split('-')[0] + yield name, float(date) + + +def process(path): + ilines = path.readlines() + revs = sorted(author_revs(path), key=lambda x:x[1]) + modified_year = time.gmtime(revs[-1][1])[0] if not modified_year: - print 'E: no sensible modified_year found for %s' % file, + print 'E: no sensible modified_year found for', path modified_year = time.gmtime(time.time())[0] - authors = set() - for rev in revs: - authors.update(rev.get_apparent_authors()) extra_authors = [] + authors = set(rev[0] for rev in revs) for a in authors: if a not in author_map: - print 'E: need real name for %r' % a + print 'E: need real name for', a ea = author_map.get(a) if ea: extra_authors.append(ea) extra_authors.sort() - header = header_template % (modified_year, ''.join([author_template%ea for ea in extra_authors])) + header = header_template % { + 'lastyear': modified_year, + 'others': ''.join([author_template%ea for ea in extra_authors]) + } header_lines = header.splitlines() prelines = [] old_copyright = [] if not ilines: - print "W ignoring empty file `%s'"%(file,) + print "W ignoring empty file", path return i = 0 @@ -123,8 +127,8 @@ if abs(len(old_copyright) - len(header_lines)) < 2 + len(extra_authors): for x, y in zip(old_copyright, header_lines): if x[:-1] != y: - print "C change needed in", file - ofile = open(file, "w") + print "C change needed in", path + ofile = path.open("w") for l in prelines: ofile.write(l) ofile.write(header + "\n") @@ -133,17 +137,21 @@ ofile.close() break else: - print "M no change needed in", file + print "M no change needed in", path else: print "A no (c) in", file - ofile = open(file, "w") - for l in prelines: - ofile.write(l) - ofile.write(header + "\n\n") - for l in ilines[len(prelines):]: - ofile.write(l) - ofile.close() - + with path.open("w") as ofile: + for l in prelines: + ofile.write(l) + ofile.write(header + "\n\n") + for l in ilines[len(prelines):]: + ofile.write(l) + for thing in sys.argv[1:]: - process(thing) + path = py.path.local(thing) + if path.check(dir=1): + for item in path.visit('*.py'): + process(item) + elif path.check(file=1, ext='py'): + process(path) From noreply at buildbot.pypy.org Wed Nov 9 12:36:32 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 9 Nov 2011 12:36:32 +0100 (CET) Subject: [pypy-commit] pyrepl default: cherry pick over tox.ini and hgignore from py3ksupport branch Message-ID: <20111109113632.7FA218292E@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r160:4c4f19046887 Date: 2011-11-09 12:35 +0100 http://bitbucket.org/pypy/pyrepl/changeset/4c4f19046887/ Log: cherry pick over tox.ini and hgignore from py3ksupport branch diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -1,3 +1,4 @@ dist/ build/ +\.tox/ .*\.egg-info diff --git a/tox.ini b/tox.ini new file mode 100644 --- /dev/null +++ b/tox.ini @@ -0,0 +1,9 @@ +[tox] +envlist= py27, py32 + +[testenv] +deps= + pytest + pexpect +commands= + py.test --junitxml={envdir}/junit.xml [] From noreply at buildbot.pypy.org Wed Nov 9 12:36:33 2011 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 9 Nov 2011 12:36:33 +0100 (CET) Subject: [pypy-commit] pyrepl py3ksupport: merge default Message-ID: <20111109113633.8C01D8292E@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r161:9234e4d1b551 Date: 2011-11-09 12:36 +0100 http://bitbucket.org/pypy/pyrepl/changeset/9234e4d1b551/ Log: merge default diff --git a/encopyright.py b/encopyright.py --- a/encopyright.py +++ b/encopyright.py @@ -20,11 +20,10 @@ # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. import os, time, sys -import bzrlib.branch -import bzrlib.log +import py header_template = """\ -# Copyright 2000-%s Michael Hudson-Doyle %s +# Copyright 2000-%(lastyear)s Michael Hudson-Doyle %(others)s # # All Rights Reserved # @@ -46,64 +45,69 @@ author_template = "\n#%s%%s"%(' '*(header_template.index("Michael")+1),) -branch, path = bzrlib.branch.Branch.open_containing(sys.argv[0]) -rev_tree = branch.basis_tree() -branch.lock_read() -def process(thing): - if os.path.isdir(thing): - for subthing in os.listdir(thing): - process(os.path.join(thing, subthing)) - elif os.path.isfile(thing): - if thing[-3:] == '.py': - process_file(thing) - else: - print "W `%s' not file or directory"%(thing,) author_map = { u'mwh': None, + u'micahel': None, u'Michael Hudson ': None, u'arigo': u"Armin Rigo", u'antocuni': u'Antonio Cuni', + u'anto': u'Antonio Cuni', u'bob': u'Bob Ippolito', u'fijal': u'Maciek Fijalkowski', u'agaynor': u'Alex Gaynor', u'hpk': u'Holger Krekel', + u'Ronny': u'Ronny Pfannschmidt', + u'amauryfa': u"Amaury Forgeot d'Arc", } -def process_file(file): - ilines = open(file).readlines() - file_id = rev_tree.path2id(file) - rev_ids = [rev_id for (revno, rev_id, what) - in bzrlib.log.find_touching_revisions(branch, file_id)] - revs = branch.repository.get_revisions(rev_ids) - revs = sorted(revs, key=lambda x:x.timestamp) - modified_year = None - for rev in reversed(revs): - if 'encopyright' not in rev.message: - modified_year = time.gmtime(rev.timestamp)[0] - break + +def author_revs(path): + proc = py.std.subprocess.Popen([ + 'hg','log', str(path), + '--template', '{author|user} {date}\n', + '-r', 'not keyword("encopyright")', + ], stdout=py.std.subprocess.PIPE) + output, _ = proc.communicate() + lines = output.splitlines() + for line in lines: + try: + name, date = line.split(None, 1) + except ValueError: + pass + else: + if '-' in date: + date = date.split('-')[0] + yield name, float(date) + + +def process(path): + ilines = path.readlines() + revs = sorted(author_revs(path), key=lambda x:x[1]) + modified_year = time.gmtime(revs[-1][1])[0] if not modified_year: - print 'E: no sensible modified_year found for %s' % file, + print 'E: no sensible modified_year found for', path modified_year = time.gmtime(time.time())[0] - authors = set() - for rev in revs: - authors.update(rev.get_apparent_authors()) extra_authors = [] + authors = set(rev[0] for rev in revs) for a in authors: if a not in author_map: - print 'E: need real name for %r' % a + print 'E: need real name for', a ea = author_map.get(a) if ea: extra_authors.append(ea) extra_authors.sort() - header = header_template % (modified_year, ''.join([author_template%ea for ea in extra_authors])) + header = header_template % { + 'lastyear': modified_year, + 'others': ''.join([author_template%ea for ea in extra_authors]) + } header_lines = header.splitlines() prelines = [] old_copyright = [] if not ilines: - print "W ignoring empty file `%s'"%(file,) + print "W ignoring empty file", path return i = 0 @@ -123,8 +127,8 @@ if abs(len(old_copyright) - len(header_lines)) < 2 + len(extra_authors): for x, y in zip(old_copyright, header_lines): if x[:-1] != y: - print "C change needed in", file - ofile = open(file, "w") + print "C change needed in", path + ofile = path.open("w") for l in prelines: ofile.write(l) ofile.write(header + "\n") @@ -133,17 +137,21 @@ ofile.close() break else: - print "M no change needed in", file + print "M no change needed in", path else: print "A no (c) in", file - ofile = open(file, "w") - for l in prelines: - ofile.write(l) - ofile.write(header + "\n\n") - for l in ilines[len(prelines):]: - ofile.write(l) - ofile.close() - + with path.open("w") as ofile: + for l in prelines: + ofile.write(l) + ofile.write(header + "\n\n") + for l in ilines[len(prelines):]: + ofile.write(l) + for thing in sys.argv[1:]: - process(thing) + path = py.path.local(thing) + if path.check(dir=1): + for item in path.visit('*.py'): + process(item) + elif path.check(file=1, ext='py'): + process(path) From noreply at buildbot.pypy.org Wed Nov 9 12:37:58 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 12:37:58 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed ovfcheck, which needs to skip symbolics. This caused 50 or more gs tests to fail Message-ID: <20111109113758.9FAAD8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r48989:8a0ade5786bf Date: 2011-11-09 12:37 +0100 http://bitbucket.org/pypy/pypy/changeset/8a0ade5786bf/ Log: fixed ovfcheck, which needs to skip symbolics. This caused 50 or more gs tests to fail diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -146,7 +146,9 @@ assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" assert not isinstance(r, r_longlong), "ovfcheck not supported on r_longlong" assert not isinstance(r, r_ulonglong), "ovfcheck not supported on r_ulonglong" - if not is_valid_int(r): + if type(r) is long and not is_valid_int(r): + # the type check is needed to make this chek skip symbolics. + # this happens in the garbage collector. raise OverflowError, "signed integer expression did overflow" return r From noreply at buildbot.pypy.org Wed Nov 9 13:37:50 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 13:37:50 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Improve the checking: kills values that are not explicitly given Message-ID: <20111109123750.843148292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48990:afc8cfdd9b68 Date: 2011-11-09 13:37 +0100 http://bitbucket.org/pypy/pypy/changeset/afc8cfdd9b68/ Log: Improve the checking: kills values that are not explicitly given as argument. diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -640,8 +640,14 @@ return _op_default_implementation def op_label(self, _, *args): - pass - + op = self.loop.operations[self.opindex] + assert op.opnum == rop.LABEL + assert len(op.args) == len(args) + newenv = {} + for v, value in zip(op.args, args): + newenv[v] = value + self.env = newenv + def op_debug_merge_point(self, _, *args): from pypy.jit.metainterp.warmspot import get_stats try: From noreply at buildbot.pypy.org Wed Nov 9 13:37:51 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 13:37:51 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fix runner_test. Message-ID: <20111109123751.C62278292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48991:3eba23e52e42 Date: 2011-11-09 13:37 +0100 http://bitbucket.org/pypy/pypy/changeset/3eba23e52e42/ Log: Fix runner_test. diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3,7 +3,7 @@ AbstractDescr, BasicFailDescr, BoxInt, Box, BoxPtr, - LoopToken, TargetToken, + JitCellToken, TargetToken, ConstInt, ConstPtr, BoxObj, ConstObj, BoxFloat, ConstFloat) @@ -32,7 +32,7 @@ result_type, valueboxes, descr) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) j = 0 for box in inputargs: @@ -106,7 +106,7 @@ ResOperation(rop.FINISH, [i1], None, descr=BasicFailDescr(1)) ] inputargs = [i0] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -118,15 +118,17 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -139,18 +141,22 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + i3 = BoxInt() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.INT_SUB, [i3, ConstInt(42)], i0), + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] - inputargs = [i0] - operations[2].setfailargs([None, None, i1, None]) + inputargs = [i3] + operations[4].setfailargs([None, None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) - self.cpu.set_future_value_int(0, 2) + self.cpu.set_future_value_int(0, 44) fail = self.cpu.execute_token(looptoken) assert fail.identifier == 2 res = self.cpu.get_latest_value_int(2) @@ -162,15 +168,17 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr()), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) wr_i1 = weakref.ref(i1) wr_guard = weakref.ref(operations[2]) self.cpu.compile_loop(inputargs, operations, looptoken) @@ -190,15 +198,17 @@ i2 = BoxInt() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) i1b = BoxInt() @@ -206,7 +216,7 @@ bridge = [ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -226,17 +236,21 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() + i3 = BoxInt() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.INT_SUB, [i3, ConstInt(42)], i0), + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] - inputargs = [i0] - operations[2].setfailargs([None, i1, None]) + inputargs = [i3] + operations[4].setfailargs([None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) i1b = BoxInt() @@ -244,7 +258,7 @@ bridge = [ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -261,15 +275,17 @@ i1 = BoxInt() i2 = BoxInt() faildescr1 = BasicFailDescr(1) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([None, i1, None]) + operations[3].setfailargs([None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -290,7 +306,7 @@ return AbstractFailDescr.__setattr__(self, name, value) py.test.fail("finish descrs should not be touched") faildescr = UntouchableFailDescr() # to check that is not touched - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [i0], None, descr=faildescr) ] @@ -301,7 +317,7 @@ res = self.cpu.get_latest_value_int(0) assert res == 99 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [ConstInt(42)], None, descr=faildescr) ] @@ -311,7 +327,7 @@ res = self.cpu.get_latest_value_int(0) assert res == 42 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [], None, descr=faildescr) ] @@ -320,7 +336,7 @@ assert fail is faildescr if self.cpu.supports_floats: - looptoken = LoopToken() + looptoken = JitCellToken() f0 = BoxFloat() operations = [ ResOperation(rop.FINISH, [f0], None, descr=faildescr) @@ -333,7 +349,7 @@ res = self.cpu.get_latest_value_float(0) assert longlong.getrealfloat(res) == -61.25 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [constfloat(42.5)], None, descr=faildescr) ] @@ -350,14 +366,16 @@ z = BoxInt(579) t = BoxInt(455) u = BoxInt(0) # False - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [y, x], None, descr=targettoken), ResOperation(rop.INT_ADD, [x, y], z), ResOperation(rop.INT_SUB, [y, ConstInt(1)], t), ResOperation(rop.INT_EQ, [t, ConstInt(0)], u), ResOperation(rop.GUARD_FALSE, [u], None, descr=BasicFailDescr()), - ResOperation(rop.JUMP, [z, t], None, descr=looptoken), + ResOperation(rop.JUMP, [t, z], None, descr=targettoken), ] operations[-2].setfailargs([t, z]) cpu.compile_loop([x, y], operations, looptoken) @@ -419,7 +437,7 @@ ] ops[1].setfailargs([v_res]) # - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([v1, v2], ops, looptoken) for x, y, z in testcases: excvalue = self.cpu.grab_exc_value() @@ -1082,16 +1100,18 @@ inputargs.insert(index_counter, i0) jumpargs.insert(index_counter, i1) # - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() faildescr = BasicFailDescr(15) operations = [ + ResOperation(rop.LABEL, inputargs, None, descr=targettoken), ResOperation(rop.INT_SUB, [i0, ConstInt(1)], i1), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i2), ResOperation(rop.GUARD_TRUE, [i2], None), - ResOperation(rop.JUMP, jumpargs, None, descr=looptoken), + ResOperation(rop.JUMP, jumpargs, None, descr=targettoken), ] - operations[2].setfailargs(inputargs[:]) - operations[2].setdescr(faildescr) + operations[3].setfailargs(inputargs[:]) + operations[3].setdescr(faildescr) # self.cpu.compile_loop(inputargs, operations, looptoken) # @@ -1149,22 +1169,24 @@ py.test.skip("requires floats") fboxes = [BoxFloat() for i in range(12)] i2 = BoxInt() + targettoken = TargetToken() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) operations = [ + ResOperation(rop.LABEL, fboxes, None, descr=targettoken), ResOperation(rop.FLOAT_LE, [fboxes[0], constfloat(9.2)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), ResOperation(rop.FINISH, fboxes, None, descr=faildescr2), ] operations[-2].setfailargs(fboxes) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(fboxes, operations, looptoken) fboxes2 = [BoxFloat() for i in range(12)] f3 = BoxFloat() bridge = [ ResOperation(rop.FLOAT_SUB, [fboxes2[0], constfloat(1.0)], f3), - ResOperation(rop.JUMP, [f3] + fboxes2[1:], None, descr=looptoken), + ResOperation(rop.JUMP, [f3]+fboxes2[1:], None, descr=targettoken), ] self.cpu.compile_bridge(faildescr1, fboxes2, bridge, looptoken) @@ -1214,7 +1236,7 @@ ResOperation(rop.FINISH, [], None, descr=faildescr2), ] operations[-2].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # cpu = self.cpu @@ -1271,7 +1293,7 @@ ResOperation(rop.FINISH, [], None, descr=faildescr2), ] operations[-2].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # cpu = self.cpu @@ -1330,7 +1352,7 @@ faildescr = BasicFailDescr(1) operations.append(ResOperation(rop.FINISH, [], None, descr=faildescr)) - looptoken = LoopToken() + looptoken = JitCellToken() # self.cpu.compile_loop(inputargs, operations, looptoken) # @@ -1400,7 +1422,7 @@ ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(5))] operations[1].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() # Use "set" to unique-ify inputargs unique_testcase_list = list(set(testcase)) self.cpu.compile_loop(unique_testcase_list, operations, @@ -1675,15 +1697,16 @@ exc_tp = xtp exc_ptr = xptr loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 0 assert self.cpu.get_latest_value_ref(1) == xptr excvalue = self.cpu.grab_exc_value() assert not excvalue self.cpu.set_future_value_int(0, 0) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert not excvalue @@ -1700,9 +1723,10 @@ exc_tp = ytp exc_ptr = yptr loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert excvalue == yptr @@ -1718,14 +1742,15 @@ finish(0) ''' loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert excvalue == xptr self.cpu.set_future_value_int(0, 0) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 0 excvalue = self.cpu.grab_exc_value() assert not excvalue @@ -1895,7 +1920,7 @@ ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -1940,7 +1965,7 @@ ResOperation(rop.FINISH, [i2], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, i2, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -1986,7 +2011,7 @@ ResOperation(rop.FINISH, [f2], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, f2, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -2031,7 +2056,7 @@ ResOperation(rop.FINISH, [i2], None, descr=BasicFailDescr(0)) ] ops[1].setfailargs([i1, i2]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1], ops, looptoken) self.cpu.set_future_value_int(0, ord('G')) fail = self.cpu.execute_token(looptoken) @@ -2091,7 +2116,7 @@ ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(0)) ] ops[1].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1, i2, i3], ops, looptoken) self.cpu.set_future_value_int(0, rffi.cast(lltype.Signed, raw)) self.cpu.set_future_value_int(1, 2) @@ -2147,7 +2172,7 @@ ops += [ ResOperation(rop.FINISH, [i3], None, descr=BasicFailDescr(0)) ] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1, i2], ops, looptoken) buffer = lltype.malloc(rffi.CCHARP.TO, buflen, flavor='raw') @@ -2169,7 +2194,7 @@ ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(0)) ] ops[0].setfailargs([i1]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, -42) @@ -2415,7 +2440,7 @@ i18 = int_add(i17, i9) finish(i18)''' loop = parse(ops) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) ARGS = [lltype.Signed] * 10 @@ -2435,7 +2460,7 @@ finish(i11) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) for i in range(10): self.cpu.set_future_value_int(i, i+1) @@ -2471,7 +2496,7 @@ finish(f2)''' loop = parse(ops) done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) @@ -2486,7 +2511,7 @@ finish(f3) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) self.cpu.set_future_value_float(1, longlong.getfloatstorage(3.2)) @@ -2499,7 +2524,7 @@ del called[:] self.cpu.done_with_this_frame_float_v = done_number try: - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) self.cpu.set_future_value_float(1, longlong.getfloatstorage(3.2)) @@ -2561,7 +2586,7 @@ f2 = float_add(f0, f1) finish(f2)''' loop = parse(ops) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.25)) @@ -2578,7 +2603,7 @@ finish(f3) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) # normal call_assembler: goes to looptoken @@ -2596,7 +2621,7 @@ f2 = float_sub(f0, f1) finish(f2)''' loop = parse(ops) - looptoken2 = LoopToken() + looptoken2 = JitCellToken() looptoken2.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken2) @@ -2958,7 +2983,7 @@ ResOperation(rop.FINISH, [p0], None, descr=BasicFailDescr(1)) ] inputargs = [i0] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # overflowing value: self.cpu.set_future_value_int(0, sys.maxint // 4 + 1) @@ -2970,21 +2995,23 @@ i1 = BoxInt() i2 = BoxInt() i3 = BoxInt() - looptoken = LoopToken() - targettoken = TargetToken(None) + looptoken = JitCellToken() + targettoken1 = TargetToken() + targettoken2 = TargetToken() faildescr = BasicFailDescr(2) operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken1), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr), - ResOperation(rop.LABEL, [i1], None, descr=targettoken), + ResOperation(rop.LABEL, [i1], None, descr=targettoken2), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=BasicFailDescr(3)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken1), ] inputargs = [i0] - operations[2].setfailargs([i1]) - operations[5].setfailargs([i1]) + operations[3].setfailargs([i1]) + operations[6].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -2996,7 +3023,7 @@ inputargs = [i0] operations = [ ResOperation(rop.INT_SUB, [i0, ConstInt(20)], i2), - ResOperation(rop.JUMP, [i2], None, descr=targettoken), + ResOperation(rop.JUMP, [i2], None, descr=targettoken2), ] self.cpu.compile_bridge(faildescr, inputargs, operations, looptoken) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -723,9 +723,8 @@ # ____________________________________________________________ -# The TreeLoop class contains a loop or a generalized loop, i.e. a tree -# of operations. Each branch ends in a jump which can go either to -# the top of the same loop, or to another TreeLoop; or it ends in a FINISH. +# The JitCellToken class is the root of a tree of traces. Each branch ends +# in a jump which goes to a LABEL operation; or it ends in a FINISH. class JitCellToken(AbstractDescr): """Used for rop.JUMP, giving the target of the jump. @@ -766,7 +765,7 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self, targeting_jitcell_token): + def __init__(self, targeting_jitcell_token=None): # The jitcell to which jumps might result in a jump to this label self.targeting_jitcell_token = targeting_jitcell_token From noreply at buildbot.pypy.org Wed Nov 9 13:45:56 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 13:45:56 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fix. Message-ID: <20111109124556.0E8DF8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r48992:5477569cf46c Date: 2011-11-09 13:43 +0100 http://bitbucket.org/pypy/pypy/changeset/5477569cf46c/ Log: Fix. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -2,8 +2,8 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt -from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT, - LoopToken) +from pypy.jit.metainterp.history import AbstractFailDescr, INT, REF, FLOAT +from pypy.jit.metainterp.history import JitCellToken from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper @@ -424,8 +424,6 @@ _x86_loop_code (an integer giving an address) _x86_bootstrap_code (an integer giving an address) _x86_direct_bootstrap_code ( " " " " ) - _x86_frame_depth - _x86_param_depth _x86_arglocs _x86_debug_checksum ''' @@ -455,12 +453,11 @@ stackadjustpos = self._assemble_bootstrap_code(inputargs, arglocs) looppos = self.mc.get_relative_pos() looptoken._x86_loop_code = looppos - self.target_tokens_currently_compiling[looptoken] = None - looptoken._x86_frame_depth = -1 # temporarily - looptoken._x86_param_depth = -1 # temporarily + clt.frame_depth = -1 # temporarily + clt.param_depth = -1 # temporarily frame_depth, param_depth = self._assemble(regalloc, operations) - looptoken._x86_frame_depth = frame_depth - looptoken._x86_param_depth = param_depth + clt.frame_depth = frame_depth + clt.param_depth = param_depth directbootstrappos = self.mc.get_relative_pos() self._assemble_bootstrap_direct_call(arglocs, looppos, @@ -670,8 +667,8 @@ faildescr._x86_adr_jump_offset = 0 # means "patched" def fixup_target_tokens(self, rawstart): - for looptoken in self.target_tokens_currently_compiling: - looptoken._x86_loop_code += rawstart + for targettoken in self.target_tokens_currently_compiling: + targettoken._x86_loop_code += rawstart self.target_tokens_currently_compiling = None @specialize.argtype(1) @@ -703,8 +700,8 @@ param_depth = regalloc.param_depth jump_target_descr = regalloc.jump_target_descr if jump_target_descr is not None: - target_frame_depth = jump_target_descr._x86_frame_depth - target_param_depth = jump_target_descr._x86_param_depth + target_frame_depth = jump_target_descr._x86_clt.frame_depth + target_param_depth = jump_target_descr._x86_clt.param_depth frame_depth = max(frame_depth, target_frame_depth) param_depth = max(param_depth, target_param_depth) return frame_depth, param_depth @@ -2344,7 +2341,7 @@ fail_index = self.cpu.get_fail_descr_number(faildescr) self.mc.MOV_bi(FORCE_INDEX_OFS, fail_index) descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) assert len(arglocs) - 2 == len(descr._x86_arglocs[0]) # # Write a call to the direct_bootstrap_code of the target assembler @@ -2578,12 +2575,9 @@ gcrootmap.put(self.gcrootmap_retaddr_forced, mark) self.gcrootmap_retaddr_forced = -1 - def target_arglocs(self, loop_token): - return loop_token._x86_arglocs - - def closing_jump(self, loop_token): - target = loop_token._x86_loop_code - if loop_token in self.target_tokens_currently_compiling: + def closing_jump(self, target_token): + target = target_token._x86_loop_code + if target_token in self.target_tokens_currently_compiling: curpos = self.mc.get_relative_pos() + 5 self.mc.JMP_l(target - curpos) else: diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,8 +5,8 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, LoopToken, INT, REF, FLOAT, - TargetToken) + BoxFloat, INT, REF, FLOAT, + TargetToken, JitCellToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated @@ -884,7 +884,7 @@ def consider_call_assembler(self, op, guard_op): descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) jd = descr.outermost_jitdriver_sd assert jd is not None size = jd.portal_calldescr.get_result_size(self.translate_support_code) @@ -1314,8 +1314,8 @@ assembler = self.assembler assert self.jump_target_descr is None descr = op.getdescr() - assert isinstance(descr, (LoopToken, TargetToken)) # XXX refactor! - nonfloatlocs, floatlocs = assembler.target_arglocs(descr) + assert isinstance(descr, TargetToken) + nonfloatlocs, floatlocs = descr._x86_arglocs self.jump_target_descr = descr # compute 'tmploc' to be all_regs[0] by spilling what is there box = TempBox() @@ -1406,8 +1406,7 @@ nonfloatlocs[i] = loc descr._x86_arglocs = nonfloatlocs, floatlocs descr._x86_loop_code = self.assembler.mc.get_relative_pos() - descr._x86_frame_depth = self.fm.frame_depth - descr._x86_param_depth = self.param_depth + descr._x86_clt = self.assembler.current_clt self.assembler.target_tokens_currently_compiling[descr] = None def not_implemented_op(self, op): diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -215,14 +215,3 @@ super(CPU_X86_64, self).__init__(*args, **kwargs) CPU = CPU386 - -# silence warnings -##history.LoopToken._x86_param_depth = 0 -##history.LoopToken._x86_arglocs = (None, None) -##history.LoopToken._x86_frame_depth = 0 -##history.LoopToken._x86_bootstrap_code = 0 -##history.LoopToken._x86_direct_bootstrap_code = 0 -##history.LoopToken._x86_loop_code = 0 -##history.LoopToken._x86_debug_checksum = 0 -##compile.AbstractFailDescr._x86_current_depths = (0, 0) -##compile.AbstractFailDescr._x86_adr_jump_offset = 0 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -1,7 +1,7 @@ import py from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rstr, rclass from pypy.rpython.annlowlevel import llhelper -from pypy.jit.metainterp.history import ResOperation, LoopToken +from pypy.jit.metainterp.history import ResOperation, JitCellToken from pypy.jit.metainterp.history import (BoxInt, BoxPtr, ConstInt, ConstFloat, ConstPtr, Box, BoxFloat, BasicFailDescr) from pypy.jit.backend.detect_cpu import getcpuclass @@ -279,7 +279,7 @@ descr=BasicFailDescr()), ] ops[-2].setfailargs([i1]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([b], ops, looptoken) if op == rop.INT_IS_TRUE: self.cpu.set_future_value_int(0, b.value) @@ -329,7 +329,7 @@ ] ops[-2].setfailargs([i1]) inputargs = [i for i in (a, b) if isinstance(i, Box)] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, ops, looptoken) for i, box in enumerate(inputargs): self.cpu.set_future_value_int(i, box.value) @@ -353,9 +353,10 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() + targettoken = TargetToken() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.number = 17 class FakeString(object): def __init__(self, val): @@ -365,14 +366,15 @@ return self.val operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.DEBUG_MERGE_POINT, [FakeString("hello"), 0], None), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[3].setfailargs([i1]) + operations[-2].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) name, loopaddress, loopsize = agent.functions[0] assert name == "Loop # 17: hello (loop counter 0)" @@ -385,7 +387,7 @@ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), ResOperation(rop.DEBUG_MERGE_POINT, [FakeString("bye"), 0], None), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -408,11 +410,13 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] debug._log = dlog = debug.DebugLog() @@ -496,7 +500,7 @@ ops[3].setfailargs([]) ops[5].setfailargs([]) ops[7].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1, i2], ops, looptoken) self.cpu.set_future_value_int(0, 123450) From noreply at buildbot.pypy.org Wed Nov 9 13:45:57 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 13:45:57 +0100 (CET) Subject: [pypy-commit] pypy default: Kill ovfcheck_lshift(), which was only needed before Python 2.4. Message-ID: <20111109124557.4075B8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r48993:d12fc92b04cc Date: 2011-11-09 13:45 +0100 http://bitbucket.org/pypy/pypy/changeset/d12fc92b04cc/ Log: Kill ovfcheck_lshift(), which was only needed before Python 2.4. diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Wed Nov 9 13:51:04 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:04 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: move this test at the end, after the ones which directly operate on StructDescr Message-ID: <20111109125104.8AD008292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48994:128dbcd93861 Date: 2011-11-08 10:46 +0100 http://bitbucket.org/pypy/pypy/changeset/128dbcd93861/ Log: move this test at the end, after the ones which directly operate on StructDescr diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -27,22 +27,6 @@ assert descr.ffitype.sizeof() == longsize*2 assert descr.ffitype.name == 'struct foo' - def test_compute_shape(self): - from _ffi import Structure, Field, types - class Point(Structure): - _fields_ = [ - Field('x', types.slong), - Field('y', types.slong), - ] - - longsize = types.slong.sizeof() - assert isinstance(Point.x, Field) - assert isinstance(Point.y, Field) - assert Point.x.offset == 0 - assert Point.y.offset == longsize - assert Point._struct_.ffitype.sizeof() == longsize*2 - assert Point._struct_.ffitype.name == 'struct Point' - def test_getfield_setfield(self): from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() @@ -70,3 +54,20 @@ struct = descr.allocate() raises(AttributeError, "struct.getfield('missing')") raises(AttributeError, "struct.setfield('missing', 42)") + + def test_compute_shape(self): + from _ffi import Structure, Field, types + class Point(Structure): + _fields_ = [ + Field('x', types.slong), + Field('y', types.slong), + ] + + longsize = types.slong.sizeof() + assert isinstance(Point.x, Field) + assert isinstance(Point.y, Field) + assert Point.x.offset == 0 + assert Point.y.offset == longsize + assert Point._struct_.ffitype.sizeof() == longsize*2 + assert Point._struct_.ffitype.name == 'struct Point' + From noreply at buildbot.pypy.org Wed Nov 9 13:51:05 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:05 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: small refactor, and add a failing test Message-ID: <20111109125105.B3D048292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48995:0d79a558d6fc Date: 2011-11-08 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/0d79a558d6fc/ Log: small refactor, and add a failing test diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -62,21 +62,26 @@ @unwrap_spec(name=str) def descr_new_structdescr(space, w_type, name, w_fields): + fields_w = space.fixedview(w_fields) + # note that the fields_w returned by compute_size_and_alignemnt has a + # different annotation than the original: list(W_Root) vs list(W_Field) + size, alignment, fields_w = compute_size_and_alignemnt(space, fields_w) + field_types = [] # clibffi's types + for w_field in fields_w: + field_types.append(w_field.w_ffitype.ffitype) + ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) + return W__StructDescr(space, name, fields_w, ffistruct) + +def compute_size_and_alignemnt(space, fields_w): size = 0 alignment = 0 # XXX - fields_w = space.fixedview(w_fields) - fields_w2 = [] # its items are annotated as W_Field - field_types = [] + fields_w2 = [] for w_field in fields_w: w_field = space.interp_w(W_Field, w_field) w_field.offset = size # XXX: alignment! size += w_field.w_ffitype.sizeof() fields_w2.append(w_field) - field_types.append(w_field.w_ffitype.ffitype) - # - ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) - return W__StructDescr(space, name, fields_w2, ffistruct) - + return size, alignment, fields_w2 W__StructDescr.typedef = TypeDef( diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -27,6 +27,19 @@ assert descr.ffitype.sizeof() == longsize*2 assert descr.ffitype.name == 'struct foo' + def test_alignment(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.sbyte), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + assert descr.ffitype.sizeof() == longsize*2 + assert fields[0].offset == 0 + assert fields[1].offset == longsize # aligned to WORD + + def test_getfield_setfield(self): from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() From noreply at buildbot.pypy.org Wed Nov 9 13:51:06 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:06 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: copy the logic to cope with field alignment from _rawffi, the failing test now passes Message-ID: <20111109125106.DD2C38292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48996:fd6938fabf7b Date: 2011-11-08 13:00 +0100 http://bitbucket.org/pypy/pypy/changeset/fd6938fabf7b/ Log: copy the logic to cope with field alignment from _rawffi, the failing test now passes diff --git a/pypy/module/_ffi/interp_ffitype.py b/pypy/module/_ffi/interp_ffitype.py --- a/pypy/module/_ffi/interp_ffitype.py +++ b/pypy/module/_ffi/interp_ffitype.py @@ -28,6 +28,9 @@ def sizeof(self): return intmask(self.ffitype.c_size) + def get_alignment(self): + return intmask(self.ffitype.c_alignment) + def repr(self, space): return space.wrap(self.__repr__()) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -63,27 +63,35 @@ @unwrap_spec(name=str) def descr_new_structdescr(space, w_type, name, w_fields): fields_w = space.fixedview(w_fields) - # note that the fields_w returned by compute_size_and_alignemnt has a + # note that the fields_w returned by compute_size_and_alignement has a # different annotation than the original: list(W_Root) vs list(W_Field) - size, alignment, fields_w = compute_size_and_alignemnt(space, fields_w) + size, alignment, fields_w = compute_size_and_alignement(space, fields_w) field_types = [] # clibffi's types for w_field in fields_w: field_types.append(w_field.w_ffitype.ffitype) ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) return W__StructDescr(space, name, fields_w, ffistruct) -def compute_size_and_alignemnt(space, fields_w): +def round_up(size, alignment): + return (size + alignment - 1) & -alignment + +def compute_size_and_alignement(space, fields_w): size = 0 - alignment = 0 # XXX + alignment = 1 fields_w2 = [] for w_field in fields_w: w_field = space.interp_w(W_Field, w_field) - w_field.offset = size # XXX: alignment! - size += w_field.w_ffitype.sizeof() + fieldsize = w_field.w_ffitype.sizeof() + fieldalignment = w_field.w_ffitype.get_alignment() + alignment = max(alignment, fieldalignment) + size = round_up(size, fieldalignment) + w_field.offset = size + size += fieldsize fields_w2.append(w_field) return size, alignment, fields_w2 + W__StructDescr.typedef = TypeDef( '_StructDescr', __new__ = interp2app(descr_new_structdescr), diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -1,4 +1,5 @@ from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI + class AppTestStruct(BaseAppTestFFI): From noreply at buildbot.pypy.org Wed Nov 9 13:51:08 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:08 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add unit tests for compute_size_and_alignment; the last ones fails and are commented out for now Message-ID: <20111109125108.1051B8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48997:9bd55ee3776f Date: 2011-11-08 13:19 +0100 http://bitbucket.org/pypy/pypy/changeset/9bd55ee3776f/ Log: add unit tests for compute_size_and_alignment; the last ones fails and are commented out for now diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -1,5 +1,40 @@ from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI - +from pypy.module._ffi.interp_struct import compute_size_and_alignement, W_Field +from pypy.module._ffi.interp_ffitype import app_types + +class TestComputeSizeAndAlignement(object): + + class FakeSpace(object): + def interp_w(self, cls, obj): + return obj + + def compute(self, ffitypes_w): + fields_w = [W_Field('', w_ffitype) for + w_ffitype in ffitypes_w] + return compute_size_and_alignement(self.FakeSpace(), fields_w) + + def sizeof(self, ffitypes_w): + size, aligned, fields_w = self.compute(ffitypes_w) + return size + + def test_compute_size(self): + T = app_types + byte_size = app_types.sbyte.sizeof() + long_size = app_types.slong.sizeof() + llong_size = app_types.slonglong.sizeof() + llong_align = app_types.slonglong.get_alignment() + # + assert llong_align >= 4 + assert self.sizeof([T.sbyte, T.slong]) == 2*long_size + assert self.sizeof([T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + ## assert self.sizeof([T.slonglong, T.sbyte]) == llong_size + llong_align + ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte]) == llong_size + llong_align + ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + class AppTestStruct(BaseAppTestFFI): From noreply at buildbot.pypy.org Wed Nov 9 13:51:09 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:09 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: adjust the total size according to the alignment: this makes more tests passing Message-ID: <20111109125109.3E8168292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48998:f73603fe9a7d Date: 2011-11-08 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/f73603fe9a7d/ Log: adjust the total size according to the alignment: this makes more tests passing diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -16,6 +16,9 @@ self.w_ffitype = w_ffitype self.offset = -1 + def __repr__(self): + return '' % (self.name, self.w_ffitype.name) + @unwrap_spec(name=str) def descr_new_field(space, w_type, name, w_ffitype): w_ffitype = space.interp_w(W_FFIType, w_ffitype) @@ -88,6 +91,8 @@ w_field.offset = size size += fieldsize fields_w2.append(w_field) + # + size = round_up(size, alignment) return size, alignment, fields_w2 diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -30,10 +30,10 @@ assert self.sizeof([T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size - ## assert self.sizeof([T.slonglong, T.sbyte]) == llong_size + llong_align - ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte]) == llong_size + llong_align - ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align - ## assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align class AppTestStruct(BaseAppTestFFI): From noreply at buildbot.pypy.org Wed Nov 9 13:51:10 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:10 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: build the TYPE_MAP dictionaries but preserves the list: this is because we want to avoid key-clashing (e.g., on 32bit rffi.UINT: ffi_type_sint is overwritten by rffi.LONG, because _signed_type_for(LONG) returns ffi_type_sint32) Message-ID: <20111109125110.66F3E8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r48999:e9fe7124b7c6 Date: 2011-11-08 18:46 +0100 http://bitbucket.org/pypy/pypy/changeset/e9fe7124b7c6/ Log: build the TYPE_MAP dictionaries but preserves the list: this is because we want to avoid key-clashing (e.g., on 32bit rffi.UINT: ffi_type_sint is overwritten by rffi.LONG, because _signed_type_for(LONG) returns ffi_type_sint32) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -210,38 +210,42 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP_INT = { - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.CHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), -} + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] -TYPE_MAP_FLOAT = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - } +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] -TYPE_MAP = { - lltype.Void : ffi_type_void, - } -TYPE_MAP.update(TYPE_MAP_INT) -TYPE_MAP.update(TYPE_MAP_FLOAT) +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] -ffitype_map_int = unrolling_iterable(TYPE_MAP_INT.iteritems()) -ffitype_map_float = unrolling_iterable(TYPE_MAP_FLOAT.iteritems()) -ffitype_map = unrolling_iterable(TYPE_MAP.iteritems()) +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __type_map def external(name, args, result, **kwds): From noreply at buildbot.pypy.org Wed Nov 9 13:51:11 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:11 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add support for getting/setting signed values other than long Message-ID: <20111109125111.901948292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49000:175c37225dc8 Date: 2011-11-08 18:49 +0100 http://bitbucket.org/pypy/pypy/changeset/175c37225dc8/ Log: add support for getting/setting signed values other than long diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -129,18 +129,15 @@ @unwrap_spec(name=str) def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - assert w_ffitype is app_types.slong # XXX: handle all cases - FIELD_TYPE = rffi.LONG - # value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) return space.wrap(value) @unwrap_spec(name=str) def setfield(self, space, name, w_value): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - assert w_ffitype is app_types.slong # XXX: handle all cases - FIELD_TYPE = rffi.LONG - value = space.int_w(w_value) + # XXX: add support for long long + if w_ffitype.is_signed() or w_ffitype.is_unsigned(): + value = rffi.cast(rffi.LONG, space.uint_w(w_value)) # libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -75,7 +75,6 @@ assert fields[0].offset == 0 assert fields[1].offset == longsize # aligned to WORD - def test_getfield_setfield(self): from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() @@ -104,6 +103,40 @@ raises(AttributeError, "struct.getfield('missing')") raises(AttributeError, "struct.setfield('missing', 42)") + def test_getfield_setfield(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('x', 42) + struct.setfield('y', 43) + assert struct.getfield('x') == 42 + assert struct.getfield('y') == 43 + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 2) + assert mem == [42, 43] + + def test_getfield_setfield_signed_types(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('sbyte', types.sbyte), + Field('sint', types.sint), + Field('slong', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('sbyte', 42) + struct.setfield('sint', 43) + struct.setfield('slong', 44) + assert struct.getfield('sbyte') == 42 + assert struct.getfield('sint') == 43 + assert struct.getfield('slong') == 44 + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 13:51:12 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:12 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: make sure that we properly convert a sbyte >= 128 into a negative value when we set it. This requires to change the TYPE_MAP_INT in clibffi.py, which was wrong before (rffi.CHAR is unsigned, not signed) Message-ID: <20111109125112.B8B238292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49001:aa53c87e4cdf Date: 2011-11-09 10:00 +0100 http://bitbucket.org/pypy/pypy/changeset/aa53c87e4cdf/ Log: make sure that we properly convert a sbyte >= 128 into a negative value when we set it. This requires to change the TYPE_MAP_INT in clibffi.py, which was wrong before (rffi.CHAR is unsigned, not signed) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -130,10 +130,10 @@ ] descr = _StructDescr('foo', fields) struct = descr.allocate() - struct.setfield('sbyte', 42) + struct.setfield('sbyte', 128) struct.setfield('sint', 43) struct.setfield('slong', 44) - assert struct.getfield('sbyte') == 42 + assert struct.getfield('sbyte') == -128 assert struct.getfield('sint') == 43 assert struct.getfield('slong') == 44 diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -212,7 +212,7 @@ __int_type_map = [ (rffi.UCHAR, ffi_type_uchar), - (rffi.CHAR, ffi_type_schar), + (rffi.SIGNEDCHAR, ffi_type_schar), (rffi.SHORT, ffi_type_sshort), (rffi.USHORT, ffi_type_ushort), (rffi.UINT, ffi_type_uint), From noreply at buildbot.pypy.org Wed Nov 9 13:51:13 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:13 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: make sure that we correctly handle the app-level-long to interp-level-slong conversion Message-ID: <20111109125113.E08738292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49002:879029fb8b50 Date: 2011-11-09 10:02 +0100 http://bitbucket.org/pypy/pypy/changeset/879029fb8b50/ Log: make sure that we correctly handle the app-level-long to interp- level-slong conversion diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -132,10 +132,10 @@ struct = descr.allocate() struct.setfield('sbyte', 128) struct.setfield('sint', 43) - struct.setfield('slong', 44) + struct.setfield('slong', sys.maxint+1) assert struct.getfield('sbyte') == -128 assert struct.getfield('sint') == 43 - assert struct.getfield('slong') == 44 + assert struct.getfield('slong') == -sys.maxint-1 def test_compute_shape(self): from _ffi import Structure, Field, types From noreply at buildbot.pypy.org Wed Nov 9 13:51:15 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:15 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: (antocuni, arigo around): correctly truncate all the values to a Signed" Message-ID: <20111109125115.19DEE8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49003:3d6add2cfe84 Date: 2011-11-09 12:14 +0100 http://bitbucket.org/pypy/pypy/changeset/3d6add2cfe84/ Log: (antocuni, arigo around): correctly truncate all the values to a Signed" diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -137,12 +137,10 @@ w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) # XXX: add support for long long if w_ffitype.is_signed() or w_ffitype.is_unsigned(): - value = rffi.cast(rffi.LONG, space.uint_w(w_value)) + value = space.truncatedint(w_value) # libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) - - W__StructInstance.typedef = TypeDef( '_StructInstance', getaddr = interp2app(W__StructInstance.getaddr), diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -1,8 +1,11 @@ +import sys +from pypy.conftest import gettestobjspace from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI from pypy.module._ffi.interp_struct import compute_size_and_alignement, W_Field from pypy.module._ffi.interp_ffitype import app_types -class TestComputeSizeAndAlignement(object): + +class TestStruct(object): class FakeSpace(object): def interp_w(self, cls, obj): @@ -35,6 +38,15 @@ assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + def test_truncatedint(self): + space = gettestobjspace() + assert space.truncatedint(space.wrap(42)) == 42 + assert space.truncatedint(space.wrap(sys.maxint)) == sys.maxint + assert space.truncatedint(space.wrap(sys.maxint+1)) == -sys.maxint-1 + assert space.truncatedint(space.wrap(-1)) == -1 + assert space.truncatedint(space.wrap(-sys.maxint-2)) == sys.maxint + + class AppTestStruct(BaseAppTestFFI): @@ -131,12 +143,15 @@ descr = _StructDescr('foo', fields) struct = descr.allocate() struct.setfield('sbyte', 128) + assert struct.getfield('sbyte') == -128 struct.setfield('sint', 43) + assert struct.getfield('sint') == 43 struct.setfield('slong', sys.maxint+1) - assert struct.getfield('sbyte') == -128 - assert struct.getfield('sint') == 43 assert struct.getfield('slong') == -sys.maxint-1 + struct.setfield('slong', sys.maxint*3) + assert struct.getfield('slong') == sys.maxint-2 + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 13:51:16 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:16 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add a test for shorts Message-ID: <20111109125116.41A408292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49004:b1f919c81753 Date: 2011-11-09 12:33 +0100 http://bitbucket.org/pypy/pypy/changeset/b1f919c81753/ Log: add a test for shorts diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -137,6 +137,7 @@ longsize = types.slong.sizeof() fields = [ Field('sbyte', types.sbyte), + Field('sshort', types.sshort), Field('sint', types.sint), Field('slong', types.slong), ] @@ -144,13 +145,14 @@ struct = descr.allocate() struct.setfield('sbyte', 128) assert struct.getfield('sbyte') == -128 + struct.setfield('sshort', 32768) + assert struct.getfield('sshort') == -32768 struct.setfield('sint', 43) assert struct.getfield('sint') == 43 struct.setfield('slong', sys.maxint+1) assert struct.getfield('slong') == -sys.maxint-1 struct.setfield('slong', sys.maxint*3) assert struct.getfield('slong') == sys.maxint-2 - def test_compute_shape(self): from _ffi import Structure, Field, types From noreply at buildbot.pypy.org Wed Nov 9 13:51:17 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:17 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add support and tests for unsigned types Message-ID: <20111109125117.69B668292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49005:25ce0a707991 Date: 2011-11-09 12:40 +0100 http://bitbucket.org/pypy/pypy/changeset/25ce0a707991/ Log: add support and tests for unsigned types diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -2,6 +2,7 @@ from pypy.rlib import clibffi from pypy.rlib import libffi from pypy.rlib import jit +from pypy.rlib.rarithmetic import r_uint from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.interpreter.gateway import interp2app, unwrap_spec @@ -130,6 +131,8 @@ def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) + if w_ffitype.is_unsigned(): + return space.wrap(r_uint(value)) return space.wrap(value) @unwrap_spec(name=str) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -153,7 +153,30 @@ assert struct.getfield('slong') == -sys.maxint-1 struct.setfield('slong', sys.maxint*3) assert struct.getfield('slong') == sys.maxint-2 - + + def test_getfield_setfield_unsigned_types(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('ubyte', types.ubyte), + Field('ushort', types.ushort), + Field('uint', types.uint), + Field('ulong', types.ulong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('ubyte', -1) + assert struct.getfield('ubyte') == 255 + struct.setfield('ushort', -1) + assert struct.getfield('ushort') == 65535 + struct.setfield('uint', 43) + assert struct.getfield('uint') == 43 + struct.setfield('ulong', -1) + assert struct.getfield('ulong') == sys.maxint*2 + 1 + struct.setfield('ulong', sys.maxint*2 + 2) + assert struct.getfield('ulong') == 0 + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 13:51:18 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:18 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add low-level support to get/set (u)longlong fields in libffi Message-ID: <20111109125118.939BC8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49006:07587050b13c Date: 2011-11-09 13:16 +0100 http://bitbucket.org/pypy/pypy/changeset/07587050b13c/ Log: add low-level support to get/set (u)longlong fields in libffi diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -441,6 +441,24 @@ assert False, "cannot find the given ffitype" + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_longlong(ffitype, addr, offset): + """ + Return the field of type ``ffitype`` at ``addr+offset``, casted to + lltype.LongLong. + """ + value = _struct_getfield(lltype.SignedLongLong, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_longlong(ffitype, addr, offset, value): + """ + Set the field of type ``ffitype`` at ``addr+offset``. ``value`` is of + type lltype.LongLong + """ + _struct_setfield(lltype.SignedLongLong, addr, offset, value) + + @specialize.arg(0) def _struct_getfield(TYPE, addr, offset): """ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -5,7 +5,8 @@ from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT, struct_getfield_int, struct_setfield_int +from pypy.rlib.libffi import (IS_32_BIT, struct_getfield_int, struct_setfield_int, + struct_getfield_longlong, struct_setfield_longlong) class TestLibffiMisc(BaseFfiTest): @@ -72,7 +73,28 @@ assert p.y == -2 # lltype.free(p, flavor='raw') - + + def test_struct_fields_longlong(self): + POINT = lltype.Struct('POINT', + ('x', rffi.LONGLONG), + ('y', rffi.ULONGLONG) + ) + y_ofs = 8 + p = lltype.malloc(POINT, flavor='raw') + p.x = r_longlong(123) + p.y = r_ulonglong(456) + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_longlong(types.slonglong, addr, 0) == 123 + assert struct_getfield_longlong(types.ulonglong, addr, y_ofs) == 456 + # + v = rffi.cast(lltype.SignedLongLong, r_ulonglong(9223372036854775808)) + struct_setfield_longlong(types.slonglong, addr, 0, v) + struct_setfield_longlong(types.ulonglong, addr, y_ofs, r_longlong(-1)) + assert p.x == -9223372036854775808 + assert rffi.cast(lltype.UnsignedLongLong, p.y) == 18446744073709551615 + # + lltype.free(p, flavor='raw') + class TestLibffiCall(BaseFfiTest): """ From noreply at buildbot.pypy.org Wed Nov 9 13:51:19 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:19 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add a new space method to truncate longlongs, similar to space.truncatedint Message-ID: <20111109125119.BF7198292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49007:ceb799c62245 Date: 2011-11-09 13:39 +0100 http://bitbucket.org/pypy/pypy/changeset/ceb799c62245/ Log: add a new space method to truncate longlongs, similar to space.truncatedint diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1298,6 +1298,17 @@ from pypy.rlib.rarithmetic import intmask return intmask(self.bigint_w(w_obj).uintmask()) + def truncatedlonglong_w(self, w_obj): + # Like space.gateway_r_longlong_w(), but return the integer truncated + # instead of raising OverflowError. + try: + return self.r_longlong_w(w_obj) + except OperationError, e: + if not e.match(self, self.w_OverflowError): + raise + from pypy.rlib.rarithmetic import longlongmask + return longlongmask(self.bigint_w(w_obj).ulonglongmask()) + def c_filedescriptor_w(self, w_fd): # This is only used sometimes in CPython, e.g. for os.fsync() but # not os.close(). It's likely designed for 'select'. It's irregular diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -213,6 +213,29 @@ w_obj = space.wrap(-12) space.raises_w(space.w_ValueError, space.r_ulonglong_w, w_obj) + def test_truncatedlonglong_w(self): + space = self.space + w_value = space.wrap(12) + res = space.truncatedlonglong_w(w_value) + assert res == 12 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(9223372036854775808)) + res = space.truncatedlonglong_w(w_value) + assert res == -9223372036854775808 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(18446744073709551615)) + res = space.truncatedlonglong_w(w_value) + assert res == -1 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(18446744073709551616)) + res = space.truncatedlonglong_w(w_value) + assert res == 0 + assert type(res) is r_longlong + + def test_call_obj_args(self): from pypy.interpreter.argument import Arguments From noreply at buildbot.pypy.org Wed Nov 9 13:51:20 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:20 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add support for longlongs at applevel Message-ID: <20111109125120.E805C8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49008:4e792ca0116c Date: 2011-11-09 13:49 +0100 http://bitbucket.org/pypy/pypy/changeset/4e792ca0116c/ Log: add support for longlongs at applevel diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -2,7 +2,7 @@ from pypy.rlib import clibffi from pypy.rlib import libffi from pypy.rlib import jit -from pypy.rlib.rarithmetic import r_uint +from pypy.rlib.rarithmetic import r_uint, r_ulonglong from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.interpreter.gateway import interp2app, unwrap_spec @@ -130,19 +130,34 @@ @unwrap_spec(name=str) def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) - if w_ffitype.is_unsigned(): - return space.wrap(r_uint(value)) - return space.wrap(value) + if w_ffitype.is_longlong(): + value = libffi.struct_getfield_longlong(w_ffitype.ffitype, self.rawmem, offset) + if w_ffitype is app_types.ulonglong: + return space.wrap(r_ulonglong(value)) + return space.wrap(value) + # + if w_ffitype.is_signed() or w_ffitype.is_unsigned(): + value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) + if w_ffitype.is_unsigned(): + return space.wrap(r_uint(value)) + return space.wrap(value) + # + assert False, 'unknown type' @unwrap_spec(name=str) def setfield(self, space, name, w_value): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - # XXX: add support for long long + if w_ffitype.is_longlong(): + value = space.truncatedlonglong_w(w_value) + libffi.struct_setfield_longlong(w_ffitype.ffitype, self.rawmem, offset, value) + return + # if w_ffitype.is_signed() or w_ffitype.is_unsigned(): value = space.truncatedint(w_value) + libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) + return # - libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) + assert False, 'unknown type' W__StructInstance.typedef = TypeDef( '_StructInstance', diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -177,6 +177,23 @@ struct.setfield('ulong', sys.maxint*2 + 2) assert struct.getfield('ulong') == 0 + def test_getfield_setfield_longlong(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('slonglong', types.slonglong), + Field('ulonglong', types.ulonglong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('slonglong', 9223372036854775808) + assert struct.getfield('slonglong') == -9223372036854775808 + struct.setfield('ulonglong', -1) + assert struct.getfield('ulonglong') == 18446744073709551615 + mem = self.read_raw_mem(struct.getaddr(), 'c_longlong', 2) + assert mem == [-9223372036854775808, -1] + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 13:51:22 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 13:51:22 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add an XXX so that I hopefully don't forget this :-) Message-ID: <20111109125122.1B48F8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49009:f2c743343892 Date: 2011-11-09 13:50 +0100 http://bitbucket.org/pypy/pypy/changeset/f2c743343892/ Log: add an XXX so that I hopefully don't forget this :-) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -119,6 +119,7 @@ zero=True, add_memory_pressure=True) def __del__(self): + # XXX: check whether I can turn this into a lightweight destructor if self.rawmem: lltype.free(self.rawmem, flavor='raw') self.rawmem = lltype.nullptr(rffi.VOIDP.TO) From noreply at buildbot.pypy.org Wed Nov 9 14:38:05 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 14:38:05 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed memorylayout of the GC for win64, format characters Message-ID: <20111109133805.0A21C8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49010:305bded94dfb Date: 2011-11-09 13:49 +0100 http://bitbucket.org/pypy/pypy/changeset/305bded94dfb/ Log: fixed memorylayout of the GC for win64, format characters diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -445,7 +449,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5000,6 +5000,7 @@ self.optimize_loop(ops, expected) def test_known_equal_ints(self): + py.test.skip("in-progress") ops = """ [i0, i1, i2, p0] i3 = int_eq(i0, i1) diff --git a/pypy/rpython/memory/lltypelayout.py b/pypy/rpython/memory/lltypelayout.py --- a/pypy/rpython/memory/lltypelayout.py +++ b/pypy/rpython/memory/lltypelayout.py @@ -1,4 +1,5 @@ from pypy.rpython.lltypesystem import lltype, llmemory, llarena +from pypy.rlib.rarithmetic import is_emulated_long import struct @@ -12,7 +13,11 @@ lltype.Float: "d", llmemory.Address: "P", } - +if is_emulated_long: + primitive_to_fmt.update( { + lltype.Signed: "q", + lltype.Unsigned: "Q", + } ) #___________________________________________________________________________ # Utility functions that know about the memory layout of the lltypes diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -622,9 +622,9 @@ else: mk.definition('DEBUGFLAGS', '-O1 -g') if sys.platform == 'win32': - mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)') + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') else: - mk.rule('debug_target', '$(TARGET)') + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , From noreply at buildbot.pypy.org Wed Nov 9 14:38:06 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 14:38:06 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111109133806.3CEDD8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49011:5353e7a1dade Date: 2011-11-09 14:37 +0100 http://bitbucket.org/pypy/pypy/changeset/5353e7a1dade/ Log: merge diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -152,19 +149,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - # if isinstance(r, long): - if abs(r) > sys.maxint: - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1038,7 +1038,7 @@ assert isinstance(x, (int, long)) assert isinstance(y, (int, long)) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Wed Nov 9 14:48:31 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 14:48:31 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fix. Message-ID: <20111109134831.6F7158292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r49012:c7eb819ab63b Date: 2011-11-09 13:52 +0100 http://bitbucket.org/pypy/pypy/changeset/c7eb819ab63b/ Log: Fix. diff --git a/pypy/jit/backend/test/calling_convention_test.py b/pypy/jit/backend/test/calling_convention_test.py --- a/pypy/jit/backend/test/calling_convention_test.py +++ b/pypy/jit/backend/test/calling_convention_test.py @@ -2,7 +2,7 @@ AbstractDescr, BasicFailDescr, BoxInt, Box, BoxPtr, - LoopToken, + JitCellToken, ConstInt, ConstPtr, BoxObj, Const, ConstObj, BoxFloat, ConstFloat) @@ -107,7 +107,7 @@ ops += 'finish(f99, %s)\n' % arguments loop = parse(ops, namespace=locals()) - looptoken = LoopToken() + looptoken = JitCellToken() done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) expected_result = self._prepare_args(args, floats, ints) @@ -253,7 +253,7 @@ called_ops += 'finish(f%d, descr=fdescr3)\n' % total_index # compile called loop called_loop = parse(called_ops, namespace=locals()) - called_looptoken = LoopToken() + called_looptoken = JitCellToken() called_looptoken.outermost_jitdriver_sd = FakeJitDriverSD() done_number = self.cpu.get_fail_descr_number(called_loop.operations[-1].getdescr()) self.cpu.compile_loop(called_loop.inputargs, called_loop.operations, called_looptoken) @@ -284,7 +284,7 @@ # we want to take the fast path self.cpu.done_with_this_frame_float_v = done_number try: - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) # prepare call to called_loop From noreply at buildbot.pypy.org Wed Nov 9 14:48:32 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 14:48:32 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Cannot attach the LoopToken by default on a jump(), because we Message-ID: <20111109134832.984B88292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r49013:e41703ed7368 Date: 2011-11-09 13:53 +0100 http://bitbucket.org/pypy/pypy/changeset/e41703ed7368/ Log: Cannot attach the LoopToken by default on a jump(), because we would need to have a TargetToken instead. Fall back to no descr, which is enough for some tests. diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -241,9 +241,9 @@ if opnum == rop.FINISH: if descr is None and self.invent_fail_descr: descr = self.invent_fail_descr(self.model, fail_args) - elif opnum == rop.JUMP: - if descr is None and self.invent_fail_descr: - descr = self.celltoken +## elif opnum == rop.JUMP: +## if descr is None and self.invent_fail_descr: +## ... return opnum, args, descr, fail_args def create_op(self, opnum, args, result, descr): From noreply at buildbot.pypy.org Wed Nov 9 14:48:33 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 14:48:33 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fixes. Message-ID: <20111109134833.C89958292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r49014:b68ac3a19cec Date: 2011-11-09 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/b68ac3a19cec/ Log: Fixes. diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1408,6 +1408,7 @@ descr._x86_loop_code = self.assembler.mc.get_relative_pos() descr._x86_clt = self.assembler.current_clt self.assembler.target_tokens_currently_compiling[descr] = None + self.possibly_free_vars_for_op(op) def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/backend/x86/test/test_regalloc.py b/pypy/jit/backend/x86/test/test_regalloc.py --- a/pypy/jit/backend/x86/test/test_regalloc.py +++ b/pypy/jit/backend/x86/test/test_regalloc.py @@ -4,7 +4,7 @@ import py from pypy.jit.metainterp.history import BoxInt, ConstInt,\ - BoxPtr, ConstPtr, LoopToken, BasicFailDescr + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.backend.llsupport.descr import GcCache from pypy.jit.backend.detect_cpu import getcpuclass @@ -96,6 +96,8 @@ raising_calldescr = cpu.calldescrof(FPTR.TO, FPTR.TO.ARGS, FPTR.TO.RESULT, EffectInfo.MOST_GENERAL) + targettoken = TargetToken() + targettoken2 = TargetToken() fdescr1 = BasicFailDescr(1) fdescr2 = BasicFailDescr(2) fdescr3 = BasicFailDescr(3) @@ -134,7 +136,8 @@ def interpret(self, ops, args, run=True): loop = self.parse(ops) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) for i, arg in enumerate(args): if isinstance(arg, int): self.cpu.set_future_value_int(i, arg) @@ -145,8 +148,9 @@ assert isinstance(lltype.typeOf(arg), lltype.Ptr) llgcref = lltype.cast_opaque_ptr(llmemory.GCREF, arg) self.cpu.set_future_value_ref(i, llgcref) + loop._jitcelltoken = looptoken if run: - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) return loop def getint(self, index): @@ -167,10 +171,7 @@ gcref = self.cpu.get_latest_value_ref(index) return lltype.cast_opaque_ptr(T, gcref) - def attach_bridge(self, ops, loop, guard_op_index, looptoken=None, **kwds): - if looptoken is not None: - self.namespace = self.namespace.copy() - self.namespace['looptoken'] = looptoken + def attach_bridge(self, ops, loop, guard_op_index, **kwds): guard_op = loop.operations[guard_op_index] assert guard_op.is_guard() bridge = self.parse(ops, **kwds) @@ -178,20 +179,21 @@ [box.type for box in guard_op.getfailargs()]) faildescr = guard_op.getdescr() self.cpu.compile_bridge(faildescr, bridge.inputargs, bridge.operations, - loop.token) + loop._jitcelltoken) return bridge def run(self, loop): - return self.cpu.execute_token(loop.token) + return self.cpu.execute_token(loop._jitcelltoken) class TestRegallocSimple(BaseTestRegalloc): def test_simple_loop(self): ops = ''' [i0] + label(i0, descr=targettoken) i1 = int_add(i0, 1) i2 = int_lt(i1, 20) guard_true(i2) [i1] - jump(i1) + jump(i1, descr=targettoken) ''' self.interpret(ops, [0]) assert self.getint(0) == 20 @@ -199,27 +201,29 @@ def test_two_loops_and_a_bridge(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_add(i0, 1) i5 = int_lt(i4, 20) guard_true(i5) [i4, i1, i2, i3] - jump(i4, i1, i2, i3) + jump(i4, i1, i2, i3, descr=targettoken) ''' loop = self.interpret(ops, [0, 0, 0, 0]) ops2 = ''' [i5] + label(i5, descr=targettoken2) i1 = int_add(i5, 1) i3 = int_add(i1, 1) i4 = int_add(i3, 1) i2 = int_lt(i4, 30) guard_true(i2) [i4] - jump(i4) + jump(i4, descr=targettoken2) ''' loop2 = self.interpret(ops2, [0]) bridge_ops = ''' [i4] - jump(i4, i4, i4, i4, descr=looptoken) + jump(i4, i4, i4, i4, descr=targettoken) ''' - bridge = self.attach_bridge(bridge_ops, loop2, 4, looptoken=loop.token) + bridge = self.attach_bridge(bridge_ops, loop2, 5) self.cpu.set_future_value_int(0, 0) self.run(loop2) assert self.getint(0) == 31 @@ -230,10 +234,11 @@ def test_pointer_arg(self): ops = ''' [i0, p0] + label(i0, p0, descr=targettoken) i1 = int_add(i0, 1) i2 = int_lt(i1, 10) guard_true(i2) [p0] - jump(i1, p0) + jump(i1, p0, descr=targettoken) ''' S = lltype.GcStruct('S') ptr = lltype.malloc(S) @@ -311,10 +316,11 @@ def test_spill_for_constant(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_add(3, i1) i5 = int_lt(i4, 30) guard_true(i5) [i0, i4, i2, i3] - jump(1, i4, 3, 4) + jump(1, i4, 3, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1, 30, 3, 4] @@ -322,31 +328,34 @@ def test_spill_for_constant_lshift(self): ops = ''' [i0, i2, i1, i3] + label(i0, i2, i1, i3, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, 3, i5, 4) + jump(i4, 3, i5, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, i5, 3, 4) + jump(i4, i5, 3, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i3, i1, i2] + label(i0, i3, i1, i2, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, 4, i5, 3) + jump(i4, 4, i5, 3, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] @@ -354,11 +363,12 @@ def test_result_selected_reg_via_neg(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i6 = int_neg(i2) i7 = int_add(1, i1) i4 = int_lt(i7, 10) guard_true(i4) [i0, i6, i7] - jump(1, i7, i2, i6) + jump(1, i7, i2, i6, descr=targettoken) ''' self.interpret(ops, [0, 0, 3, 0]) assert self.getints(3) == [1, -3, 10] @@ -366,11 +376,12 @@ def test_compare_memory_result_survives(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_lt(i0, i1) i5 = int_add(i3, 1) i6 = int_lt(i5, 30) guard_true(i6) [i4] - jump(i0, i1, i4, i5) + jump(i0, i1, i4, i5, descr=targettoken) ''' self.interpret(ops, [0, 10, 0, 0]) assert self.getint(0) == 1 @@ -378,10 +389,11 @@ def test_jump_different_args(self): ops = ''' [i0, i15, i16, i18, i1, i2, i3] + label(i0, i15, i16, i18, i1, i2, i3, descr=targettoken) i4 = int_add(i3, 1) i5 = int_lt(i4, 20) guard_true(i5) [i2, i1] - jump(i0, i18, i15, i16, i2, i1, i4) + jump(i0, i18, i15, i16, i2, i1, i4, descr=targettoken) ''' self.interpret(ops, [0, 1, 2, 3]) @@ -438,6 +450,7 @@ class TestRegallocMoreRegisters(BaseTestRegalloc): cpu = BaseTestRegalloc.cpu + targettoken = TargetToken() S = lltype.GcStruct('S', ('field', lltype.Char)) fielddescr = cpu.fielddescrof(S, 'field') @@ -510,6 +523,7 @@ def test_division_optimized(self): ops = ''' [i7, i6] + label(i7, i6, descr=targettoken) i18 = int_floordiv(i7, i6) i19 = int_xor(i7, i6) i21 = int_lt(i19, 0) @@ -517,7 +531,7 @@ i23 = int_is_true(i22) i24 = int_eq(i6, 4) guard_false(i24) [i18] - jump(i18, i6) + jump(i18, i6, descr=targettoken) ''' self.interpret(ops, [10, 4]) assert self.getint(0) == 2 @@ -588,7 +602,8 @@ ''' loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9, 9]) assert self.getints(11) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] - assert loop.token._x86_param_depth == self.expected_param_depth(1) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(1) def test_two_calls(self): ops = ''' @@ -599,7 +614,8 @@ ''' loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9, 9]) assert self.getints(11) == [5*7, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] - assert loop.token._x86_param_depth == self.expected_param_depth(2) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(2) def test_call_many_arguments(self): # NB: The first and last arguments in the call are constants. This @@ -612,7 +628,8 @@ ''' loop = self.interpret(ops, [2, 3, 4, 5, 6, 7, 8, 9]) assert self.getint(0) == 55 - assert loop.token._x86_param_depth == self.expected_param_depth(10) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(10) def test_bridge_calls_1(self): ops = ''' diff --git a/pypy/jit/backend/x86/test/test_regalloc2.py b/pypy/jit/backend/x86/test/test_regalloc2.py --- a/pypy/jit/backend/x86/test/test_regalloc2.py +++ b/pypy/jit/backend/x86/test/test_regalloc2.py @@ -1,6 +1,6 @@ import py from pypy.jit.metainterp.history import ResOperation, BoxInt, ConstInt,\ - BoxPtr, ConstPtr, BasicFailDescr, LoopToken + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.x86.arch import WORD @@ -20,7 +20,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, 9) cpu.execute_token(looptoken) @@ -43,7 +43,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, -10) cpu.execute_token(looptoken) @@ -140,7 +140,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, -13) cpu.set_future_value_int(1, 10) @@ -255,7 +255,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, 17) cpu.set_future_value_int(1, -20) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -1,9 +1,10 @@ import py from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rstr, rclass from pypy.rpython.annlowlevel import llhelper -from pypy.jit.metainterp.history import ResOperation, JitCellToken +from pypy.jit.metainterp.history import ResOperation, TargetToken, JitCellToken from pypy.jit.metainterp.history import (BoxInt, BoxPtr, ConstInt, ConstFloat, - ConstPtr, Box, BoxFloat, BasicFailDescr) + ConstPtr, Box, BoxFloat, + BasicFailDescr) from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.x86.arch import WORD from pypy.jit.backend.x86.rx86 import fits_in_32bits From noreply at buildbot.pypy.org Wed Nov 9 14:48:34 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 14:48:34 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fix test_random. Message-ID: <20111109134834.F34318292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r49015:6803897157d0 Date: 2011-11-09 14:15 +0100 http://bitbucket.org/pypy/pypy/changeset/6803897157d0/ Log: Fix test_random. diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -3,8 +3,8 @@ from pypy.rlib.rarithmetic import intmask, LONG_BIT from pypy.rpython.lltypesystem import llmemory from pypy.jit.metainterp.history import BasicFailDescr, TreeLoop -from pypy.jit.metainterp.history import BoxInt, ConstInt, LoopToken -from pypy.jit.metainterp.history import BoxPtr, ConstPtr +from pypy.jit.metainterp.history import BoxInt, ConstInt, JitCellToken +from pypy.jit.metainterp.history import BoxPtr, ConstPtr, TargetToken from pypy.jit.metainterp.history import BoxFloat, ConstFloat, Const from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.executor import execute_nonspec @@ -179,7 +179,7 @@ #print >>s, ' operations[%d].suboperations = [' % i #print >>s, ' ResOperation(rop.FAIL, [%s], None)]' % ( # ', '.join([names[v] for v in op.args])) - print >>s, ' looptoken = LoopToken()' + print >>s, ' looptoken = JitCellToken()' print >>s, ' cpu.compile_loop(inputargs, operations, looptoken)' if hasattr(self.loop, 'inputargs'): for i, v in enumerate(self.loop.inputargs): @@ -536,13 +536,15 @@ loop = TreeLoop('test_random_function') loop.inputargs = startvars[:] loop.operations = [] - loop.token = LoopToken() - + loop._jitcelltoken = JitCellToken() + loop._targettoken = TargetToken() + loop.operations.append(ResOperation(rop.LABEL, loop.inputargs, None, + loop._targettoken)) builder = builder_factory(cpu, loop, startvars[:]) self.generate_ops(builder, r, loop, startvars) self.builder = builder self.loop = loop - cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + cpu.compile_loop(loop.inputargs, loop.operations, loop._jitcelltoken) def generate_ops(self, builder, r, loop, startvars): block_length = pytest.config.option.block_length @@ -615,7 +617,7 @@ cpu.set_future_value_float(i, box.value) else: raise NotImplementedError(box) - fail = cpu.execute_token(self.loop.token) + fail = cpu.execute_token(self.loop._jitcelltoken) assert fail is self.should_fail_by.getdescr() for i, v in enumerate(self.get_fail_args()): if isinstance(v, (BoxFloat, ConstFloat)): @@ -684,23 +686,25 @@ rl = RandomLoop(self.builder.cpu, self.builder.fork, r, args) self.cpu.compile_loop(rl.loop.inputargs, rl.loop.operations, - rl.loop.token) + rl.loop._jitcelltoken) # done self.should_fail_by = rl.should_fail_by self.expected = rl.expected assert len(rl.loop.inputargs) == len(args) # The new bridge's execution will end normally at its FINISH. # Just replace the FINISH with the JUMP to the new loop. - jump_op = ResOperation(rop.JUMP, subset, None, descr=rl.loop.token) + jump_op = ResOperation(rop.JUMP, subset, None, + descr=rl.loop._targettoken) subloop.operations[-1] = jump_op self.guard_op = rl.guard_op self.prebuilt_ptr_consts += rl.prebuilt_ptr_consts - self.loop.token.record_jump_to(rl.loop.token) + self.loop._jitcelltoken.record_jump_to(rl.loop._jitcelltoken) self.dont_generate_more = True if r.random() < .05: return False self.builder.cpu.compile_bridge(fail_descr, fail_args, - subloop.operations, self.loop.token) + subloop.operations, + self.loop._jitcelltoken) return True def check_random_function(cpu, BuilderClass, r, num=None, max=None): From noreply at buildbot.pypy.org Wed Nov 9 14:48:36 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 14:48:36 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: Fix. Message-ID: <20111109134836.2AD368292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jit-targets Changeset: r49016:132fd58cb353 Date: 2011-11-09 14:48 +0100 http://bitbucket.org/pypy/pypy/changeset/132fd58cb353/ Log: Fix. diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1396,10 +1396,23 @@ inputargs = op.getarglist() floatlocs = [None] * len(inputargs) nonfloatlocs = [None] * len(inputargs) + # + # we need to make sure that the tmpreg and xmmtmp are free + tmpreg = X86RegisterManager.all_regs[0] + tmpvar = TempBox() + self.rm.force_allocate_reg(tmpvar, selected_reg=tmpreg) + self.rm.possibly_free_var(tmpvar) + # + xmmtmp = X86XMMRegisterManager.all_regs[0] + tmpvar = TempBox() + self.xrm.force_allocate_reg(tmpvar, selected_reg=xmmtmp) + self.xrm.possibly_free_var(tmpvar) + # for i in range(len(inputargs)): arg = inputargs[i] assert not isinstance(arg, Const) loc = self.loc(arg) + assert not (loc is tmpreg or loc is xmmtmp) if arg.type == FLOAT: floatlocs[i] = loc else: From noreply at buildbot.pypy.org Wed Nov 9 15:19:03 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 15:19:03 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: rename space.truncatedint into truncatedint_w, and move the corresponding test to test_objspace Message-ID: <20111109141903.C79EA8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49017:e72220b3ba49 Date: 2011-11-09 14:05 +0100 http://bitbucket.org/pypy/pypy/changeset/e72220b3ba49/ Log: rename space.truncatedint into truncatedint_w, and move the corresponding test to test_objspace diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1287,7 +1287,7 @@ self.wrap("expected a 32-bit integer")) return value - def truncatedint(self, w_obj): + def truncatedint_w(self, w_obj): # Like space.gateway_int_w(), but return the integer truncated # instead of raising OverflowError. For obscure cases only. try: diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -142,7 +142,7 @@ def visit_c_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) - def visit_truncatedint(self, el, app_sig): + def visit_truncatedint_w(self, el, app_sig): self.checked_space_method(el, app_sig) def visit__Wrappable(self, el, app_sig): @@ -262,8 +262,8 @@ def visit_c_nonnegint(self, typ): self.run_args.append("space.c_nonnegint_w(%s)" % (self.scopenext(),)) - def visit_truncatedint(self, typ): - self.run_args.append("space.truncatedint(%s)" % (self.scopenext(),)) + def visit_truncatedint_w(self, typ): + self.run_args.append("space.truncatedint_w(%s)" % (self.scopenext(),)) def _make_unwrap_activation_class(self, unwrap_spec, cache={}): try: @@ -395,8 +395,8 @@ def visit_c_nonnegint(self, typ): self.unwrap.append("space.c_nonnegint_w(%s)" % (self.nextarg(),)) - def visit_truncatedint(self, typ): - self.unwrap.append("space.truncatedint(%s)" % (self.nextarg(),)) + def visit_truncatedint_w(self, typ): + self.unwrap.append("space.truncatedint_w(%s)" % (self.nextarg(),)) def make_fastfunc(unwrap_spec, func): unwrap_info = UnwrapSpec_FastFunc_Unwrap() diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -213,6 +213,14 @@ w_obj = space.wrap(-12) space.raises_w(space.w_ValueError, space.r_ulonglong_w, w_obj) + def test_truncatedint_w(self): + space = self.space + assert space.truncatedint_w(space.wrap(42)) == 42 + assert space.truncatedint_w(space.wrap(sys.maxint)) == sys.maxint + assert space.truncatedint_w(space.wrap(sys.maxint+1)) == -sys.maxint-1 + assert space.truncatedint_w(space.wrap(-1)) == -1 + assert space.truncatedint_w(space.wrap(-sys.maxint-2)) == sys.maxint + def test_truncatedlonglong_w(self): space = self.space w_value = space.wrap(12) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -154,7 +154,7 @@ return # if w_ffitype.is_signed() or w_ffitype.is_unsigned(): - value = space.truncatedint(w_value) + value = space.truncatedint_w(w_value) libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) return # diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -38,16 +38,6 @@ assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align - def test_truncatedint(self): - space = gettestobjspace() - assert space.truncatedint(space.wrap(42)) == 42 - assert space.truncatedint(space.wrap(sys.maxint)) == sys.maxint - assert space.truncatedint(space.wrap(sys.maxint+1)) == -sys.maxint-1 - assert space.truncatedint(space.wrap(-1)) == -1 - assert space.truncatedint(space.wrap(-sys.maxint-2)) == sys.maxint - - - class AppTestStruct(BaseAppTestFFI): def setup_class(cls): diff --git a/pypy/module/binascii/interp_crc32.py b/pypy/module/binascii/interp_crc32.py --- a/pypy/module/binascii/interp_crc32.py +++ b/pypy/module/binascii/interp_crc32.py @@ -61,7 +61,7 @@ crc_32_tab = map(r_uint, crc_32_tab) - at unwrap_spec(data='bufferstr', oldcrc='truncatedint') + at unwrap_spec(data='bufferstr', oldcrc='truncatedint_w') def crc32(space, data, oldcrc=0): "Compute the CRC-32 incrementally." diff --git a/pypy/module/zlib/interp_zlib.py b/pypy/module/zlib/interp_zlib.py --- a/pypy/module/zlib/interp_zlib.py +++ b/pypy/module/zlib/interp_zlib.py @@ -20,7 +20,7 @@ return intmask((x ^ SIGN_EXTEND2) - SIGN_EXTEND2) - at unwrap_spec(string='bufferstr', start='truncatedint') + at unwrap_spec(string='bufferstr', start='truncatedint_w') def crc32(space, string, start = rzlib.CRC32_DEFAULT_START): """ crc32(string[, start]) -- Compute a CRC-32 checksum of string. @@ -41,7 +41,7 @@ return space.wrap(checksum) - at unwrap_spec(string='bufferstr', start='truncatedint') + at unwrap_spec(string='bufferstr', start='truncatedint_w') def adler32(space, string, start=rzlib.ADLER32_DEFAULT_START): """ adler32(string[, start]) -- Compute an Adler-32 checksum of string. From noreply at buildbot.pypy.org Wed Nov 9 15:19:12 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 15:19:12 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: hg merge default Message-ID: <20111109141912.4CD658292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49018:b2acb344eb30 Date: 2011-11-09 15:18 +0100 http://bitbucket.org/pypy/pypy/changeset/b2acb344eb30/ Log: hg merge default diff too long, truncating to 10000 out of 42626 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,2 +1,3 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), @@ -317,7 +317,7 @@ RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), @@ -359,7 +359,7 @@ RegrTest('test_property.py', core=True), RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), - RegrTest('test_pwd.py', skip=skip_win32), + RegrTest('test_pwd.py', usemodules="pwd", skip=skip_win32), RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/gzip.py b/lib-python/modified-2.7/gzip.py deleted file mode 100644 --- a/lib-python/modified-2.7/gzip.py +++ /dev/null @@ -1,514 +0,0 @@ -"""Functions that read and write gzipped files. - -The user of the file doesn't have to worry about the compression, -but random access is not allowed.""" - -# based on Andrew Kuchling's minigzip.py distributed with the zlib module - -import struct, sys, time, os -import zlib -import io -import __builtin__ - -__all__ = ["GzipFile","open"] - -FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16 - -READ, WRITE = 1, 2 - -def write32u(output, value): - # The L format writes the bit pattern correctly whether signed - # or unsigned. - output.write(struct.pack("' - - def _check_closed(self): - """Raises a ValueError if the underlying file object has been closed. - - """ - if self.closed: - raise ValueError('I/O operation on closed file.') - - def _init_write(self, filename): - self.name = filename - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - self.writebuf = [] - self.bufsize = 0 - - def _write_gzip_header(self): - self.fileobj.write('\037\213') # magic header - self.fileobj.write('\010') # compression method - fname = os.path.basename(self.name) - if fname.endswith(".gz"): - fname = fname[:-3] - flags = 0 - if fname: - flags = FNAME - self.fileobj.write(chr(flags)) - mtime = self.mtime - if mtime is None: - mtime = time.time() - write32u(self.fileobj, long(mtime)) - self.fileobj.write('\002') - self.fileobj.write('\377') - if fname: - self.fileobj.write(fname + '\000') - - def _init_read(self): - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - - def _read_gzip_header(self): - magic = self.fileobj.read(2) - if magic != '\037\213': - raise IOError, 'Not a gzipped file' - method = ord( self.fileobj.read(1) ) - if method != 8: - raise IOError, 'Unknown compression method' - flag = ord( self.fileobj.read(1) ) - self.mtime = read32(self.fileobj) - # extraflag = self.fileobj.read(1) - # os = self.fileobj.read(1) - self.fileobj.read(2) - - if flag & FEXTRA: - # Read & discard the extra field, if present - xlen = ord(self.fileobj.read(1)) - xlen = xlen + 256*ord(self.fileobj.read(1)) - self.fileobj.read(xlen) - if flag & FNAME: - # Read and discard a null-terminated string containing the filename - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FCOMMENT: - # Read and discard a null-terminated string containing a comment - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FHCRC: - self.fileobj.read(2) # Read & discard the 16-bit header CRC - - def write(self,data): - self._check_closed() - if self.mode != WRITE: - import errno - raise IOError(errno.EBADF, "write() on read-only GzipFile object") - - if self.fileobj is None: - raise ValueError, "write() on closed GzipFile object" - - # Convert data type if called by io.BufferedWriter. - if isinstance(data, memoryview): - data = data.tobytes() - - if len(data) > 0: - self.size = self.size + len(data) - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - self.fileobj.write( self.compress.compress(data) ) - self.offset += len(data) - - return len(data) - - def read(self, size=-1): - self._check_closed() - if self.mode != READ: - import errno - raise IOError(errno.EBADF, "read() on write-only GzipFile object") - - if self.extrasize <= 0 and self.fileobj is None: - return '' - - readsize = 1024 - if size < 0: # get the whole thing - try: - while True: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - size = self.extrasize - elif size == 0: - return "" - else: # just get some more of it - try: - while size > self.extrasize: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - if size > self.extrasize: - size = self.extrasize - - offset = self.offset - self.extrastart - chunk = self.extrabuf[offset: offset + size] - self.extrasize = self.extrasize - size - - self.offset += size - return chunk - - def _unread(self, buf): - self.extrasize = len(buf) + self.extrasize - self.offset -= len(buf) - - def _read(self, size=1024): - if self.fileobj is None: - raise EOFError, "Reached EOF" - - if self._new_member: - # If the _new_member flag is set, we have to - # jump to the next member, if there is one. - # - # First, check if we're at the end of the file; - # if so, it's time to stop; no more members to read. - pos = self.fileobj.tell() # Save current position - self.fileobj.seek(0, 2) # Seek to end of file - if pos == self.fileobj.tell(): - raise EOFError, "Reached EOF" - else: - self.fileobj.seek( pos ) # Return to original position - - self._init_read() - self._read_gzip_header() - self.decompress = zlib.decompressobj(-zlib.MAX_WBITS) - self._new_member = False - - # Read a chunk of data from the file - buf = self.fileobj.read(size) - - # If the EOF has been reached, flush the decompression object - # and mark this object as finished. - - if buf == "": - uncompress = self.decompress.flush() - self._read_eof() - self._add_read_data( uncompress ) - raise EOFError, 'Reached EOF' - - uncompress = self.decompress.decompress(buf) - self._add_read_data( uncompress ) - - if self.decompress.unused_data != "": - # Ending case: we've come to the end of a member in the file, - # so seek back to the start of the unused data, finish up - # this member, and read a new gzip header. - # (The number of bytes to seek back is the length of the unused - # data, minus 8 because _read_eof() will rewind a further 8 bytes) - self.fileobj.seek( -len(self.decompress.unused_data)+8, 1) - - # Check the CRC and file size, and set the flag so we read - # a new member on the next call - self._read_eof() - self._new_member = True - - def _add_read_data(self, data): - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - offset = self.offset - self.extrastart - self.extrabuf = self.extrabuf[offset:] + data - self.extrasize = self.extrasize + len(data) - self.extrastart = self.offset - self.size = self.size + len(data) - - def _read_eof(self): - # We've read to the end of the file, so we have to rewind in order - # to reread the 8 bytes containing the CRC and the file size. - # We check the that the computed CRC and size of the - # uncompressed data matches the stored values. Note that the size - # stored is the true file size mod 2**32. - self.fileobj.seek(-8, 1) - crc32 = read32(self.fileobj) - isize = read32(self.fileobj) # may exceed 2GB - if crc32 != self.crc: - raise IOError("CRC check failed %s != %s" % (hex(crc32), - hex(self.crc))) - elif isize != (self.size & 0xffffffffL): - raise IOError, "Incorrect length of data produced" - - # Gzip files can be padded with zeroes and still have archives. - # Consume all zero bytes and set the file position to the first - # non-zero byte. See http://www.gzip.org/#faq8 - c = "\x00" - while c == "\x00": - c = self.fileobj.read(1) - if c: - self.fileobj.seek(-1, 1) - - @property - def closed(self): - return self.fileobj is None - - def close(self): - if self.fileobj is None: - return - if self.mode == WRITE: - self.fileobj.write(self.compress.flush()) - write32u(self.fileobj, self.crc) - # self.size may exceed 2GB, or even 4GB - write32u(self.fileobj, self.size & 0xffffffffL) - self.fileobj = None - elif self.mode == READ: - self.fileobj = None - if self.myfileobj: - self.myfileobj.close() - self.myfileobj = None - - def flush(self,zlib_mode=zlib.Z_SYNC_FLUSH): - self._check_closed() - if self.mode == WRITE: - # Ensure the compressor's buffer is flushed - self.fileobj.write(self.compress.flush(zlib_mode)) - self.fileobj.flush() - - def fileno(self): - """Invoke the underlying file object's fileno() method. - - This will raise AttributeError if the underlying file object - doesn't support fileno(). - """ - return self.fileobj.fileno() - - def rewind(self): - '''Return the uncompressed stream file position indicator to the - beginning of the file''' - if self.mode != READ: - raise IOError("Can't rewind in write mode") - self.fileobj.seek(0) - self._new_member = True - self.extrabuf = "" - self.extrasize = 0 - self.extrastart = 0 - self.offset = 0 - - def readable(self): - return self.mode == READ - - def writable(self): - return self.mode == WRITE - - def seekable(self): - return True - - def seek(self, offset, whence=0): - if whence: - if whence == 1: - offset = self.offset + offset - else: - raise ValueError('Seek from end not supported') - if self.mode == WRITE: - if offset < self.offset: - raise IOError('Negative seek in write mode') - count = offset - self.offset - for i in range(count // 1024): - self.write(1024 * '\0') - self.write((count % 1024) * '\0') - elif self.mode == READ: - if offset == self.offset: - self.read(0) # to make sure that this file is open - return self.offset - if offset < self.offset: - # for negative seek, rewind and do positive seek - self.rewind() - count = offset - self.offset - for i in range(count // 1024): - self.read(1024) - self.read(count % 1024) - - return self.offset - - def readline(self, size=-1): - if size < 0: - # Shortcut common case - newline found in buffer. - offset = self.offset - self.extrastart - i = self.extrabuf.find('\n', offset) + 1 - if i > 0: - self.extrasize -= i - offset - self.offset += i - offset - return self.extrabuf[offset: i] - - size = sys.maxint - readsize = self.min_readsize - else: - readsize = size - bufs = [] - while size != 0: - c = self.read(readsize) - i = c.find('\n') - - # We set i=size to break out of the loop under two - # conditions: 1) there's no newline, and the chunk is - # larger than size, or 2) there is a newline, but the - # resulting line would be longer than 'size'. - if (size <= i) or (i == -1 and len(c) > size): - i = size - 1 - - if i >= 0 or c == '': - bufs.append(c[:i + 1]) # Add portion of last chunk - self._unread(c[i + 1:]) # Push back rest of chunk - break - - # Append chunk to list, decrease 'size', - bufs.append(c) - size = size - len(c) - readsize = min(size, readsize * 2) - if readsize > self.min_readsize: - self.min_readsize = min(readsize, self.min_readsize * 2, 512) - return ''.join(bufs) # Return resulting line - - -def _test(): - # Act like gzip; with -d, act like gunzip. - # The input file is not deleted, however, nor are any other gzip - # options or features supported. - args = sys.argv[1:] - decompress = args and args[0] == "-d" - if decompress: - args = args[1:] - if not args: - args = ["-"] - for arg in args: - if decompress: - if arg == "-": - f = GzipFile(filename="", mode="rb", fileobj=sys.stdin) - g = sys.stdout - else: - if arg[-3:] != ".gz": - print "filename doesn't end in .gz:", repr(arg) - continue - f = open(arg, "rb") - g = __builtin__.open(arg[:-3], "wb") - else: - if arg == "-": - f = sys.stdin - g = GzipFile(filename="", mode="wb", fileobj=sys.stdout) - else: - f = __builtin__.open(arg, "rb") - g = open(arg + ".gz", "wb") - while True: - chunk = f.read(1024) - if not chunk: - break - g.write(chunk) - if g is not sys.stdout: - g.close() - if f is not sys.stdin: - f.close() - -if __name__ == '__main__': - _test() diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. + + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. + version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. + pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. + """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/tarfile.py b/lib-python/modified-2.7/tarfile.py --- a/lib-python/modified-2.7/tarfile.py +++ b/lib-python/modified-2.7/tarfile.py @@ -252,8 +252,8 @@ the high bit set. So we calculate two checksums, unsigned and signed. """ - unsigned_chksum = 256 + sum(struct.unpack("148B8x356B", buf[:512])) - signed_chksum = 256 + sum(struct.unpack("148b8x356b", buf[:512])) + unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) + signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) return unsigned_chksum, signed_chksum def copyfileobj(src, dst, length=None): @@ -265,6 +265,7 @@ if length is None: shutil.copyfileobj(src, dst) return + BUFSIZE = 16 * 1024 blocks, remainder = divmod(length, BUFSIZE) for b in xrange(blocks): @@ -801,19 +802,19 @@ if self.closed: raise ValueError("I/O operation on closed file") + buf = "" if self.buffer: if size is None: - buf = self.buffer + self.fileobj.read() + buf = self.buffer self.buffer = "" else: buf = self.buffer[:size] self.buffer = self.buffer[size:] - buf += self.fileobj.read(size - len(buf)) + + if size is None: + buf += self.fileobj.read() else: - if size is None: - buf = self.fileobj.read() - else: - buf = self.fileobj.read(size) + buf += self.fileobj.read(size - len(buf)) self.position += len(buf) return buf diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -966,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -978,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! + meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. + + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. + raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? + + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_functools.py b/lib_pypy/_functools.py --- a/lib_pypy/_functools.py +++ b/lib_pypy/_functools.py @@ -14,10 +14,9 @@ raise TypeError("the first argument must be callable") self.func = func self.args = args - self.keywords = keywords + self.keywords = keywords or None def __call__(self, *fargs, **fkeywords): - newkeywords = self.keywords.copy() - newkeywords.update(fkeywords) - return self.func(*(self.args + fargs), **newkeywords) - + if self.keywords is not None: + fkeywords = dict(self.keywords, **fkeywords) + return self.func(*(self.args + fargs), **fkeywords) diff --git a/lib_pypy/_pypy_interact.py b/lib_pypy/_pypy_interact.py --- a/lib_pypy/_pypy_interact.py +++ b/lib_pypy/_pypy_interact.py @@ -56,6 +56,10 @@ prompt = getattr(sys, 'ps1', '>>> ') try: line = raw_input(prompt) + # Can be None if sys.stdin was redefined + encoding = getattr(sys.stdin, 'encoding', None) + if encoding and not isinstance(line, unicode): + line = line.decode(encoding) except EOFError: console.write("\n") break diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -48,23 +48,23 @@ def switch(self, *args): "Switch execution to this greenlet, optionally passing the values " "given as argument(s). Returns the value passed when switching back." - return self.__switch(_continulet.switch, args) + return self.__switch('switch', args) def throw(self, typ=GreenletExit, val=None, tb=None): "raise exception in greenlet, return value passed when switching back" - return self.__switch(_continulet.throw, typ, val, tb) + return self.__switch('throw', typ, val, tb) - def __switch(target, unbound_method, *args): + def __switch(target, methodname, *args): current = getcurrent() # while not target: if not target.__started: - if unbound_method != _continulet.throw: + if methodname == 'switch': greenlet_func = _greenlet_start else: greenlet_func = _greenlet_throw _continulet.__init__(target, greenlet_func, *args) - unbound_method = _continulet.switch + methodname = 'switch' args = () target.__started = True break @@ -75,22 +75,8 @@ target = target.parent # try: - if current.__main: - if target.__main: - # switch from main to main - if unbound_method == _continulet.throw: - raise args[0], args[1], args[2] - (args,) = args - else: - # enter from main to target - args = unbound_method(target, *args) - else: - if target.__main: - # leave to go to target=main - args = unbound_method(current, *args) - else: - # switch from non-main to non-main - args = unbound_method(current, *args, to=target) + unbound_method = getattr(_continulet, methodname) + args = unbound_method(current, *args, to=target) except GreenletExit, e: args = (e,) finally: @@ -110,7 +96,16 @@ @property def gr_frame(self): - raise NotImplementedError("attribute 'gr_frame' of greenlet objects") + # xxx this doesn't work when called on either the current or + # the main greenlet of another thread + if self is getcurrent(): + return None + if self.__main: + self = getcurrent() + f = _continulet.__reduce__(self)[2][0] + if not f: + return None + return f.f_back.f_back.f_back # go past start(), __switch(), switch() # ____________________________________________________________ # Internal stuff @@ -138,8 +133,7 @@ try: res = greenlet.run(*args) finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) return (res,) def _greenlet_throw(greenlet, exc, value, tb): @@ -147,5 +141,4 @@ try: raise exc, value, tb finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) diff --git a/lib_pypy/pypy_test/test_stackless_pickling.py b/lib_pypy/pypy_test/test_stackless_pickling.py --- a/lib_pypy/pypy_test/test_stackless_pickling.py +++ b/lib_pypy/pypy_test/test_stackless_pickling.py @@ -1,7 +1,3 @@ -""" -this test should probably not run from CPython or py.py. -I'm not entirely sure, how to do that. -""" from __future__ import absolute_import from py.test import skip try: @@ -16,11 +12,15 @@ class Test_StacklessPickling: + def test_pickle_main_coroutine(self): + import stackless, pickle + s = pickle.dumps(stackless.coroutine.getcurrent()) + print s + c = pickle.loads(s) + assert c is stackless.coroutine.getcurrent() + def test_basic_tasklet_pickling(self): - try: - import stackless - except ImportError: - skip("can't load stackless and don't know why!!!") + import stackless from stackless import run, schedule, tasklet import pickle diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/completing_reader.py b/lib_pypy/pyrepl/completing_reader.py --- a/lib_pypy/pyrepl/completing_reader.py +++ b/lib_pypy/pyrepl/completing_reader.py @@ -229,7 +229,8 @@ def after_command(self, cmd): super(CompletingReader, self).after_command(cmd) - if not isinstance(cmd, complete) and not isinstance(cmd, self_insert): + if not isinstance(cmd, self.commands['complete']) \ + and not isinstance(cmd, self.commands['self_insert']): self.cmpltn_reset() def calc_screen(self): diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -576,7 +576,7 @@ self.console.push_char(char) self.handle1(0) - def readline(self): + def readline(self, returns_unicode=False): """Read a line. The implementation of this method also shows how to drive Reader if you want more control over the event loop.""" @@ -585,6 +585,8 @@ self.refresh() while not self.finished: self.handle1() + if returns_unicode: + return self.get_unicode() return self.get_buffer() finally: self.restore() diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -198,7 +198,7 @@ reader.ps1 = prompt return reader.readline() - def multiline_input(self, more_lines, ps1, ps2): + def multiline_input(self, more_lines, ps1, ps2, returns_unicode=False): """Read an input on possibly multiple lines, asking for more lines as long as 'more_lines(unicodetext)' returns an object whose boolean value is true. @@ -209,7 +209,7 @@ reader.more_lines = more_lines reader.ps1 = reader.ps2 = ps1 reader.ps3 = reader.ps4 = ps2 - return reader.readline() + return reader.readline(returns_unicode=returns_unicode) finally: reader.more_lines = saved @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/pyrepl/simple_interact.py b/lib_pypy/pyrepl/simple_interact.py --- a/lib_pypy/pyrepl/simple_interact.py +++ b/lib_pypy/pyrepl/simple_interact.py @@ -54,7 +54,8 @@ ps1 = getattr(sys, 'ps1', '>>> ') ps2 = getattr(sys, 'ps2', '... ') try: - statement = multiline_input(more_lines, ps1, ps2) + statement = multiline_input(more_lines, ps1, ps2, + returns_unicode=True) except EOFError: break more = console.push(statement) diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/lib_pypy/stackless.py b/lib_pypy/stackless.py --- a/lib_pypy/stackless.py +++ b/lib_pypy/stackless.py @@ -5,51 +5,54 @@ """ -import traceback import _continuation -from functools import partial class TaskletExit(Exception): pass CoroutineExit = TaskletExit -class GWrap(_continuation.continulet): - """This is just a wrapper around continulet to allow - to stick additional attributes to a continulet. - To be more concrete, we need a backreference to - the coroutine object""" + +def _coroutine_getcurrent(): + "Returns the current coroutine (i.e. the one which called this function)." + try: + return _tls.current_coroutine + except AttributeError: + # first call in this thread: current == main + return _coroutine_getmain() + +def _coroutine_getmain(): + try: + return _tls.main_coroutine + except AttributeError: + # create the main coroutine for this thread + continulet = _continuation.continulet + main = coroutine() + main._frame = continulet.__new__(continulet) + main._is_started = -1 + _tls.current_coroutine = _tls.main_coroutine = main + return _tls.main_coroutine class coroutine(object): - "we can't have continulet as a base, because continulets can't be rebound" + _is_started = 0 # 0=no, 1=yes, -1=main def __init__(self): self._frame = None - self.is_zombie = False - - def __getattr__(self, attr): - return getattr(self._frame, attr) - - def __del__(self): - self.is_zombie = True - del self._frame - self._frame = None def bind(self, func, *argl, **argd): """coro.bind(f, *argl, **argd) -> None. binds function f to coro. f will be called with arguments *argl, **argd """ - if self._frame is None or not self._frame.is_pending(): - - def _func(c, *args, **kwargs): - return func(*args, **kwargs) - - run = partial(_func, *argl, **argd) - self._frame = frame = GWrap(run) - else: + if self.is_alive: raise ValueError("cannot bind a bound coroutine") + def run(c): + _tls.current_coroutine = self + self._is_started = 1 + return func(*argl, **argd) + self._is_started = 0 + self._frame = _continuation.continulet(run) def switch(self): """coro.switch() -> returnvalue @@ -57,46 +60,38 @@ f finishes, the returnvalue is that of f, otherwise None is returned """ - current = _getcurrent() - current._jump_to(self) - - def _jump_to(self, coroutine): - _tls.current_coroutine = coroutine - self._frame.switch(to=coroutine._frame) + current = _coroutine_getcurrent() + try: + current._frame.switch(to=self._frame) + finally: + _tls.current_coroutine = current def kill(self): """coro.kill() : kill coroutine coro""" - _tls.current_coroutine = self - self._frame.throw(CoroutineExit) + current = _coroutine_getcurrent() + try: + current._frame.throw(CoroutineExit, to=self._frame) + finally: + _tls.current_coroutine = current - def _is_alive(self): - if self._frame is None: - return False - return not self._frame.is_pending() - is_alive = property(_is_alive) - del _is_alive + @property + def is_alive(self): + return self._is_started < 0 or ( + self._frame is not None and self._frame.is_pending()) - def getcurrent(): - """coroutine.getcurrent() -> the currently running coroutine""" - try: - return _getcurrent() - except AttributeError: - return _maincoro - getcurrent = staticmethod(getcurrent) + @property + def is_zombie(self): + return self._is_started > 0 and not self._frame.is_pending() + + getcurrent = staticmethod(_coroutine_getcurrent) def __reduce__(self): - raise TypeError, 'pickling is not possible based upon continulets' + if self._is_started < 0: + return _coroutine_getmain, () + else: + return type(self), (), self.__dict__ -def _getcurrent(): - "Returns the current coroutine (i.e. the one which called this function)." - try: - return _tls.current_coroutine - except AttributeError: - # first call in this thread: current == main - _coroutine_create_main() - return _tls.current_coroutine - try: from thread import _local except ImportError: @@ -105,17 +100,8 @@ _tls = _local() -def _coroutine_create_main(): - # create the main coroutine for this thread - _tls.current_coroutine = None - main_coroutine = coroutine() - main_coroutine.bind(lambda x:x) - _tls.main_coroutine = main_coroutine - _tls.current_coroutine = main_coroutine - return main_coroutine - -_maincoro = _coroutine_create_main() +# ____________________________________________________________ from collections import deque @@ -161,10 +147,7 @@ _last_task = next assert not next.blocked if next is not current: - try: - next.switch() - except CoroutineExit: - raise TaskletExit + next.switch() return current def set_schedule_callback(callback): @@ -188,34 +171,6 @@ raise self.type, self.value, self.traceback # -# helpers for pickling -# - -_stackless_primitive_registry = {} - -def register_stackless_primitive(thang, retval_expr='None'): - import types - func = thang - if isinstance(thang, types.MethodType): - func = thang.im_func - code = func.func_code - _stackless_primitive_registry[code] = retval_expr - # It is not too nice to attach info via the code object, but - # I can't think of a better solution without a real transform. - -def rewrite_stackless_primitive(coro_state, alive, tempval): - flags, frame, thunk, parent = coro_state - while frame is not None: - retval_expr = _stackless_primitive_registry.get(frame.f_code) - if retval_expr: - # this tasklet needs to stop pickling here and return its value. - tempval = eval(retval_expr, globals(), frame.f_locals) - coro_state = flags, frame, thunk, parent - break - frame = frame.f_back - return coro_state, alive, tempval - -# # class channel(object): @@ -367,8 +322,6 @@ """ return self._channel_action(None, -1) - register_stackless_primitive(receive, retval_expr='receiver.tempval') - def send_exception(self, exp_type, msg): self.send(bomb(exp_type, exp_type(msg))) @@ -385,9 +338,8 @@ the runnables list. """ return self._channel_action(msg, 1) - - register_stackless_primitive(send) - + + class tasklet(coroutine): """ A tasklet object represents a tiny task in a Python thread. @@ -459,6 +411,7 @@ def _func(): try: try: + coroutine.switch(back) func(*argl, **argd) except TaskletExit: pass @@ -468,6 +421,8 @@ self.func = None coroutine.bind(self, _func) + back = _coroutine_getcurrent() + coroutine.switch(self) self.alive = True _scheduler_append(self) return self @@ -490,39 +445,6 @@ raise RuntimeError, "The current tasklet cannot be removed." # not sure if I will revive this " Use t=tasklet().capture()" _scheduler_remove(self) - - def __reduce__(self): - one, two, coro_state = coroutine.__reduce__(self) - assert one is coroutine - assert two == () - # we want to get rid of the parent thing. - # for now, we just drop it - a, frame, c, d = coro_state - - # Removing all frames related to stackless.py. - # They point to stuff we don't want to be pickled. - - pickleframe = frame - while frame is not None: - if frame.f_code == schedule.func_code: - # Removing everything including and after the - # call to stackless.schedule() - pickleframe = frame.f_back - break - frame = frame.f_back - if d: - assert isinstance(d, coroutine) - coro_state = a, pickleframe, c, None - coro_state, alive, tempval = rewrite_stackless_primitive(coro_state, self.alive, self.tempval) - inst_dict = self.__dict__.copy() - inst_dict.pop('tempval', None) - return self.__class__, (), (coro_state, alive, tempval, inst_dict) - - def __setstate__(self, (coro_state, alive, tempval, inst_dict)): - coroutine.__setstate__(self, coro_state) - self.__dict__.update(inst_dict) - self.alive = alive - self.tempval = tempval def getmain(): """ @@ -611,30 +533,7 @@ global _last_task _global_task_id = 0 _main_tasklet = coroutine.getcurrent() - try: - _main_tasklet.__class__ = tasklet - except TypeError: # we are running pypy-c - class TaskletProxy(object): - """TaskletProxy is needed to give the _main_coroutine tasklet behaviour""" - def __init__(self, coro): - self._coro = coro - - def __getattr__(self,attr): - return getattr(self._coro,attr) - - def __str__(self): - return '' % (self._task_id, self.is_alive) - - def __reduce__(self): - return getmain, () - - __repr__ = __str__ - - - global _main_coroutine - _main_coroutine = _main_tasklet - _main_tasklet = TaskletProxy(_main_tasklet) - assert _main_tasklet.is_alive and not _main_tasklet.is_zombie + _main_tasklet.__class__ = tasklet # XXX HAAAAAAAAAAAAAAAAAAAAACK _last_task = _main_tasklet tasklet._init.im_func(_main_tasklet, label='main') _squeue = deque() diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -139,7 +139,7 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, end + return start, len(self) def getblockend(self, lineno): # XXX diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -149,7 +149,7 @@ desc = olddesc.bind_self(classdef) args = self.bookkeeper.build_args("simple_call", args_s[:]) desc.consider_call_site(self.bookkeeper, desc.getcallfamily(), [desc], - args, annmodel.s_ImpossibleValue) + args, annmodel.s_ImpossibleValue, None) result = [] def schedule(graph, inputcells): result.append((graph, inputcells)) diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -209,8 +209,8 @@ self.consider_call_site(call_op) for pbc, args_s in self.emulated_pbc_calls.itervalues(): - self.consider_call_site_for_pbc(pbc, 'simple_call', - args_s, s_ImpossibleValue) + self.consider_call_site_for_pbc(pbc, 'simple_call', + args_s, s_ImpossibleValue, None) self.emulated_pbc_calls = {} finally: self.leave() @@ -257,18 +257,18 @@ args_s = [lltype_to_annotation(adtmeth.ll_ptrtype)] + args_s if isinstance(s_callable, SomePBC): s_result = binding(call_op.result, s_ImpossibleValue) - self.consider_call_site_for_pbc(s_callable, - call_op.opname, - args_s, s_result) + self.consider_call_site_for_pbc(s_callable, call_op.opname, args_s, + s_result, call_op) - def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result): + def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result, + call_op): descs = list(s_callable.descriptions) if not descs: return family = descs[0].getcallfamily() args = self.build_args(opname, args_s) s_callable.getKind().consider_call_site(self, family, descs, args, - s_result) + s_result, call_op) def getuniqueclassdef(self, cls): """Get the ClassDef associated with the given user cls. @@ -656,6 +656,7 @@ whence = None else: whence = emulated # callback case + op = None s_previous_result = s_ImpossibleValue def schedule(graph, inputcells): @@ -663,7 +664,7 @@ results = [] for desc in descs: - results.append(desc.pycall(schedule, args, s_previous_result)) + results.append(desc.pycall(schedule, args, s_previous_result, op)) s_result = unionof(*results) return s_result diff --git a/pypy/annotation/classdef.py b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -255,7 +255,11 @@ raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) return inputcells - def specialize(self, inputcells): + def specialize(self, inputcells, op=None): + if (op is None and + getattr(self.bookkeeper, "position_key", None) is not None): + _, block, i = self.bookkeeper.position_key + op = block.operations[i] if self.specializer is None: # get the specializer based on the tag of the 'pyobj' # (if any), according to the current policy @@ -269,11 +273,14 @@ enforceargs = Sig(*enforceargs) self.pyobj._annenforceargs_ = enforceargs enforceargs(self, inputcells) # can modify inputcells in-place - return self.specializer(self, inputcells) + if getattr(self.pyobj, '_annspecialcase_', '').endswith("call_location"): + return self.specializer(self, inputcells, op) + else: + return self.specializer(self, inputcells) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): inputcells = self.parse_arguments(args) - result = self.specialize(inputcells) + result = self.specialize(inputcells, op) if isinstance(result, FunctionGraph): graph = result # common case # if that graph has a different signature, we need to re-parse @@ -296,17 +303,17 @@ None, # selfclassdef name) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args) - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) - def variant_for_call_site(bookkeeper, family, descs, args): + def variant_for_call_site(bookkeeper, family, descs, args, op): shape = rawshape(args) bookkeeper.enter(None) try: - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) finally: bookkeeper.leave() index = family.calltable_lookup_row(shape, row) @@ -316,7 +323,7 @@ def rowkey(self): return self - def row_to_consider(descs, args): + def row_to_consider(descs, args, op): # see comments in CallFamily from pypy.annotation.model import s_ImpossibleValue row = {} @@ -324,7 +331,7 @@ def enlist(graph, ignore): row[desc.rowkey()] = graph return s_ImpossibleValue # meaningless - desc.pycall(enlist, args, s_ImpossibleValue) + desc.pycall(enlist, args, s_ImpossibleValue, op) return row row_to_consider = staticmethod(row_to_consider) @@ -521,7 +528,7 @@ "specialization" % (self.name,)) return self.getclassdef(None) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance, SomeImpossibleValue if self.specialize: if self.specialize == 'specialize:ctr_location': @@ -664,7 +671,7 @@ cdesc = cdesc.basedesc return s_result # common case - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): from pypy.annotation.model import SomeInstance, SomePBC, s_None if len(descs) == 1: # call to a single class, look at the result annotation @@ -709,7 +716,7 @@ initdescs[0].mergecallfamilies(*initdescs[1:]) initfamily = initdescs[0].getcallfamily() MethodDesc.consider_call_site(bookkeeper, initfamily, initdescs, - args, s_None) + args, s_None, op) consider_call_site = staticmethod(consider_call_site) def getallbases(self): @@ -782,13 +789,13 @@ def getuniquegraph(self): return self.funcdesc.getuniquegraph() - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance if self.selfclassdef is None: raise Exception("calling %r" % (self,)) s_instance = SomeInstance(self.selfclassdef, flags = self.flags) args = args.prepend(s_instance) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) def bind_under(self, classdef, name): self.bookkeeper.warning("rebinding an already bound %r" % (self,)) @@ -801,10 +808,10 @@ self.name, flags) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [methoddesc.funcdesc for methoddesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) @@ -956,16 +963,16 @@ return '' % (self.funcdesc, self.frozendesc) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomePBC s_self = SomePBC([self.frozendesc]) args = args.prepend(s_self) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [mofdesc.funcdesc for mofdesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,7 +1,7 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype -from pypy.annotation.specialize import memo +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var +from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. from pypy.annotation import model as annmodel @@ -73,8 +73,10 @@ default_specialize = staticmethod(default) specialize__memo = staticmethod(memo) specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) + specialize__arg_or_var = staticmethod(specialize_arg_or_var) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) + specialize__call_location = staticmethod(specialize_call_location) def specialize__ll(pol, *args): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -353,6 +353,16 @@ key = tuple(key) return maybe_star_args(funcdesc, key, args_s) +def specialize_arg_or_var(funcdesc, args_s, *argindices): + for argno in argindices: + if not args_s[argno].is_constant(): + break + else: + # all constant + return specialize_argvalue(funcdesc, args_s, *argindices) + # some not constant + return maybe_star_args(funcdesc, None, args_s) + def specialize_argtype(funcdesc, args_s, *argindices): key = tuple([args_s[i].knowntype for i in argindices]) for cls in key: @@ -370,3 +380,7 @@ else: key = s.listdef.listitem.s_value.knowntype return maybe_star_args(funcdesc, key, args_s) + +def specialize_call_location(funcdesc, args_s, op): + assert op is not None + return maybe_star_args(funcdesc, op, args_s) diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -1099,8 +1099,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1]) - graph2 = allocdesc.specialize([s_C2]) + graph1 = allocdesc.specialize([s_C1], None) + graph2 = allocdesc.specialize([s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1135,8 +1135,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1, s_C2]) - graph2 = allocdesc.specialize([s_C2, s_C2]) + graph1 = allocdesc.specialize([s_C1, s_C2], None) + graph2 = allocdesc.specialize([s_C2, s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1194,6 +1194,33 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_arg_or_var(self): + def f(a): + return 1 + f._annspecialcase_ = 'specialize:arg_or_var(0)' + + def fn(a): + return f(3) + f(a) + + a = self.RPythonAnnotator() + a.build_types(fn, [int]) + executedesc = a.bookkeeper.getdesc(f) + assert sorted(executedesc._cache.keys()) == [None, (3,)] + # we got two different special + + def test_specialize_call_location(self): + def g(a): + return a + g._annspecialcase_ = "specialize:call_location" + def f(x): + return g(x) + f._annspecialcase_ = "specialize:argtype(0)" + def h(y): + w = f(y) + return int(f(str(y))) + w + a = self.RPythonAnnotator() + assert a.build_types(h, [int]) == annmodel.SomeInteger() + def test_assert_list_doesnt_lose_info(self): class T(object): pass @@ -3177,6 +3204,8 @@ s = a.build_types(f, []) assert isinstance(s, annmodel.SomeList) assert not s.listdef.listitem.resized + assert not s.listdef.listitem.immutable + assert s.listdef.listitem.mutated def test_delslice(self): def f(): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -352,6 +352,7 @@ check_negative_slice(s_start, s_stop) if not isinstance(s_iterable, SomeList): raise Exception("list[start:stop] = x: x must be a list") + lst.listdef.mutate() lst.listdef.agree(s_iterable.listdef) # note that setslice is not allowed to resize a list in RPython diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -27,7 +27,7 @@ # --allworkingmodules working_modules = default_modules.copy() working_modules.update(dict.fromkeys( - ["_socket", "unicodedata", "mmap", "fcntl", "_locale", + ["_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "rctime" , "select", "zipimport", "_lsprof", "crypt", "signal", "_rawffi", "termios", "zlib", "bz2", "struct", "_hashlib", "_md5", "_sha", "_minimal_curses", "cStringIO", @@ -58,6 +58,7 @@ # unix only modules del working_modules["crypt"] del working_modules["fcntl"] + del working_modules["pwd"] del working_modules["termios"] del working_modules["_minimal_curses"] @@ -71,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -90,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -111,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + @@ -126,7 +128,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. _taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/config/objspace.usemodules.pwd.txt b/pypy/doc/config/objspace.usemodules.pwd.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.usemodules.pwd.txt @@ -0,0 +1,2 @@ +Use the 'pwd' module. +This module is expected to be fully working. diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -21,8 +21,6 @@ * `Papers`_: Academic papers, talks, and related projects -* `Videos`_: Videos of PyPy talks and presentations - * `speed.pypy.org`_: Daily benchmarks of how fast PyPy is * `potential project ideas`_: In case you want to get your feet wet... @@ -311,7 +309,6 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. _`Differences between PyPy and CPython`: cpython_differences.html diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ @@ -53,6 +59,18 @@ this is an ideal task to get started, because it does not require any deep knowledge of the internals. +Optimized Unicode Representation +-------------------------------- + +CPython 3.3 will use an `optimized unicode representation`_ which switches between +different ways to represent a unicode string, depending on whether the string +fits into ASCII, has only two-byte characters or needs four-byte characters. + +The actual details would be rather differen in PyPy, but we would like to have +the same optimization implemented. + +.. _`optimized unicode representation`: http://www.python.org/dev/peps/pep-0393/ + Translation Toolchain --------------------- diff --git a/pypy/doc/stackless.rst b/pypy/doc/stackless.rst --- a/pypy/doc/stackless.rst +++ b/pypy/doc/stackless.rst @@ -66,7 +66,7 @@ In practice, in PyPy, you cannot change the ``f_back`` of an abitrary frame, but only of frames stored in ``continulets``. -Continulets are internally implemented using stacklets. Stacklets are a +Continulets are internally implemented using stacklets_. Stacklets are a bit more primitive (they are really one-shot continuations), but that idea only works in C, not in Python. The basic idea of continulets is to have at any point in time a complete valid stack; this is important @@ -215,11 +215,6 @@ * Support for other CPUs than x86 and x86-64 -* The app-level ``f_back`` field of frames crossing continulet boundaries - is None for now, unlike what I explain in the theoretical overview - above. It mostly means that in a ``pdb.set_trace()`` you cannot go - ``up`` past countinulet boundaries. This could be fixed. - .. __: `recursion depth limit`_ (*) Pickling, as well as changing threads, could be implemented by using @@ -285,6 +280,24 @@ to use other interfaces like genlets and greenlets.) +Stacklets ++++++++++ + +Continulets are internally implemented using stacklets, which is the +generic RPython-level building block for "one-shot continuations". For +more information about them please see the documentation in the C source +at `pypy/translator/c/src/stacklet/stacklet.h`_. + +The module ``pypy.rlib.rstacklet`` is a thin wrapper around the above +functions. The key point is that new() and switch() always return a +fresh stacklet handle (or an empty one), and switch() additionally +consumes one. It makes no sense to have code in which the returned +handle is ignored, or used more than once. Note that ``stacklet.c`` is +written assuming that the user knows that, and so no additional checking +occurs; this can easily lead to obscure crashes if you don't use a +wrapper like PyPy's '_continuation' module. + + Theory of composability +++++++++++++++++++++++ diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -125,6 +125,7 @@ ### Manipulation ### + @jit.look_inside_iff(lambda self: not self._dont_jit) def unpack(self): # slowish "Return a ([w1,w2...], {'kw':w3...}) pair." kwds_w = {} @@ -245,6 +246,8 @@ ### Parsing for function calls ### + # XXX: this should be @jit.look_inside_iff, but we need key word arguments, + # and it doesn't support them for now. def _match_signature(self, w_firstarg, scope_w, signature, defaults_w=None, blindargs=0): """Parse args and kwargs according to the signature of a code object, diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2925,14 +2925,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2967,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,8 +3013,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): @@ -3057,14 +3054,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3104,8 +3100,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3126,8 +3121,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3157,8 +3151,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3179,8 +3172,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3197,14 +3189,13 @@ def FunctionDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3215,14 +3206,13 @@ def FunctionDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3266,8 +3256,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3284,14 +3273,13 @@ def ClassDef_get_bases(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases @@ -3302,14 +3290,13 @@ def ClassDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3320,14 +3307,13 @@ def ClassDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3372,8 +3358,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): @@ -3414,14 +3399,13 @@ def Delete_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3457,14 +3441,13 @@ def Assign_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3479,8 +3462,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): @@ -3527,8 +3509,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): @@ -3549,8 +3530,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3573,8 +3553,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): @@ -3621,8 +3600,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): @@ -3639,14 +3617,13 @@ def Print_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -3661,8 +3638,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3710,8 +3686,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): @@ -3732,8 +3707,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): @@ -3750,14 +3724,13 @@ def For_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3768,14 +3741,13 @@ def For_get_orelse(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3819,8 +3791,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): @@ -3837,14 +3808,13 @@ def While_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3855,14 +3825,13 @@ def While_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3905,8 +3874,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): @@ -3923,14 +3891,13 @@ def If_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3941,14 +3908,13 @@ def If_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3991,8 +3957,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): @@ -4013,8 +3978,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): @@ -4031,14 +3995,13 @@ def With_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4080,8 +4043,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): @@ -4102,8 +4064,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): @@ -4124,8 +4085,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): @@ -4168,14 +4128,13 @@ def TryExcept_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4186,14 +4145,13 @@ def TryExcept_get_handlers(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers @@ -4204,14 +4162,13 @@ def TryExcept_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -4251,14 +4208,13 @@ def TryFinally_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4269,14 +4225,13 @@ def TryFinally_get_finalbody(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody @@ -4318,8 +4273,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def Assert_set_test(space, w_self, w_new_value): @@ -4340,8 +4294,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): @@ -4383,14 +4336,13 @@ def Import_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4430,8 +4382,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4451,14 +4402,13 @@ def ImportFrom_get_names(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4473,8 +4423,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4522,8 +4471,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): @@ -4544,8 +4492,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): @@ -4566,8 +4513,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): @@ -4610,14 +4556,13 @@ def Global_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4657,8 +4602,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): @@ -4754,8 +4698,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4776,8 +4719,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4807,8 +4749,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4827,14 +4768,13 @@ def BoolOp_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -4875,8 +4815,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): @@ -4897,8 +4836,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4921,8 +4859,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): @@ -4969,8 +4906,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4993,8 +4929,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): @@ -5040,8 +4975,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5062,8 +4996,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): @@ -5109,8 +5042,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): @@ -5131,8 +5063,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): @@ -5153,8 +5084,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): @@ -5197,14 +5127,13 @@ def Dict_get_keys(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys @@ -5215,14 +5144,13 @@ def Dict_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -5260,14 +5188,13 @@ def Set_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -5307,8 +5234,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): @@ -5325,14 +5251,13 @@ def ListComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5373,8 +5298,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): @@ -5391,14 +5315,13 @@ def SetComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5439,8 +5362,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): @@ -5461,8 +5383,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): @@ -5479,14 +5400,13 @@ def DictComp_get_generators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5528,8 +5448,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): @@ -5546,14 +5465,13 @@ def GeneratorExp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5594,8 +5512,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): @@ -5640,8 +5557,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): @@ -5658,14 +5574,13 @@ def Compare_get_ops(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops @@ -5676,14 +5591,13 @@ def Compare_get_comparators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators @@ -5726,8 +5640,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): @@ -5744,14 +5657,13 @@ def Call_get_args(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -5762,14 +5674,13 @@ def Call_get_keywords(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords @@ -5784,8 +5695,7 @@ return w_obj if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): @@ -5806,8 +5716,7 @@ return w_obj if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): @@ -5858,8 +5767,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): @@ -5904,8 +5812,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5950,8 +5857,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5996,8 +5902,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): @@ -6018,8 +5923,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6040,8 +5944,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6090,8 +5993,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): @@ -6112,8 +6014,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): @@ -6134,8 +6035,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6184,8 +6084,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6206,8 +6105,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6251,14 +6149,13 @@ def List_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6273,8 +6170,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6319,14 +6215,13 @@ def Tuple_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6341,8 +6236,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6391,8 +6285,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6510,8 +6403,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): @@ -6532,8 +6424,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): @@ -6554,8 +6445,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): @@ -6598,14 +6488,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,8 +6534,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): @@ -6915,8 +6803,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): @@ -6937,8 +6824,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): @@ -6955,14 +6841,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7004,8 +6889,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7026,8 +6910,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7057,8 +6940,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): @@ -7079,8 +6961,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): @@ -7097,14 +6978,13 @@ def ExceptHandler_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -7142,14 +7022,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -7164,8 +7043,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'vararg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'vararg') return space.wrap(w_self.vararg) def arguments_set_vararg(space, w_self, w_new_value): @@ -7189,8 +7067,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwarg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg') return space.wrap(w_self.kwarg) def arguments_set_kwarg(space, w_self, w_new_value): @@ -7210,14 +7087,13 @@ def arguments_get_defaults(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'defaults'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults') if w_self.w_defaults is None: if w_self.defaults is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.defaults] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_defaults = w_list return w_self.w_defaults @@ -7261,8 +7137,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'arg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'arg') return space.wrap(w_self.arg) def keyword_set_arg(space, w_self, w_new_value): @@ -7283,8 +7158,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def keyword_set_value(space, w_self, w_new_value): @@ -7330,8 +7204,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def alias_set_name(space, w_self, w_new_value): @@ -7352,8 +7225,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'asname'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'asname') return space.wrap(w_self.asname) def alias_set_asname(space, w_self, w_new_value): diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -414,13 +414,12 @@ self.emit(" return w_obj", 1) self.emit("if not w_self.initialization_state & %s:" % (flag,), 1) self.emit("typename = space.type(w_self).getname(space)", 2) - self.emit("w_err = space.wrap(\"'%%s' object has no attribute '%s'\" %% typename)" % + self.emit("raise operationerrfmt(space.w_AttributeError, \"'%%s' object has no attribute '%%s'\", typename, '%s')" % (field.name,), 2) - self.emit("raise OperationError(space.w_AttributeError, w_err)", 2) if field.seq: self.emit("if w_self.w_%s is None:" % (field.name,), 1) self.emit("if w_self.%s is None:" % (field.name,), 2) - self.emit("w_list = space.newlist([])", 3) + self.emit("list_w = []", 3) self.emit("else:", 2) if field.type.value in self.data.simple_types: wrapper = "%s_to_class[node - 1]()" % (field.type,) @@ -428,7 +427,7 @@ wrapper = "space.wrap(node)" self.emit("list_w = [%s for node in w_self.%s]" % (wrapper, field.name), 3) - self.emit("w_list = space.newlist(list_w)", 3) + self.emit("w_list = space.newlist(list_w)", 2) self.emit("w_self.w_%s = w_list" % (field.name,), 2) self.emit("return w_self.w_%s" % (field.name,), 1) elif field.type.value in self.data.simple_types: @@ -540,7 +539,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -639,9 +638,7 @@ missing = required[i] if missing is not None: err = "required field \\"%s\\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) + raise operationerrfmt(space.w_TypeError, err, missing, host) raise AssertionError("should not reach here") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -3,18 +3,18 @@ from pypy.interpreter.executioncontext import ExecutionContext, ActionFlag from pypy.interpreter.executioncontext import UserDelAction, FrameTraceAction from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.error import new_exception_class +from pypy.interpreter.error import new_exception_class, typed_unwrap_error_msg from pypy.interpreter.argument import Arguments from pypy.interpreter.miscutils import ThreadLocals from pypy.tool.cache import Cache from pypy.tool.uid import HUGEVAL_BYTES -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, newlist, compute_unique_id from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.timer import DummyTimer, Timer from pypy.rlib.rarithmetic import r_uint from pypy.rlib import jit from pypy.tool.sourcetools import func_with_new_name -import os, sys, py +import os, sys __all__ = ['ObjSpace', 'OperationError', 'Wrappable', 'W_Root'] @@ -186,6 +186,28 @@ def _set_mapdict_storage_and_map(self, storage, map): raise NotImplementedError + # ------------------------------------------------------------------- + + def str_w(self, space): + w_msg = typed_unwrap_error_msg(space, "string", self) + raise OperationError(space.w_TypeError, w_msg) + + def unicode_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "unicode", self)) + + def int_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def uint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def bigint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + class Wrappable(W_Root): """A subclass of Wrappable is an internal, interpreter-level class @@ -755,11 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise items = [] else: - items = [None] * expected_length + try: + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): + raise + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -768,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result @@ -890,7 +967,7 @@ ec.c_call_trace(frame, w_func, args) try: w_res = self.call_args(w_func, args) - except OperationError, e: + except OperationError: ec.c_exception_trace(frame, w_func) raise ec.c_return_trace(frame, w_func, args) @@ -936,6 +1013,9 @@ def isinstance_w(self, w_obj, w_type): return self.is_true(self.isinstance(w_obj, w_type)) + def id(self, w_obj): + return self.wrap(compute_unique_id(w_obj)) + # The code below only works # for the simple case (new-style instance). # These methods are patched with the full logic by the __builtin__ @@ -988,8 +1068,6 @@ def eval(self, expression, w_globals, w_locals, hidden_applevel=False): "NOT_RPYTHON: For internal debugging." - import types - from pypy.interpreter.pycode import PyCode if isinstance(expression, str): compiler = self.createcompiler() expression = compiler.compile(expression, '?', 'eval', 0, @@ -1001,7 +1079,6 @@ def exec_(self, statement, w_globals, w_locals, hidden_applevel=False, filename=None): "NOT_RPYTHON: For internal debugging." - import types if filename is None: filename = '?' from pypy.interpreter.pycode import PyCode @@ -1199,6 +1276,18 @@ return None return self.str_w(w_obj) + def str_w(self, w_obj): + return w_obj.str_w(self) + + def int_w(self, w_obj): + return w_obj.int_w(self) + + def uint_w(self, w_obj): + return w_obj.uint_w(self) + + def bigint_w(self, w_obj): + return w_obj.bigint_w(self) + def realstr_w(self, w_obj): # Like str_w, but only works if w_obj is really of type 'str'. if not self.is_true(self.isinstance(w_obj, self.w_str)): @@ -1206,6 +1295,9 @@ self.wrap('argument must be a string')) return self.str_w(w_obj) + def unicode_w(self, w_obj): + return w_obj.unicode_w(self) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -458,3 +458,7 @@ if module: space.setattr(w_exc, space.wrap("__module__"), space.wrap(module)) return w_exc + +def typed_unwrap_error_msg(space, expected, w_obj): + type_name = space.type(w_obj).getname(space) + return space.wrap("expected %s, got %s object" % (expected, type_name)) diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -1,5 +1,4 @@ import sys -from pypy.interpreter.miscutils import Stack from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.unroll import unrolling_iterable @@ -48,6 +47,7 @@ return frame @staticmethod + @jit.unroll_safe # should usually loop 0 times, very rarely more than once def getnextframe_nohidden(frame): frame = frame.f_backref() while frame and frame.hide(): @@ -81,58 +81,6 @@ # ________________________________________________________________ - - class Subcontext(object): - # coroutine: subcontext support - - def __init__(self): - self.topframe = None - self.w_tracefunc = None - self.profilefunc = None - self.w_profilefuncarg = None - self.is_tracing = 0 - - def enter(self, ec): - ec.topframeref = jit.non_virtual_ref(self.topframe) - ec.w_tracefunc = self.w_tracefunc - ec.profilefunc = self.profilefunc - ec.w_profilefuncarg = self.w_profilefuncarg - ec.is_tracing = self.is_tracing - ec.space.frame_trace_action.fire() - - def leave(self, ec): - self.topframe = ec.gettopframe() - self.w_tracefunc = ec.w_tracefunc - self.profilefunc = ec.profilefunc - self.w_profilefuncarg = ec.w_profilefuncarg - self.is_tracing = ec.is_tracing - - def clear_framestack(self): - self.topframe = None - - # the following interface is for pickling and unpickling - def getstate(self, space): - if self.topframe is None: - return space.w_None - return self.topframe - - def setstate(self, space, w_state): - from pypy.interpreter.pyframe import PyFrame - if space.is_w(w_state, space.w_None): - self.topframe = None - else: - self.topframe = space.interp_w(PyFrame, w_state) - - def getframestack(self): - lst = [] - f = self.topframe - while f is not None: - lst.append(f) - f = f.f_backref() - lst.reverse() - return lst - # coroutine: I think this is all, folks! - def c_call_trace(self, frame, w_func, args=None): "Profile the call of a builtin function" self._c_call_return_trace(frame, w_func, args, 'c_call') @@ -227,6 +175,9 @@ self.w_tracefunc = w_func self.space.frame_trace_action.fire() + def gettrace(self): + return self.w_tracefunc + def setprofile(self, w_func): """Set the global trace function.""" if self.space.is_w(w_func, self.space.w_None): @@ -359,7 +310,11 @@ self._nonperiodic_actions = [] self.has_bytecode_counter = False self.fired_actions = None - self.checkinterval_scaled = 100 * TICK_COUNTER_STEP + # the default value is not 100, unlike CPython 2.7, but a much + # larger value, because we use a technique that not only allows + # but actually *forces* another thread to run whenever the counter + # reaches zero. + self.checkinterval_scaled = 10000 * TICK_COUNTER_STEP self._rebuild_action_dispatcher() def fire(self, action): @@ -398,6 +353,7 @@ elif interval > MAX: interval = MAX self.checkinterval_scaled = interval * TICK_COUNTER_STEP + self.reset_ticker(-1) def _rebuild_action_dispatcher(self): periodic_actions = unrolling_iterable(self._periodic_actions) @@ -435,8 +391,11 @@ def decrement_ticker(self, by): value = self._ticker if self.has_bytecode_counter: # this 'if' is constant-folded - value -= by - self._ticker = value + if jit.isconstant(by) and by == 0: + pass # normally constant-folded too + else: + value -= by + self._ticker = value return value diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -242,8 +242,10 @@ # we have been seen by other means so rtyping should not choke # on us identifier = self.code.identifier - assert Function._all.get(identifier, self) is self, ("duplicate " - "function ids") + previous = Function._all.get(identifier, self) + assert previous is self, ( + "duplicate function ids with identifier=%r: %r and %r" % ( + identifier, previous, self)) self.add_to_table() return False diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/miscutils.py b/pypy/interpreter/miscutils.py --- a/pypy/interpreter/miscutils.py +++ b/pypy/interpreter/miscutils.py @@ -2,154 +2,6 @@ Miscellaneous utilities. """ -import types - -from pypy.rlib.rarithmetic import r_uint - -class RootStack: - pass - -class Stack(RootStack): - """Utility class implementing a stack.""" - - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - def __init__(self): - self.items = [] - - def clone(self): - s = self.__class__() - for item in self.items: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - self.items.append(item) - - def pop(self): - return self.items.pop() - - def drop(self, n): - if n > 0: - del self.items[-n:] - - def top(self, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - return self.items[~position] - - def set_top(self, value, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - self.items[~position] = value - - def depth(self): - return len(self.items) - - def empty(self): - return len(self.items) == 0 - - -class FixedStack(RootStack): - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - # unfortunately, we have to re-do everything - def __init__(self): - pass - - def setup(self, stacksize): - self.ptr = r_uint(0) # we point after the last element - self.items = [None] * stacksize - - def clone(self): - # this is only needed if we support flow space - s = self.__class__() - s.setup(len(self.items)) - for item in self.items[:self.ptr]: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - ptr = self.ptr - self.items[ptr] = item - self.ptr = ptr + 1 - - def pop(self): - ptr = self.ptr - 1 - ret = self.items[ptr] # you get OverflowError if the stack is empty - self.items[ptr] = None - self.ptr = ptr - return ret - - def drop(self, n): - while n > 0: - n -= 1 - self.ptr -= 1 - self.items[self.ptr] = None - - def top(self, position=0): - # for a fixed stack, we assume correct indices - return self.items[self.ptr + ~position] - - def set_top(self, value, position=0): - # for a fixed stack, we assume correct indices - self.items[self.ptr + ~position] = value - - def depth(self): - return self.ptr - - def empty(self): - return not self.ptr - - -class InitializedClass(type): - """NOT_RPYTHON. A meta-class that allows a class to initialize itself (or - its subclasses) by calling __initclass__() as a class method.""" - def __init__(self, name, bases, dict): - super(InitializedClass, self).__init__(name, bases, dict) - for basecls in self.__mro__: - raw = basecls.__dict__.get('__initclass__') - if isinstance(raw, types.FunctionType): - raw(self) # call it as a class method - - -class RwDictProxy(object): - """NOT_RPYTHON. A dict-like class standing for 'cls.__dict__', to work - around the fact that the latter is a read-only proxy for new-style - classes.""" - - def __init__(self, cls): - self.cls = cls - - def __getitem__(self, attr): - return self.cls.__dict__[attr] - - def __setitem__(self, attr, value): - setattr(self.cls, attr, value) - - def __contains__(self, value): - return value in self.cls.__dict__ - - def items(self): - return self.cls.__dict__.items() - - class ThreadLocals: """Pseudo thread-local storage, for 'space.threadlocals'. This is not really thread-local at all; the intention is that the PyPy @@ -167,3 +19,7 @@ def getmainthreadvalue(self): return self._value + + def getallvalues(self): + return {0: self._value} + diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -10,7 +10,7 @@ from pypy.interpreter.argument import Signature from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import NoneNotWrapped, unwrap_spec -from pypy.interpreter.astcompiler.consts import (CO_OPTIMIZED, +from pypy.interpreter.astcompiler.consts import ( CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_NESTED, CO_GENERATOR, CO_CONTAINSGLOBALS) from pypy.rlib.rarithmetic import intmask diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -614,7 +614,8 @@ return self.get_builtin().getdict(space) def fget_f_back(self, space): - return self.space.wrap(self.f_backref()) + f_back = ExecutionContext.getnextframe_nohidden(self) + return self.space.wrap(f_back) def fget_f_lasti(self, space): return self.space.wrap(self.last_instr) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1523,10 +1523,8 @@ if not isinstance(prog, codetype): filename = '' - if not isinstance(prog, str): - if isinstance(prog, basestring): - prog = str(prog) - elif isinstance(prog, file): + if not isinstance(prog, basestring): + if isinstance(prog, file): filename = prog.name prog = prog.read() else: diff --git a/pypy/interpreter/pyparser/future.py b/pypy/interpreter/pyparser/future.py --- a/pypy/interpreter/pyparser/future.py +++ b/pypy/interpreter/pyparser/future.py @@ -225,14 +225,16 @@ raise DoneException self.consume_whitespace() - def consume_whitespace(self): + def consume_whitespace(self, newline_ok=False): while 1: c = self.getc() if c in whitespace: self.pos += 1 continue - elif c == '\\': - self.pos += 1 + elif c == '\\' or newline_ok: + slash = c == '\\' + if slash: + self.pos += 1 c = self.getc() if c == '\n': self.pos += 1 @@ -243,8 +245,10 @@ if self.getc() == '\n': self.pos += 1 self.atbol() + elif slash: + raise DoneException else: - raise DoneException + return else: return @@ -281,7 +285,7 @@ return else: self.pos += 1 - self.consume_whitespace() + self.consume_whitespace(paren_list) if paren_list and self.getc() == ')': self.pos += 1 return # Handles trailing comma inside parenthesis diff --git a/pypy/interpreter/pyparser/pytokenizer.py b/pypy/interpreter/pyparser/pytokenizer.py --- a/pypy/interpreter/pyparser/pytokenizer.py +++ b/pypy/interpreter/pyparser/pytokenizer.py @@ -226,7 +226,7 @@ parenlev = parenlev - 1 if parenlev < 0: raise TokenError("unmatched '%s'" % initial, line, - lnum-1, 0, token_list) + lnum, start + 1, token_list) if token in python_opmap: punct = python_opmap[token] else: diff --git a/pypy/interpreter/pyparser/test/test_futureautomaton.py b/pypy/interpreter/pyparser/test/test_futureautomaton.py --- a/pypy/interpreter/pyparser/test/test_futureautomaton.py +++ b/pypy/interpreter/pyparser/test/test_futureautomaton.py @@ -3,7 +3,7 @@ from pypy.tool import stdlib___future__ as fut def run(s): - f = future.FutureAutomaton(future.futureFlags_2_5, s) + f = future.FutureAutomaton(future.futureFlags_2_7, s) try: f.start() except future.DoneException: @@ -113,6 +113,14 @@ assert f.lineno == 1 assert f.col_offset == 0 +def test_paren_with_newline(): + s = 'from __future__ import (division,\nabsolute_import)\n' + f = run(s) + assert f.pos == len(s) + assert f.flags == (fut.CO_FUTURE_DIVISION | fut.CO_FUTURE_ABSOLUTE_IMPORT) + assert f.lineno == 1 + assert f.col_offset == 0 + def test_multiline(): s = '"abc" #def\n #ghi\nfrom __future__ import (division as b, generators,)\nfrom __future__ import with_statement\n' f = run(s) diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -87,6 +87,10 @@ assert exc.lineno == 1 assert exc.offset == 5 assert exc.lastlineno == 5 + exc = py.test.raises(SyntaxError, parse, "abc)").value + assert exc.msg == "unmatched ')'" + assert exc.lineno == 1 + assert exc.offset == 4 def test_is(self): self.parse("x is y") diff --git a/pypy/interpreter/test/test_exec.py b/pypy/interpreter/test/test_exec.py --- a/pypy/interpreter/test/test_exec.py +++ b/pypy/interpreter/test/test_exec.py @@ -219,3 +219,30 @@ raise e assert res == 1 + + def test_exec_unicode(self): + # 's' is a string + s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + # 'u' is a unicode + u = s.decode('utf-8') + exec u + assert len(x) == 6 + assert ord(x[0]) == 0x0439 + assert ord(x[1]) == 0x0446 + assert ord(x[2]) == 0x0443 + assert ord(x[3]) == 0x043a + assert ord(x[4]) == 0x0435 + assert ord(x[5]) == 0x043d + + def test_eval_unicode(self): + u = "u'%s'" % unichr(0x1234) + v = eval(u) + assert v == unichr(0x1234) + + def test_compile_unicode(self): + s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + u = s.decode('utf-8') + c = compile(u, '', 'exec') + exec c + assert len(x) == 6 + assert ord(x[0]) == 0x0439 diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -42,6 +42,7 @@ assert i == 9 def test_periodic_action(self): + from pypy.interpreter.executioncontext import ActionFlag class DemoAction(executioncontext.PeriodicAsyncAction): counter = 0 @@ -53,17 +54,20 @@ space = self.space a2 = DemoAction(space) - space.actionflag.register_periodic_action(a2, True) try: - for i in range(500): - space.appexec([], """(): - n = 5 - return n + 2 - """) - except Finished: - pass - checkinterval = space.actionflag.getcheckinterval() - assert checkinterval / 10 < i < checkinterval * 1.1 + space.actionflag.setcheckinterval(100) + space.actionflag.register_periodic_action(a2, True) + try: + for i in range(500): + space.appexec([], """(): + n = 5 + return n + 2 + """) + except Finished: + pass + finally: + space.actionflag = ActionFlag() # reset to default + assert 10 < i < 110 def test_llprofile(self): l = [] diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() + +def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): + cache = gc_ll_descr._cache_interiorfield + try: + return cache[(ARRAY, FIELDTP, name)] + except KeyError: + arraydescr = get_array_descr(gc_ll_descr, ARRAY) + fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + descr = InteriorFieldDescr(arraydescr, fielddescr) + cache[(ARRAY, FIELDTP, name)] = descr + return descr # ____________________________________________________________ # CallDescrs @@ -271,12 +316,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -411,7 +460,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) @@ -536,7 +585,8 @@ # if TYPE is lltype.Float or is_longlong(TYPE): setattr(Descr, floatattrname, True) - elif TYPE is not lltype.Bool and rffi.cast(TYPE, -1) == -1: + elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): setattr(Descr, signedattrname, True) # _cache[nameprefix, TYPE] = Descr diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): From noreply at buildbot.pypy.org Wed Nov 9 15:48:42 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 15:48:42 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: start to re-flesh out the dtype interface. now we get to the fun part of exposing the boxes at app level Message-ID: <20111109144842.6E6D58292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49019:4f8dd56c9505 Date: 2011-11-09 09:48 -0500 http://bitbucket.org/pypy/pypy/changeset/4f8dd56c9505/ Log: start to re-flesh out the dtype interface. now we get to the fun part of exposing the boxes at app level diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -50,7 +50,7 @@ return False def decode_index4(self, w_idx, size): - return (self.int_w(w_idx), 0, 0, 1) + return (self.int_w(self.int(w_idx)), 0, 0, 1) @specialize.argtype(1) def wrap(self, obj): @@ -87,7 +87,10 @@ raise NotImplementedError def int(self, w_obj): - return w_obj + if isinstance(w_obj, IntObject): + return w_obj + assert isinstance(w_obj, interp_boxes.W_GenericBox) + return IntObject(int(w_obj.value)) def is_true(self, w_obj): assert isinstance(w_obj, BoolObject) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,4 +1,7 @@ from pypy.interpreter.baseobjspace import Wrappable +from pypy.interpreter.error import OperationError +from pypy.interpreter.gateway import interp2app +from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.module.micronumpy import types, signature from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT @@ -13,11 +16,14 @@ FLOATINGLTR = "f" class W_Dtype(Wrappable): - def __init__(self, itemtype, num, kind): + def __init__(self, itemtype, num, kind, name, char, alternate_constructors=[]): self.signature = signature.BaseSignature() self.itemtype = itemtype self.num = num self.kind = kind + self.name = name + self.char = char + self.alternate_constructors = alternate_constructors def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -41,6 +47,40 @@ struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) self.itemtype.store(struct_ptr, 0, box) + def descr__new__(space, w_subtype, w_dtype): + cache = get_dtype_cache(space) + + if space.is_w(w_dtype, space.w_None): + return cache.w_float64dtype + elif space.isinstance_w(w_dtype, w_subtype): + return w_dtype + elif space.isinstance_w(w_dtype, space.w_str): + name = space.str_w(w_dtype) + for dtype in cache.builtin_dtypes: + if dtype.name == name or dtype.char == name: + return dtype + else: + for dtype in cache.builtin_dtypes: + if w_dtype in dtype.alternate_constructors: + return dtype + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + + def descr_str(self, space): + return space.wrap(self.name) + + def descr_repr(self, space): + return space.wrap("dtype('%s')" % self.name) + +W_Dtype.typedef = TypeDef("dtype", + __module__ = "numpy", + __new__ = interp2app(W_Dtype.descr__new__.im_func), + + __str__= interp2app(W_Dtype.descr_str), + __repr__ = interp2app(W_Dtype.descr_repr), + + num = interp_attrproperty("num", cls=W_Dtype), + kind = interp_attrproperty("kind", cls=W_Dtype), +) class DtypeCache(object): def __init__(self, space): @@ -48,72 +88,104 @@ types.Bool(), num=0, kind=BOOLLTR, + name="bool", + char="?", + alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( types.Int8(), num=1, kind=SIGNEDLTR, + name="int8", + char="b", ) self.w_uint8dtype = W_Dtype( types.UInt8(), num=2, kind=UNSIGNEDLTR, + name="uint8", + char="B", ) self.w_int16dtype = W_Dtype( types.Int16(), num=3, kind=SIGNEDLTR, + name="int16", + char="h", ) self.w_uint16dtype = W_Dtype( types.UInt16(), num=4, kind=UNSIGNEDLTR, + name="uint16", + char="H", ) self.w_int32dtype = W_Dtype( types.Int32(), num=5, kind=SIGNEDLTR, + name="int32", + char="i", ) self.w_uint32dtype = W_Dtype( types.UInt32(), num=6, kind=UNSIGNEDLTR, + name="uint32", + char="I", ) if LONG_BIT == 32: longtype = types.Int32() unsigned_longtype = types.UInt32() + name = "int32" elif LONG_BIT == 64: longtype = types.Int64() unsigned_longtype = types.UInt64() + name = "int64" self.w_longdtype = W_Dtype( longtype, num=7, kind=SIGNEDLTR, + name=name, + char="l", + alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( unsigned_longtype, num=8, kind=UNSIGNEDLTR, + name="u" + name, + char="L", ) self.w_int64dtype = W_Dtype( types.Int64(), num=9, kind=SIGNEDLTR, + name="int64", + char="q", + alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( types.UInt64(), num=10, kind=UNSIGNEDLTR, + name="uint64", + char="Q", ) self.w_float32dtype = W_Dtype( types.Float32(), num=11, kind=FLOATINGLTR, + name="float32", + char="f", ) self.w_float64dtype = W_Dtype( types.Float64(), num=12, kind=FLOATINGLTR, + name="float32", + char="d", + alternate_constructors=[space.w_float], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -239,7 +239,7 @@ start, stop, step, slice_length = space.decode_index4(w_idx, self.find_size()) if step == 0: # Single index - return self.get_concrete().eval(start).wrap(space) + return self.get_concrete().eval(start) else: # Slice new_sig = signature.Signature.find_sig([ @@ -540,8 +540,7 @@ return space.wrap(self.size) def setitem_w(self, space, item, w_value): - self.invalidated() - self.dtype.setitem_w(space, self.storage, item, w_value) + return self.setitem(item, self.dtype.coerce(space, w_value)) def setitem(self, item, value): self.invalidated() diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -140,7 +140,7 @@ a -> 3 """ interp = self.run(code) - assert interp.results[0].value.val == 15 + assert interp.results[0].value.value == 15 def test_min(self): interp = self.run(""" @@ -149,7 +149,7 @@ b = a + a min(b) """) - assert interp.results[0].value.val == -24 + assert interp.results[0].value.value == -24 def test_max(self): interp = self.run(""" @@ -158,7 +158,7 @@ b = a + a max(b) """) - assert interp.results[0].value.val == 256 + assert interp.results[0].value.value == 256 def test_slice(self): py.test.skip("in progress") From noreply at buildbot.pypy.org Wed Nov 9 16:26:22 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 9 Nov 2011 16:26:22 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix an issue in clibffi that is triggered on big endian platforms due to the byte order when casting a larger data type to smaller one to be passed to a function called through ffi Message-ID: <20111109152622.8D7BD8292E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49020:1c63c71d3b29 Date: 2011-11-09 16:22 +0100 http://bitbucket.org/pypy/pypy/changeset/1c63c71d3b29/ Log: fix an issue in clibffi that is triggered on big endian platforms due to the byte order when casting a larger data type to smaller one to be passed to a function called through ffi diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -337,15 +340,46 @@ return TYPE_MAP[tp] cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' -def push_arg_as_ffiptr(ffitp, arg, ll_buf): +def push_arg_as_ffiptr_base(ffitp, arg, ll_buf): + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) + # XXX is this valid in C?, for args that are larger than the size of + # ll_buf we write over the boundaries of the allocated char array and + # just keep as much bytes as we need for the target type. Maybe using + # memcpy would be better here. Also this + # only works on little endian architectures + TP = lltype.typeOf(arg) + TP_P = lltype.Ptr(rffi.CArray(TP)) + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg +push_arg_as_ffiptr_base._annspecialcase_ = 'specialize:argtype(1)' + +def push_arg_as_ffiptr_memcpy(ffitp, arg, ll_buf): # this is for primitive types. For structures and arrays # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg -push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we do not can directly write the + # value to the buffer + if c_size == TP_size: + return push_arg_as_ffiptr_base(ffitp, arg, ll_buf) + + # store arg in a small box in memory + # and copy the relevant bytes over to the target buffer (ll_buf) + with lltype.scoped_alloc(TP_P.TO, TP_size) as argbuf: + argbuf[0] = arg + cargbuf = rffi.cast(rffi.CCHARP, argbuf) + ptr = rffi.ptradd(cargbuf, TP_size - c_size) + rffi.c_memcpy(ll_buf, ptr, c_size) +push_arg_as_ffiptr_memcpy._annspecialcase_ = 'specialize:argtype(1)' + +if _LITTLE_ENDIAN: + push_arg_as_ffiptr = push_arg_as_ffiptr_base +else: + push_arg_as_ffiptr = push_arg_as_ffiptr_memcpy # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) From noreply at buildbot.pypy.org Wed Nov 9 16:28:38 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 16:28:38 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: added files I forgot Message-ID: <20111109152838.3EFEE8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49021:7578bf6439b7 Date: 2011-11-09 10:28 -0500 http://bitbucket.org/pypy/pypy/changeset/7578bf6439b7/ Log: added files I forgot diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -14,8 +14,8 @@ 'ones': 'interp_numarray.ones', 'fromstring': 'interp_support.fromstring', - 'True_': 'space.w_True', - 'False_': 'space.w_False', + 'True_': 'types.Bool.True', + 'False_': 'types.Bool.False', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/interp_boxes.py @@ -0,0 +1,45 @@ +from pypy.interpreter.baseobjspace import Wrappable +from pypy.interpreter.typedef import TypeDef + + +class W_GenericBox(Wrappable): + pass + +class W_BoolBox(Wrappable): + def __init__(self, value): + self.value = value + +class W_NumberBox(W_GenericBox): + def __init__(self, value): + self.value = value + + def convert_to(self, dtype): + return dtype.box(self.value) + +class W_IntegerBox(W_NumberBox): + pass + +class W_SignedIntegerBox(W_IntegerBox): + pass + +class W_Int64Box(W_SignedIntegerBox): + pass + +class W_InexactBox(W_NumberBox): + pass + +class W_FloatingBox(W_InexactBox): + pass + +class W_Float64Box(W_FloatingBox): + def descr_get_dtype(self, space): + from pypy.module.micronumpy.interp_dtype import get_dtype_cache + return get_dtype_cache(space).w_float64dtype + +W_GenericBox.typedef = TypeDef("generic", + __module__ = "numpy", +) + +W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, + __module__ = "numpy", +) \ No newline at end of file diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/types.py @@ -0,0 +1,108 @@ +from pypy.module.micronumpy import interp_boxes +from pypy.objspace.std.floatobject import float2string +from pypy.rlib import rfloat +from pypy.rpython.lltypesystem import lltype, rffi + + +class BaseType(object): + def _unimplemented_ufunc(self, *args): + raise NotImplementedError + add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ + min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ + exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ + arctanh = _unimplemented_ufunc + +class Primitive(BaseType): + def get_element_size(self): + return rffi.sizeof(self.T) + + def box(self, value): + return self.BoxType(rffi.cast(self.T, value)) + + def unbox(self, box): + assert isinstance(box, self.BoxType) + return box.value + + def coerce(self, space, w_item): + raise NotImplementedError + + def read(self, ptr, offset): + ptr = rffi.ptradd(ptr, offset) + return self.box( + rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] + ) + + def store(self, ptr, offset, box): + value = self.unbox(box) + ptr = rffi.ptradd(ptr, offset) + rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] = value + + def add(self, v1, v2): + return self.box(self.unbox(v1) + self.unbox(v2)) + + def max(self, v1, v2): + return self.box(max(self.unbox(v1), self.unbox(v2))) + + def min(self, v1, v2): + return self.box(min(self.unbox(v1), self.unbox(v2))) + +class Bool(Primitive): + T = lltype.Bool + BoxType = interp_boxes.W_BoolBox + + True = BoxType(True) + False = BoxType(False) + + def box(self, value): + box = Primitive.box(self, value) + if box.value: + return self.True + else: + return self.False + + def coerce(self, space, w_item): + return self.box(space.is_true(w_item)) + +class Integer(Primitive): + def coerce(self, space, w_item): + return self.box(space.int_w(space.int(w_item))) + +class Int8(Primitive): + T = rffi.SIGNEDCHAR + +class UInt8(Primitive): + T = rffi.UCHAR + +class Int16(Primitive): + T = rffi.SHORT + +class UInt16(Primitive): + T = rffi.USHORT + +class Int32(Primitive): + T = rffi.INT + +class UInt32(Primitive): + T = rffi.UINT + +class Int64(Integer): + T = rffi.LONGLONG + BoxType = interp_boxes.W_Int64Box + +class UInt64(Primitive): + T = rffi.ULONGLONG + +class Float(Primitive): + def coerce(self, space, w_item): + return self.box(space.float_w(space.float(w_item))) + + def str_format(self, box): + value = self.unbox(box) + return float2string(value, "g", rfloat.DTSF_STR_PRECISION) + +class Float32(Primitive): + T = rffi.FLOAT + +class Float64(Float): + T = rffi.DOUBLE + BoxType = interp_boxes.W_Float64Box \ No newline at end of file From noreply at buildbot.pypy.org Wed Nov 9 17:07:21 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 17:07:21 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: Make rarithmetic.longlongmask() translatable. Message-ID: <20111109160721.6DBF48292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: ffistruct Changeset: r49022:9699c8a780e1 Date: 2011-11-09 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/9699c8a780e1/ Log: Make rarithmetic.longlongmask() translatable. diff --git a/pypy/annotation/builtin.py b/pypy/annotation/builtin.py --- a/pypy/annotation/builtin.py +++ b/pypy/annotation/builtin.py @@ -294,6 +294,9 @@ def rarith_intmask(s_obj): return SomeInteger() +def rarith_longlongmask(s_obj): + return SomeInteger(knowntype=pypy.rlib.rarithmetic.r_longlong) + def robjmodel_instantiate(s_clspbc): assert isinstance(s_clspbc, SomePBC) clsdef = None @@ -372,6 +375,7 @@ BUILTIN_ANALYZERS[original] = value BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.intmask] = rarith_intmask +BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.longlongmask] = rarith_longlongmask BUILTIN_ANALYZERS[pypy.rlib.objectmodel.instantiate] = robjmodel_instantiate BUILTIN_ANALYZERS[pypy.rlib.objectmodel.r_dict] = robjmodel_r_dict BUILTIN_ANALYZERS[pypy.rlib.objectmodel.hlinvoke] = robjmodel_hlinvoke diff --git a/pypy/rpython/rbuiltin.py b/pypy/rpython/rbuiltin.py --- a/pypy/rpython/rbuiltin.py +++ b/pypy/rpython/rbuiltin.py @@ -239,6 +239,11 @@ vlist = hop.inputargs(lltype.Signed) return vlist[0] +def rtype_longlongmask(hop): + hop.exception_cannot_occur() + vlist = hop.inputargs(lltype.SignedLongLong) + return vlist[0] + def rtype_builtin_min(hop): v1, v2 = hop.inputargs(hop.r_result, hop.r_result) return hop.gendirectcall(ll_min, v1, v2) @@ -549,6 +554,7 @@ BUILTIN_TYPER[lltype.Ptr] = rtype_const_result BUILTIN_TYPER[lltype.runtime_type_info] = rtype_runtime_type_info BUILTIN_TYPER[rarithmetic.intmask] = rtype_intmask +BUILTIN_TYPER[rarithmetic.longlongmask] = rtype_longlongmask BUILTIN_TYPER[objectmodel.hlinvoke] = rtype_hlinvoke diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -5,7 +5,7 @@ from pypy.rlib.debug import llinterpcall from pypy.rpython.lltypesystem import lltype from pypy.tool import udir -from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rarithmetic import intmask, longlongmask, r_int64 from pypy.rlib.rarithmetic import r_int, r_uint, r_longlong, r_ulonglong from pypy.annotation.builtin import * from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin @@ -79,6 +79,16 @@ res = self.interpret(f, [r_uint(5)]) assert type(res) is int and res == 5 + def test_longlongmask(self): + def f(x=r_ulonglong): + try: + return longlongmask(x) + except ValueError: + return 0 + + res = self.interpret(f, [r_ulonglong(5)]) + assert type(res) is r_int64 and res == 5 + def test_rbuiltin_list(self): def f(): l=list((1,2,3)) diff --git a/pypy/translator/c/test/test_typed.py b/pypy/translator/c/test/test_typed.py --- a/pypy/translator/c/test/test_typed.py +++ b/pypy/translator/c/test/test_typed.py @@ -877,3 +877,13 @@ assert res == 'acquire, hello, raised, release' res = f(2) assert res == 'acquire, hello, raised, release' + + def test_longlongmask(self): + from pypy.rlib.rarithmetic import longlongmask, r_ulonglong + def func(n): + m = r_ulonglong(n) + m *= 100000 + return longlongmask(m) + f = self.getcompiled(func, [int]) + res = f(-2000000000) + assert res == -200000000000000 From noreply at buildbot.pypy.org Wed Nov 9 17:07:25 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 17:07:25 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: merge heads Message-ID: <20111109160725.6C5FE8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: ffistruct Changeset: r49023:3c7c182b8c8a Date: 2011-11-09 17:07 +0100 http://bitbucket.org/pypy/pypy/changeset/3c7c182b8c8a/ Log: merge heads diff too long, truncating to 10000 out of 43212 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,2 +1,3 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), @@ -317,7 +317,7 @@ RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), @@ -359,7 +359,7 @@ RegrTest('test_property.py', core=True), RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), - RegrTest('test_pwd.py', skip=skip_win32), + RegrTest('test_pwd.py', usemodules="pwd", skip=skip_win32), RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/gzip.py b/lib-python/modified-2.7/gzip.py deleted file mode 100644 --- a/lib-python/modified-2.7/gzip.py +++ /dev/null @@ -1,514 +0,0 @@ -"""Functions that read and write gzipped files. - -The user of the file doesn't have to worry about the compression, -but random access is not allowed.""" - -# based on Andrew Kuchling's minigzip.py distributed with the zlib module - -import struct, sys, time, os -import zlib -import io -import __builtin__ - -__all__ = ["GzipFile","open"] - -FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16 - -READ, WRITE = 1, 2 - -def write32u(output, value): - # The L format writes the bit pattern correctly whether signed - # or unsigned. - output.write(struct.pack("' - - def _check_closed(self): - """Raises a ValueError if the underlying file object has been closed. - - """ - if self.closed: - raise ValueError('I/O operation on closed file.') - - def _init_write(self, filename): - self.name = filename - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - self.writebuf = [] - self.bufsize = 0 - - def _write_gzip_header(self): - self.fileobj.write('\037\213') # magic header - self.fileobj.write('\010') # compression method - fname = os.path.basename(self.name) - if fname.endswith(".gz"): - fname = fname[:-3] - flags = 0 - if fname: - flags = FNAME - self.fileobj.write(chr(flags)) - mtime = self.mtime - if mtime is None: - mtime = time.time() - write32u(self.fileobj, long(mtime)) - self.fileobj.write('\002') - self.fileobj.write('\377') - if fname: - self.fileobj.write(fname + '\000') - - def _init_read(self): - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - - def _read_gzip_header(self): - magic = self.fileobj.read(2) - if magic != '\037\213': - raise IOError, 'Not a gzipped file' - method = ord( self.fileobj.read(1) ) - if method != 8: - raise IOError, 'Unknown compression method' - flag = ord( self.fileobj.read(1) ) - self.mtime = read32(self.fileobj) - # extraflag = self.fileobj.read(1) - # os = self.fileobj.read(1) - self.fileobj.read(2) - - if flag & FEXTRA: - # Read & discard the extra field, if present - xlen = ord(self.fileobj.read(1)) - xlen = xlen + 256*ord(self.fileobj.read(1)) - self.fileobj.read(xlen) - if flag & FNAME: - # Read and discard a null-terminated string containing the filename - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FCOMMENT: - # Read and discard a null-terminated string containing a comment - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FHCRC: - self.fileobj.read(2) # Read & discard the 16-bit header CRC - - def write(self,data): - self._check_closed() - if self.mode != WRITE: - import errno - raise IOError(errno.EBADF, "write() on read-only GzipFile object") - - if self.fileobj is None: - raise ValueError, "write() on closed GzipFile object" - - # Convert data type if called by io.BufferedWriter. - if isinstance(data, memoryview): - data = data.tobytes() - - if len(data) > 0: - self.size = self.size + len(data) - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - self.fileobj.write( self.compress.compress(data) ) - self.offset += len(data) - - return len(data) - - def read(self, size=-1): - self._check_closed() - if self.mode != READ: - import errno - raise IOError(errno.EBADF, "read() on write-only GzipFile object") - - if self.extrasize <= 0 and self.fileobj is None: - return '' - - readsize = 1024 - if size < 0: # get the whole thing - try: - while True: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - size = self.extrasize - elif size == 0: - return "" - else: # just get some more of it - try: - while size > self.extrasize: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - if size > self.extrasize: - size = self.extrasize - - offset = self.offset - self.extrastart - chunk = self.extrabuf[offset: offset + size] - self.extrasize = self.extrasize - size - - self.offset += size - return chunk - - def _unread(self, buf): - self.extrasize = len(buf) + self.extrasize - self.offset -= len(buf) - - def _read(self, size=1024): - if self.fileobj is None: - raise EOFError, "Reached EOF" - - if self._new_member: - # If the _new_member flag is set, we have to - # jump to the next member, if there is one. - # - # First, check if we're at the end of the file; - # if so, it's time to stop; no more members to read. - pos = self.fileobj.tell() # Save current position - self.fileobj.seek(0, 2) # Seek to end of file - if pos == self.fileobj.tell(): - raise EOFError, "Reached EOF" - else: - self.fileobj.seek( pos ) # Return to original position - - self._init_read() - self._read_gzip_header() - self.decompress = zlib.decompressobj(-zlib.MAX_WBITS) - self._new_member = False - - # Read a chunk of data from the file - buf = self.fileobj.read(size) - - # If the EOF has been reached, flush the decompression object - # and mark this object as finished. - - if buf == "": - uncompress = self.decompress.flush() - self._read_eof() - self._add_read_data( uncompress ) - raise EOFError, 'Reached EOF' - - uncompress = self.decompress.decompress(buf) - self._add_read_data( uncompress ) - - if self.decompress.unused_data != "": - # Ending case: we've come to the end of a member in the file, - # so seek back to the start of the unused data, finish up - # this member, and read a new gzip header. - # (The number of bytes to seek back is the length of the unused - # data, minus 8 because _read_eof() will rewind a further 8 bytes) - self.fileobj.seek( -len(self.decompress.unused_data)+8, 1) - - # Check the CRC and file size, and set the flag so we read - # a new member on the next call - self._read_eof() - self._new_member = True - - def _add_read_data(self, data): - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - offset = self.offset - self.extrastart - self.extrabuf = self.extrabuf[offset:] + data - self.extrasize = self.extrasize + len(data) - self.extrastart = self.offset - self.size = self.size + len(data) - - def _read_eof(self): - # We've read to the end of the file, so we have to rewind in order - # to reread the 8 bytes containing the CRC and the file size. - # We check the that the computed CRC and size of the - # uncompressed data matches the stored values. Note that the size - # stored is the true file size mod 2**32. - self.fileobj.seek(-8, 1) - crc32 = read32(self.fileobj) - isize = read32(self.fileobj) # may exceed 2GB - if crc32 != self.crc: - raise IOError("CRC check failed %s != %s" % (hex(crc32), - hex(self.crc))) - elif isize != (self.size & 0xffffffffL): - raise IOError, "Incorrect length of data produced" - - # Gzip files can be padded with zeroes and still have archives. - # Consume all zero bytes and set the file position to the first - # non-zero byte. See http://www.gzip.org/#faq8 - c = "\x00" - while c == "\x00": - c = self.fileobj.read(1) - if c: - self.fileobj.seek(-1, 1) - - @property - def closed(self): - return self.fileobj is None - - def close(self): - if self.fileobj is None: - return - if self.mode == WRITE: - self.fileobj.write(self.compress.flush()) - write32u(self.fileobj, self.crc) - # self.size may exceed 2GB, or even 4GB - write32u(self.fileobj, self.size & 0xffffffffL) - self.fileobj = None - elif self.mode == READ: - self.fileobj = None - if self.myfileobj: - self.myfileobj.close() - self.myfileobj = None - - def flush(self,zlib_mode=zlib.Z_SYNC_FLUSH): - self._check_closed() - if self.mode == WRITE: - # Ensure the compressor's buffer is flushed - self.fileobj.write(self.compress.flush(zlib_mode)) - self.fileobj.flush() - - def fileno(self): - """Invoke the underlying file object's fileno() method. - - This will raise AttributeError if the underlying file object - doesn't support fileno(). - """ - return self.fileobj.fileno() - - def rewind(self): - '''Return the uncompressed stream file position indicator to the - beginning of the file''' - if self.mode != READ: - raise IOError("Can't rewind in write mode") - self.fileobj.seek(0) - self._new_member = True - self.extrabuf = "" - self.extrasize = 0 - self.extrastart = 0 - self.offset = 0 - - def readable(self): - return self.mode == READ - - def writable(self): - return self.mode == WRITE - - def seekable(self): - return True - - def seek(self, offset, whence=0): - if whence: - if whence == 1: - offset = self.offset + offset - else: - raise ValueError('Seek from end not supported') - if self.mode == WRITE: - if offset < self.offset: - raise IOError('Negative seek in write mode') - count = offset - self.offset - for i in range(count // 1024): - self.write(1024 * '\0') - self.write((count % 1024) * '\0') - elif self.mode == READ: - if offset == self.offset: - self.read(0) # to make sure that this file is open - return self.offset - if offset < self.offset: - # for negative seek, rewind and do positive seek - self.rewind() - count = offset - self.offset - for i in range(count // 1024): - self.read(1024) - self.read(count % 1024) - - return self.offset - - def readline(self, size=-1): - if size < 0: - # Shortcut common case - newline found in buffer. - offset = self.offset - self.extrastart - i = self.extrabuf.find('\n', offset) + 1 - if i > 0: - self.extrasize -= i - offset - self.offset += i - offset - return self.extrabuf[offset: i] - - size = sys.maxint - readsize = self.min_readsize - else: - readsize = size - bufs = [] - while size != 0: - c = self.read(readsize) - i = c.find('\n') - - # We set i=size to break out of the loop under two - # conditions: 1) there's no newline, and the chunk is - # larger than size, or 2) there is a newline, but the - # resulting line would be longer than 'size'. - if (size <= i) or (i == -1 and len(c) > size): - i = size - 1 - - if i >= 0 or c == '': - bufs.append(c[:i + 1]) # Add portion of last chunk - self._unread(c[i + 1:]) # Push back rest of chunk - break - - # Append chunk to list, decrease 'size', - bufs.append(c) - size = size - len(c) - readsize = min(size, readsize * 2) - if readsize > self.min_readsize: - self.min_readsize = min(readsize, self.min_readsize * 2, 512) - return ''.join(bufs) # Return resulting line - - -def _test(): - # Act like gzip; with -d, act like gunzip. - # The input file is not deleted, however, nor are any other gzip - # options or features supported. - args = sys.argv[1:] - decompress = args and args[0] == "-d" - if decompress: - args = args[1:] - if not args: - args = ["-"] - for arg in args: - if decompress: - if arg == "-": - f = GzipFile(filename="", mode="rb", fileobj=sys.stdin) - g = sys.stdout - else: - if arg[-3:] != ".gz": - print "filename doesn't end in .gz:", repr(arg) - continue - f = open(arg, "rb") - g = __builtin__.open(arg[:-3], "wb") - else: - if arg == "-": - f = sys.stdin - g = GzipFile(filename="", mode="wb", fileobj=sys.stdout) - else: - f = __builtin__.open(arg, "rb") - g = open(arg + ".gz", "wb") - while True: - chunk = f.read(1024) - if not chunk: - break - g.write(chunk) - if g is not sys.stdout: - g.close() - if f is not sys.stdin: - f.close() - -if __name__ == '__main__': - _test() diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. + + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. + version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. + pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. + """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/tarfile.py b/lib-python/modified-2.7/tarfile.py --- a/lib-python/modified-2.7/tarfile.py +++ b/lib-python/modified-2.7/tarfile.py @@ -252,8 +252,8 @@ the high bit set. So we calculate two checksums, unsigned and signed. """ - unsigned_chksum = 256 + sum(struct.unpack("148B8x356B", buf[:512])) - signed_chksum = 256 + sum(struct.unpack("148b8x356b", buf[:512])) + unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) + signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) return unsigned_chksum, signed_chksum def copyfileobj(src, dst, length=None): @@ -265,6 +265,7 @@ if length is None: shutil.copyfileobj(src, dst) return + BUFSIZE = 16 * 1024 blocks, remainder = divmod(length, BUFSIZE) for b in xrange(blocks): @@ -801,19 +802,19 @@ if self.closed: raise ValueError("I/O operation on closed file") + buf = "" if self.buffer: if size is None: - buf = self.buffer + self.fileobj.read() + buf = self.buffer self.buffer = "" else: buf = self.buffer[:size] self.buffer = self.buffer[size:] - buf += self.fileobj.read(size - len(buf)) + + if size is None: + buf += self.fileobj.read() else: - if size is None: - buf = self.fileobj.read() - else: - buf = self.fileobj.read(size) + buf += self.fileobj.read(size - len(buf)) self.position += len(buf) return buf diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -966,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -978,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! + meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. + + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. + raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? + + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_functools.py b/lib_pypy/_functools.py --- a/lib_pypy/_functools.py +++ b/lib_pypy/_functools.py @@ -14,10 +14,9 @@ raise TypeError("the first argument must be callable") self.func = func self.args = args - self.keywords = keywords + self.keywords = keywords or None def __call__(self, *fargs, **fkeywords): - newkeywords = self.keywords.copy() - newkeywords.update(fkeywords) - return self.func(*(self.args + fargs), **newkeywords) - + if self.keywords is not None: + fkeywords = dict(self.keywords, **fkeywords) + return self.func(*(self.args + fargs), **fkeywords) diff --git a/lib_pypy/_pypy_interact.py b/lib_pypy/_pypy_interact.py --- a/lib_pypy/_pypy_interact.py +++ b/lib_pypy/_pypy_interact.py @@ -56,6 +56,10 @@ prompt = getattr(sys, 'ps1', '>>> ') try: line = raw_input(prompt) + # Can be None if sys.stdin was redefined + encoding = getattr(sys.stdin, 'encoding', None) + if encoding and not isinstance(line, unicode): + line = line.decode(encoding) except EOFError: console.write("\n") break diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -48,23 +48,23 @@ def switch(self, *args): "Switch execution to this greenlet, optionally passing the values " "given as argument(s). Returns the value passed when switching back." - return self.__switch(_continulet.switch, args) + return self.__switch('switch', args) def throw(self, typ=GreenletExit, val=None, tb=None): "raise exception in greenlet, return value passed when switching back" - return self.__switch(_continulet.throw, typ, val, tb) + return self.__switch('throw', typ, val, tb) - def __switch(target, unbound_method, *args): + def __switch(target, methodname, *args): current = getcurrent() # while not target: if not target.__started: - if unbound_method != _continulet.throw: + if methodname == 'switch': greenlet_func = _greenlet_start else: greenlet_func = _greenlet_throw _continulet.__init__(target, greenlet_func, *args) - unbound_method = _continulet.switch + methodname = 'switch' args = () target.__started = True break @@ -75,22 +75,8 @@ target = target.parent # try: - if current.__main: - if target.__main: - # switch from main to main - if unbound_method == _continulet.throw: - raise args[0], args[1], args[2] - (args,) = args - else: - # enter from main to target - args = unbound_method(target, *args) - else: - if target.__main: - # leave to go to target=main - args = unbound_method(current, *args) - else: - # switch from non-main to non-main - args = unbound_method(current, *args, to=target) + unbound_method = getattr(_continulet, methodname) + args = unbound_method(current, *args, to=target) except GreenletExit, e: args = (e,) finally: @@ -110,7 +96,16 @@ @property def gr_frame(self): - raise NotImplementedError("attribute 'gr_frame' of greenlet objects") + # xxx this doesn't work when called on either the current or + # the main greenlet of another thread + if self is getcurrent(): + return None + if self.__main: + self = getcurrent() + f = _continulet.__reduce__(self)[2][0] + if not f: + return None + return f.f_back.f_back.f_back # go past start(), __switch(), switch() # ____________________________________________________________ # Internal stuff @@ -138,8 +133,7 @@ try: res = greenlet.run(*args) finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) return (res,) def _greenlet_throw(greenlet, exc, value, tb): @@ -147,5 +141,4 @@ try: raise exc, value, tb finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) diff --git a/lib_pypy/pypy_test/test_stackless_pickling.py b/lib_pypy/pypy_test/test_stackless_pickling.py --- a/lib_pypy/pypy_test/test_stackless_pickling.py +++ b/lib_pypy/pypy_test/test_stackless_pickling.py @@ -1,7 +1,3 @@ -""" -this test should probably not run from CPython or py.py. -I'm not entirely sure, how to do that. -""" from __future__ import absolute_import from py.test import skip try: @@ -16,11 +12,15 @@ class Test_StacklessPickling: + def test_pickle_main_coroutine(self): + import stackless, pickle + s = pickle.dumps(stackless.coroutine.getcurrent()) + print s + c = pickle.loads(s) + assert c is stackless.coroutine.getcurrent() + def test_basic_tasklet_pickling(self): - try: - import stackless - except ImportError: - skip("can't load stackless and don't know why!!!") + import stackless from stackless import run, schedule, tasklet import pickle diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/completing_reader.py b/lib_pypy/pyrepl/completing_reader.py --- a/lib_pypy/pyrepl/completing_reader.py +++ b/lib_pypy/pyrepl/completing_reader.py @@ -229,7 +229,8 @@ def after_command(self, cmd): super(CompletingReader, self).after_command(cmd) - if not isinstance(cmd, complete) and not isinstance(cmd, self_insert): + if not isinstance(cmd, self.commands['complete']) \ + and not isinstance(cmd, self.commands['self_insert']): self.cmpltn_reset() def calc_screen(self): diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -576,7 +576,7 @@ self.console.push_char(char) self.handle1(0) - def readline(self): + def readline(self, returns_unicode=False): """Read a line. The implementation of this method also shows how to drive Reader if you want more control over the event loop.""" @@ -585,6 +585,8 @@ self.refresh() while not self.finished: self.handle1() + if returns_unicode: + return self.get_unicode() return self.get_buffer() finally: self.restore() diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -198,7 +198,7 @@ reader.ps1 = prompt return reader.readline() - def multiline_input(self, more_lines, ps1, ps2): + def multiline_input(self, more_lines, ps1, ps2, returns_unicode=False): """Read an input on possibly multiple lines, asking for more lines as long as 'more_lines(unicodetext)' returns an object whose boolean value is true. @@ -209,7 +209,7 @@ reader.more_lines = more_lines reader.ps1 = reader.ps2 = ps1 reader.ps3 = reader.ps4 = ps2 - return reader.readline() + return reader.readline(returns_unicode=returns_unicode) finally: reader.more_lines = saved @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/pyrepl/simple_interact.py b/lib_pypy/pyrepl/simple_interact.py --- a/lib_pypy/pyrepl/simple_interact.py +++ b/lib_pypy/pyrepl/simple_interact.py @@ -54,7 +54,8 @@ ps1 = getattr(sys, 'ps1', '>>> ') ps2 = getattr(sys, 'ps2', '... ') try: - statement = multiline_input(more_lines, ps1, ps2) + statement = multiline_input(more_lines, ps1, ps2, + returns_unicode=True) except EOFError: break more = console.push(statement) diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/lib_pypy/stackless.py b/lib_pypy/stackless.py --- a/lib_pypy/stackless.py +++ b/lib_pypy/stackless.py @@ -5,51 +5,54 @@ """ -import traceback import _continuation -from functools import partial class TaskletExit(Exception): pass CoroutineExit = TaskletExit -class GWrap(_continuation.continulet): - """This is just a wrapper around continulet to allow - to stick additional attributes to a continulet. - To be more concrete, we need a backreference to - the coroutine object""" + +def _coroutine_getcurrent(): + "Returns the current coroutine (i.e. the one which called this function)." + try: + return _tls.current_coroutine + except AttributeError: + # first call in this thread: current == main + return _coroutine_getmain() + +def _coroutine_getmain(): + try: + return _tls.main_coroutine + except AttributeError: + # create the main coroutine for this thread + continulet = _continuation.continulet + main = coroutine() + main._frame = continulet.__new__(continulet) + main._is_started = -1 + _tls.current_coroutine = _tls.main_coroutine = main + return _tls.main_coroutine class coroutine(object): - "we can't have continulet as a base, because continulets can't be rebound" + _is_started = 0 # 0=no, 1=yes, -1=main def __init__(self): self._frame = None - self.is_zombie = False - - def __getattr__(self, attr): - return getattr(self._frame, attr) - - def __del__(self): - self.is_zombie = True - del self._frame - self._frame = None def bind(self, func, *argl, **argd): """coro.bind(f, *argl, **argd) -> None. binds function f to coro. f will be called with arguments *argl, **argd """ - if self._frame is None or not self._frame.is_pending(): - - def _func(c, *args, **kwargs): - return func(*args, **kwargs) - - run = partial(_func, *argl, **argd) - self._frame = frame = GWrap(run) - else: + if self.is_alive: raise ValueError("cannot bind a bound coroutine") + def run(c): + _tls.current_coroutine = self + self._is_started = 1 + return func(*argl, **argd) + self._is_started = 0 + self._frame = _continuation.continulet(run) def switch(self): """coro.switch() -> returnvalue @@ -57,46 +60,38 @@ f finishes, the returnvalue is that of f, otherwise None is returned """ - current = _getcurrent() - current._jump_to(self) - - def _jump_to(self, coroutine): - _tls.current_coroutine = coroutine - self._frame.switch(to=coroutine._frame) + current = _coroutine_getcurrent() + try: + current._frame.switch(to=self._frame) + finally: + _tls.current_coroutine = current def kill(self): """coro.kill() : kill coroutine coro""" - _tls.current_coroutine = self - self._frame.throw(CoroutineExit) + current = _coroutine_getcurrent() + try: + current._frame.throw(CoroutineExit, to=self._frame) + finally: + _tls.current_coroutine = current - def _is_alive(self): - if self._frame is None: - return False - return not self._frame.is_pending() - is_alive = property(_is_alive) - del _is_alive + @property + def is_alive(self): + return self._is_started < 0 or ( + self._frame is not None and self._frame.is_pending()) - def getcurrent(): - """coroutine.getcurrent() -> the currently running coroutine""" - try: - return _getcurrent() - except AttributeError: - return _maincoro - getcurrent = staticmethod(getcurrent) + @property + def is_zombie(self): + return self._is_started > 0 and not self._frame.is_pending() + + getcurrent = staticmethod(_coroutine_getcurrent) def __reduce__(self): - raise TypeError, 'pickling is not possible based upon continulets' + if self._is_started < 0: + return _coroutine_getmain, () + else: + return type(self), (), self.__dict__ -def _getcurrent(): - "Returns the current coroutine (i.e. the one which called this function)." - try: - return _tls.current_coroutine - except AttributeError: - # first call in this thread: current == main - _coroutine_create_main() - return _tls.current_coroutine - try: from thread import _local except ImportError: @@ -105,17 +100,8 @@ _tls = _local() -def _coroutine_create_main(): - # create the main coroutine for this thread - _tls.current_coroutine = None - main_coroutine = coroutine() - main_coroutine.bind(lambda x:x) - _tls.main_coroutine = main_coroutine - _tls.current_coroutine = main_coroutine - return main_coroutine - -_maincoro = _coroutine_create_main() +# ____________________________________________________________ from collections import deque @@ -161,10 +147,7 @@ _last_task = next assert not next.blocked if next is not current: - try: - next.switch() - except CoroutineExit: - raise TaskletExit + next.switch() return current def set_schedule_callback(callback): @@ -188,34 +171,6 @@ raise self.type, self.value, self.traceback # -# helpers for pickling -# - -_stackless_primitive_registry = {} - -def register_stackless_primitive(thang, retval_expr='None'): - import types - func = thang - if isinstance(thang, types.MethodType): - func = thang.im_func - code = func.func_code - _stackless_primitive_registry[code] = retval_expr - # It is not too nice to attach info via the code object, but - # I can't think of a better solution without a real transform. - -def rewrite_stackless_primitive(coro_state, alive, tempval): - flags, frame, thunk, parent = coro_state - while frame is not None: - retval_expr = _stackless_primitive_registry.get(frame.f_code) - if retval_expr: - # this tasklet needs to stop pickling here and return its value. - tempval = eval(retval_expr, globals(), frame.f_locals) - coro_state = flags, frame, thunk, parent - break - frame = frame.f_back - return coro_state, alive, tempval - -# # class channel(object): @@ -367,8 +322,6 @@ """ return self._channel_action(None, -1) - register_stackless_primitive(receive, retval_expr='receiver.tempval') - def send_exception(self, exp_type, msg): self.send(bomb(exp_type, exp_type(msg))) @@ -385,9 +338,8 @@ the runnables list. """ return self._channel_action(msg, 1) - - register_stackless_primitive(send) - + + class tasklet(coroutine): """ A tasklet object represents a tiny task in a Python thread. @@ -459,6 +411,7 @@ def _func(): try: try: + coroutine.switch(back) func(*argl, **argd) except TaskletExit: pass @@ -468,6 +421,8 @@ self.func = None coroutine.bind(self, _func) + back = _coroutine_getcurrent() + coroutine.switch(self) self.alive = True _scheduler_append(self) return self @@ -490,39 +445,6 @@ raise RuntimeError, "The current tasklet cannot be removed." # not sure if I will revive this " Use t=tasklet().capture()" _scheduler_remove(self) - - def __reduce__(self): - one, two, coro_state = coroutine.__reduce__(self) - assert one is coroutine - assert two == () - # we want to get rid of the parent thing. - # for now, we just drop it - a, frame, c, d = coro_state - - # Removing all frames related to stackless.py. - # They point to stuff we don't want to be pickled. - - pickleframe = frame - while frame is not None: - if frame.f_code == schedule.func_code: - # Removing everything including and after the - # call to stackless.schedule() - pickleframe = frame.f_back - break - frame = frame.f_back - if d: - assert isinstance(d, coroutine) - coro_state = a, pickleframe, c, None - coro_state, alive, tempval = rewrite_stackless_primitive(coro_state, self.alive, self.tempval) - inst_dict = self.__dict__.copy() - inst_dict.pop('tempval', None) - return self.__class__, (), (coro_state, alive, tempval, inst_dict) - - def __setstate__(self, (coro_state, alive, tempval, inst_dict)): - coroutine.__setstate__(self, coro_state) - self.__dict__.update(inst_dict) - self.alive = alive - self.tempval = tempval def getmain(): """ @@ -611,30 +533,7 @@ global _last_task _global_task_id = 0 _main_tasklet = coroutine.getcurrent() - try: - _main_tasklet.__class__ = tasklet - except TypeError: # we are running pypy-c - class TaskletProxy(object): - """TaskletProxy is needed to give the _main_coroutine tasklet behaviour""" - def __init__(self, coro): - self._coro = coro - - def __getattr__(self,attr): - return getattr(self._coro,attr) - - def __str__(self): - return '' % (self._task_id, self.is_alive) - - def __reduce__(self): - return getmain, () - - __repr__ = __str__ - - - global _main_coroutine - _main_coroutine = _main_tasklet - _main_tasklet = TaskletProxy(_main_tasklet) - assert _main_tasklet.is_alive and not _main_tasklet.is_zombie + _main_tasklet.__class__ = tasklet # XXX HAAAAAAAAAAAAAAAAAAAAACK _last_task = _main_tasklet tasklet._init.im_func(_main_tasklet, label='main') _squeue = deque() diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -139,7 +139,7 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, end + return start, len(self) def getblockend(self, lineno): # XXX diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -149,7 +149,7 @@ desc = olddesc.bind_self(classdef) args = self.bookkeeper.build_args("simple_call", args_s[:]) desc.consider_call_site(self.bookkeeper, desc.getcallfamily(), [desc], - args, annmodel.s_ImpossibleValue) + args, annmodel.s_ImpossibleValue, None) result = [] def schedule(graph, inputcells): result.append((graph, inputcells)) diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -209,8 +209,8 @@ self.consider_call_site(call_op) for pbc, args_s in self.emulated_pbc_calls.itervalues(): - self.consider_call_site_for_pbc(pbc, 'simple_call', - args_s, s_ImpossibleValue) + self.consider_call_site_for_pbc(pbc, 'simple_call', + args_s, s_ImpossibleValue, None) self.emulated_pbc_calls = {} finally: self.leave() @@ -257,18 +257,18 @@ args_s = [lltype_to_annotation(adtmeth.ll_ptrtype)] + args_s if isinstance(s_callable, SomePBC): s_result = binding(call_op.result, s_ImpossibleValue) - self.consider_call_site_for_pbc(s_callable, - call_op.opname, - args_s, s_result) + self.consider_call_site_for_pbc(s_callable, call_op.opname, args_s, + s_result, call_op) - def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result): + def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result, + call_op): descs = list(s_callable.descriptions) if not descs: return family = descs[0].getcallfamily() args = self.build_args(opname, args_s) s_callable.getKind().consider_call_site(self, family, descs, args, - s_result) + s_result, call_op) def getuniqueclassdef(self, cls): """Get the ClassDef associated with the given user cls. @@ -656,6 +656,7 @@ whence = None else: whence = emulated # callback case + op = None s_previous_result = s_ImpossibleValue def schedule(graph, inputcells): @@ -663,7 +664,7 @@ results = [] for desc in descs: - results.append(desc.pycall(schedule, args, s_previous_result)) + results.append(desc.pycall(schedule, args, s_previous_result, op)) s_result = unionof(*results) return s_result diff --git a/pypy/annotation/classdef.py b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -255,7 +255,11 @@ raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) return inputcells - def specialize(self, inputcells): + def specialize(self, inputcells, op=None): + if (op is None and + getattr(self.bookkeeper, "position_key", None) is not None): + _, block, i = self.bookkeeper.position_key + op = block.operations[i] if self.specializer is None: # get the specializer based on the tag of the 'pyobj' # (if any), according to the current policy @@ -269,11 +273,14 @@ enforceargs = Sig(*enforceargs) self.pyobj._annenforceargs_ = enforceargs enforceargs(self, inputcells) # can modify inputcells in-place - return self.specializer(self, inputcells) + if getattr(self.pyobj, '_annspecialcase_', '').endswith("call_location"): + return self.specializer(self, inputcells, op) + else: + return self.specializer(self, inputcells) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): inputcells = self.parse_arguments(args) - result = self.specialize(inputcells) + result = self.specialize(inputcells, op) if isinstance(result, FunctionGraph): graph = result # common case # if that graph has a different signature, we need to re-parse @@ -296,17 +303,17 @@ None, # selfclassdef name) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args) - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) - def variant_for_call_site(bookkeeper, family, descs, args): + def variant_for_call_site(bookkeeper, family, descs, args, op): shape = rawshape(args) bookkeeper.enter(None) try: - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) finally: bookkeeper.leave() index = family.calltable_lookup_row(shape, row) @@ -316,7 +323,7 @@ def rowkey(self): return self - def row_to_consider(descs, args): + def row_to_consider(descs, args, op): # see comments in CallFamily from pypy.annotation.model import s_ImpossibleValue row = {} @@ -324,7 +331,7 @@ def enlist(graph, ignore): row[desc.rowkey()] = graph return s_ImpossibleValue # meaningless - desc.pycall(enlist, args, s_ImpossibleValue) + desc.pycall(enlist, args, s_ImpossibleValue, op) return row row_to_consider = staticmethod(row_to_consider) @@ -521,7 +528,7 @@ "specialization" % (self.name,)) return self.getclassdef(None) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance, SomeImpossibleValue if self.specialize: if self.specialize == 'specialize:ctr_location': @@ -664,7 +671,7 @@ cdesc = cdesc.basedesc return s_result # common case - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): from pypy.annotation.model import SomeInstance, SomePBC, s_None if len(descs) == 1: # call to a single class, look at the result annotation @@ -709,7 +716,7 @@ initdescs[0].mergecallfamilies(*initdescs[1:]) initfamily = initdescs[0].getcallfamily() MethodDesc.consider_call_site(bookkeeper, initfamily, initdescs, - args, s_None) + args, s_None, op) consider_call_site = staticmethod(consider_call_site) def getallbases(self): @@ -782,13 +789,13 @@ def getuniquegraph(self): return self.funcdesc.getuniquegraph() - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance if self.selfclassdef is None: raise Exception("calling %r" % (self,)) s_instance = SomeInstance(self.selfclassdef, flags = self.flags) args = args.prepend(s_instance) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) def bind_under(self, classdef, name): self.bookkeeper.warning("rebinding an already bound %r" % (self,)) @@ -801,10 +808,10 @@ self.name, flags) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [methoddesc.funcdesc for methoddesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) @@ -956,16 +963,16 @@ return '' % (self.funcdesc, self.frozendesc) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomePBC s_self = SomePBC([self.frozendesc]) args = args.prepend(s_self) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [mofdesc.funcdesc for mofdesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,7 +1,7 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype -from pypy.annotation.specialize import memo +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var +from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. from pypy.annotation import model as annmodel @@ -73,8 +73,10 @@ default_specialize = staticmethod(default) specialize__memo = staticmethod(memo) specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) + specialize__arg_or_var = staticmethod(specialize_arg_or_var) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) + specialize__call_location = staticmethod(specialize_call_location) def specialize__ll(pol, *args): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -353,6 +353,16 @@ key = tuple(key) return maybe_star_args(funcdesc, key, args_s) +def specialize_arg_or_var(funcdesc, args_s, *argindices): + for argno in argindices: + if not args_s[argno].is_constant(): + break + else: + # all constant + return specialize_argvalue(funcdesc, args_s, *argindices) + # some not constant + return maybe_star_args(funcdesc, None, args_s) + def specialize_argtype(funcdesc, args_s, *argindices): key = tuple([args_s[i].knowntype for i in argindices]) for cls in key: @@ -370,3 +380,7 @@ else: key = s.listdef.listitem.s_value.knowntype return maybe_star_args(funcdesc, key, args_s) + +def specialize_call_location(funcdesc, args_s, op): + assert op is not None + return maybe_star_args(funcdesc, op, args_s) diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -1099,8 +1099,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1]) - graph2 = allocdesc.specialize([s_C2]) + graph1 = allocdesc.specialize([s_C1], None) + graph2 = allocdesc.specialize([s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1135,8 +1135,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1, s_C2]) - graph2 = allocdesc.specialize([s_C2, s_C2]) + graph1 = allocdesc.specialize([s_C1, s_C2], None) + graph2 = allocdesc.specialize([s_C2, s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1194,6 +1194,33 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_arg_or_var(self): + def f(a): + return 1 + f._annspecialcase_ = 'specialize:arg_or_var(0)' + + def fn(a): + return f(3) + f(a) + + a = self.RPythonAnnotator() + a.build_types(fn, [int]) + executedesc = a.bookkeeper.getdesc(f) + assert sorted(executedesc._cache.keys()) == [None, (3,)] + # we got two different special + + def test_specialize_call_location(self): + def g(a): + return a + g._annspecialcase_ = "specialize:call_location" + def f(x): + return g(x) + f._annspecialcase_ = "specialize:argtype(0)" + def h(y): + w = f(y) + return int(f(str(y))) + w + a = self.RPythonAnnotator() + assert a.build_types(h, [int]) == annmodel.SomeInteger() + def test_assert_list_doesnt_lose_info(self): class T(object): pass @@ -3177,6 +3204,8 @@ s = a.build_types(f, []) assert isinstance(s, annmodel.SomeList) assert not s.listdef.listitem.resized + assert not s.listdef.listitem.immutable + assert s.listdef.listitem.mutated def test_delslice(self): def f(): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -352,6 +352,7 @@ check_negative_slice(s_start, s_stop) if not isinstance(s_iterable, SomeList): raise Exception("list[start:stop] = x: x must be a list") + lst.listdef.mutate() lst.listdef.agree(s_iterable.listdef) # note that setslice is not allowed to resize a list in RPython diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -27,7 +27,7 @@ # --allworkingmodules working_modules = default_modules.copy() working_modules.update(dict.fromkeys( - ["_socket", "unicodedata", "mmap", "fcntl", "_locale", + ["_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "rctime" , "select", "zipimport", "_lsprof", "crypt", "signal", "_rawffi", "termios", "zlib", "bz2", "struct", "_hashlib", "_md5", "_sha", "_minimal_curses", "cStringIO", @@ -58,6 +58,7 @@ # unix only modules del working_modules["crypt"] del working_modules["fcntl"] + del working_modules["pwd"] del working_modules["termios"] del working_modules["_minimal_curses"] @@ -71,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -90,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -111,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + @@ -126,7 +128,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. _taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/config/objspace.usemodules.pwd.txt b/pypy/doc/config/objspace.usemodules.pwd.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.usemodules.pwd.txt @@ -0,0 +1,2 @@ +Use the 'pwd' module. +This module is expected to be fully working. diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -21,8 +21,6 @@ * `Papers`_: Academic papers, talks, and related projects -* `Videos`_: Videos of PyPy talks and presentations - * `speed.pypy.org`_: Daily benchmarks of how fast PyPy is * `potential project ideas`_: In case you want to get your feet wet... @@ -311,7 +309,6 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. _`Differences between PyPy and CPython`: cpython_differences.html diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ @@ -53,6 +59,18 @@ this is an ideal task to get started, because it does not require any deep knowledge of the internals. +Optimized Unicode Representation +-------------------------------- + +CPython 3.3 will use an `optimized unicode representation`_ which switches between +different ways to represent a unicode string, depending on whether the string +fits into ASCII, has only two-byte characters or needs four-byte characters. + +The actual details would be rather differen in PyPy, but we would like to have +the same optimization implemented. + +.. _`optimized unicode representation`: http://www.python.org/dev/peps/pep-0393/ + Translation Toolchain --------------------- diff --git a/pypy/doc/stackless.rst b/pypy/doc/stackless.rst --- a/pypy/doc/stackless.rst +++ b/pypy/doc/stackless.rst @@ -66,7 +66,7 @@ In practice, in PyPy, you cannot change the ``f_back`` of an abitrary frame, but only of frames stored in ``continulets``. -Continulets are internally implemented using stacklets. Stacklets are a +Continulets are internally implemented using stacklets_. Stacklets are a bit more primitive (they are really one-shot continuations), but that idea only works in C, not in Python. The basic idea of continulets is to have at any point in time a complete valid stack; this is important @@ -215,11 +215,6 @@ * Support for other CPUs than x86 and x86-64 -* The app-level ``f_back`` field of frames crossing continulet boundaries - is None for now, unlike what I explain in the theoretical overview - above. It mostly means that in a ``pdb.set_trace()`` you cannot go - ``up`` past countinulet boundaries. This could be fixed. - .. __: `recursion depth limit`_ (*) Pickling, as well as changing threads, could be implemented by using @@ -285,6 +280,24 @@ to use other interfaces like genlets and greenlets.) +Stacklets ++++++++++ + +Continulets are internally implemented using stacklets, which is the +generic RPython-level building block for "one-shot continuations". For +more information about them please see the documentation in the C source +at `pypy/translator/c/src/stacklet/stacklet.h`_. + +The module ``pypy.rlib.rstacklet`` is a thin wrapper around the above +functions. The key point is that new() and switch() always return a +fresh stacklet handle (or an empty one), and switch() additionally +consumes one. It makes no sense to have code in which the returned +handle is ignored, or used more than once. Note that ``stacklet.c`` is +written assuming that the user knows that, and so no additional checking +occurs; this can easily lead to obscure crashes if you don't use a +wrapper like PyPy's '_continuation' module. + + Theory of composability +++++++++++++++++++++++ diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -125,6 +125,7 @@ ### Manipulation ### + @jit.look_inside_iff(lambda self: not self._dont_jit) def unpack(self): # slowish "Return a ([w1,w2...], {'kw':w3...}) pair." kwds_w = {} @@ -245,6 +246,8 @@ ### Parsing for function calls ### + # XXX: this should be @jit.look_inside_iff, but we need key word arguments, + # and it doesn't support them for now. def _match_signature(self, w_firstarg, scope_w, signature, defaults_w=None, blindargs=0): """Parse args and kwargs according to the signature of a code object, diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2925,14 +2925,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2967,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,8 +3013,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): @@ -3057,14 +3054,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3104,8 +3100,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3126,8 +3121,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3157,8 +3151,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3179,8 +3172,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3197,14 +3189,13 @@ def FunctionDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3215,14 +3206,13 @@ def FunctionDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3266,8 +3256,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3284,14 +3273,13 @@ def ClassDef_get_bases(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases @@ -3302,14 +3290,13 @@ def ClassDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3320,14 +3307,13 @@ def ClassDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3372,8 +3358,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): @@ -3414,14 +3399,13 @@ def Delete_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3457,14 +3441,13 @@ def Assign_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3479,8 +3462,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): @@ -3527,8 +3509,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): @@ -3549,8 +3530,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3573,8 +3553,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): @@ -3621,8 +3600,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): @@ -3639,14 +3617,13 @@ def Print_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -3661,8 +3638,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3710,8 +3686,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): @@ -3732,8 +3707,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): @@ -3750,14 +3724,13 @@ def For_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3768,14 +3741,13 @@ def For_get_orelse(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3819,8 +3791,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): @@ -3837,14 +3808,13 @@ def While_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3855,14 +3825,13 @@ def While_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3905,8 +3874,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): @@ -3923,14 +3891,13 @@ def If_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3941,14 +3908,13 @@ def If_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3991,8 +3957,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): @@ -4013,8 +3978,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): @@ -4031,14 +3995,13 @@ def With_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4080,8 +4043,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): @@ -4102,8 +4064,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): @@ -4124,8 +4085,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): @@ -4168,14 +4128,13 @@ def TryExcept_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4186,14 +4145,13 @@ def TryExcept_get_handlers(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers @@ -4204,14 +4162,13 @@ def TryExcept_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -4251,14 +4208,13 @@ def TryFinally_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4269,14 +4225,13 @@ def TryFinally_get_finalbody(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody @@ -4318,8 +4273,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def Assert_set_test(space, w_self, w_new_value): @@ -4340,8 +4294,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): @@ -4383,14 +4336,13 @@ def Import_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4430,8 +4382,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4451,14 +4402,13 @@ def ImportFrom_get_names(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4473,8 +4423,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4522,8 +4471,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): @@ -4544,8 +4492,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): @@ -4566,8 +4513,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): @@ -4610,14 +4556,13 @@ def Global_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4657,8 +4602,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): @@ -4754,8 +4698,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4776,8 +4719,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4807,8 +4749,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4827,14 +4768,13 @@ def BoolOp_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -4875,8 +4815,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): @@ -4897,8 +4836,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4921,8 +4859,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): @@ -4969,8 +4906,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4993,8 +4929,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): @@ -5040,8 +4975,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5062,8 +4996,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): @@ -5109,8 +5042,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): @@ -5131,8 +5063,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): @@ -5153,8 +5084,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): @@ -5197,14 +5127,13 @@ def Dict_get_keys(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys @@ -5215,14 +5144,13 @@ def Dict_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -5260,14 +5188,13 @@ def Set_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -5307,8 +5234,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): @@ -5325,14 +5251,13 @@ def ListComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5373,8 +5298,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): @@ -5391,14 +5315,13 @@ def SetComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5439,8 +5362,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): @@ -5461,8 +5383,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): @@ -5479,14 +5400,13 @@ def DictComp_get_generators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5528,8 +5448,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): @@ -5546,14 +5465,13 @@ def GeneratorExp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5594,8 +5512,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): @@ -5640,8 +5557,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): @@ -5658,14 +5574,13 @@ def Compare_get_ops(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops @@ -5676,14 +5591,13 @@ def Compare_get_comparators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators @@ -5726,8 +5640,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): @@ -5744,14 +5657,13 @@ def Call_get_args(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -5762,14 +5674,13 @@ def Call_get_keywords(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords @@ -5784,8 +5695,7 @@ return w_obj if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): @@ -5806,8 +5716,7 @@ return w_obj if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): @@ -5858,8 +5767,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): @@ -5904,8 +5812,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5950,8 +5857,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5996,8 +5902,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): @@ -6018,8 +5923,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6040,8 +5944,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6090,8 +5993,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): @@ -6112,8 +6014,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): @@ -6134,8 +6035,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6184,8 +6084,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6206,8 +6105,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6251,14 +6149,13 @@ def List_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6273,8 +6170,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6319,14 +6215,13 @@ def Tuple_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6341,8 +6236,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6391,8 +6285,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6510,8 +6403,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): @@ -6532,8 +6424,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): @@ -6554,8 +6445,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): @@ -6598,14 +6488,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,8 +6534,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): @@ -6915,8 +6803,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): @@ -6937,8 +6824,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): @@ -6955,14 +6841,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7004,8 +6889,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7026,8 +6910,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7057,8 +6940,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): @@ -7079,8 +6961,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): @@ -7097,14 +6978,13 @@ def ExceptHandler_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -7142,14 +7022,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -7164,8 +7043,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'vararg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'vararg') return space.wrap(w_self.vararg) def arguments_set_vararg(space, w_self, w_new_value): @@ -7189,8 +7067,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwarg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg') return space.wrap(w_self.kwarg) def arguments_set_kwarg(space, w_self, w_new_value): @@ -7210,14 +7087,13 @@ def arguments_get_defaults(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'defaults'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults') if w_self.w_defaults is None: if w_self.defaults is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.defaults] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_defaults = w_list return w_self.w_defaults @@ -7261,8 +7137,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'arg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'arg') return space.wrap(w_self.arg) def keyword_set_arg(space, w_self, w_new_value): @@ -7283,8 +7158,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def keyword_set_value(space, w_self, w_new_value): @@ -7330,8 +7204,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def alias_set_name(space, w_self, w_new_value): @@ -7352,8 +7225,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'asname'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'asname') return space.wrap(w_self.asname) def alias_set_asname(space, w_self, w_new_value): diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -414,13 +414,12 @@ self.emit(" return w_obj", 1) self.emit("if not w_self.initialization_state & %s:" % (flag,), 1) self.emit("typename = space.type(w_self).getname(space)", 2) - self.emit("w_err = space.wrap(\"'%%s' object has no attribute '%s'\" %% typename)" % + self.emit("raise operationerrfmt(space.w_AttributeError, \"'%%s' object has no attribute '%%s'\", typename, '%s')" % (field.name,), 2) - self.emit("raise OperationError(space.w_AttributeError, w_err)", 2) if field.seq: self.emit("if w_self.w_%s is None:" % (field.name,), 1) self.emit("if w_self.%s is None:" % (field.name,), 2) - self.emit("w_list = space.newlist([])", 3) + self.emit("list_w = []", 3) self.emit("else:", 2) if field.type.value in self.data.simple_types: wrapper = "%s_to_class[node - 1]()" % (field.type,) @@ -428,7 +427,7 @@ wrapper = "space.wrap(node)" self.emit("list_w = [%s for node in w_self.%s]" % (wrapper, field.name), 3) - self.emit("w_list = space.newlist(list_w)", 3) + self.emit("w_list = space.newlist(list_w)", 2) self.emit("w_self.w_%s = w_list" % (field.name,), 2) self.emit("return w_self.w_%s" % (field.name,), 1) elif field.type.value in self.data.simple_types: @@ -540,7 +539,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -639,9 +638,7 @@ missing = required[i] if missing is not None: err = "required field \\"%s\\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) + raise operationerrfmt(space.w_TypeError, err, missing, host) raise AssertionError("should not reach here") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -3,18 +3,18 @@ from pypy.interpreter.executioncontext import ExecutionContext, ActionFlag from pypy.interpreter.executioncontext import UserDelAction, FrameTraceAction from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.error import new_exception_class +from pypy.interpreter.error import new_exception_class, typed_unwrap_error_msg from pypy.interpreter.argument import Arguments from pypy.interpreter.miscutils import ThreadLocals from pypy.tool.cache import Cache from pypy.tool.uid import HUGEVAL_BYTES -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, newlist, compute_unique_id from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.timer import DummyTimer, Timer from pypy.rlib.rarithmetic import r_uint from pypy.rlib import jit from pypy.tool.sourcetools import func_with_new_name -import os, sys, py +import os, sys __all__ = ['ObjSpace', 'OperationError', 'Wrappable', 'W_Root'] @@ -186,6 +186,28 @@ def _set_mapdict_storage_and_map(self, storage, map): raise NotImplementedError + # ------------------------------------------------------------------- + + def str_w(self, space): + w_msg = typed_unwrap_error_msg(space, "string", self) + raise OperationError(space.w_TypeError, w_msg) + + def unicode_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "unicode", self)) + + def int_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def uint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def bigint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + class Wrappable(W_Root): """A subclass of Wrappable is an internal, interpreter-level class @@ -755,11 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise items = [] else: - items = [None] * expected_length + try: + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): + raise + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -768,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result @@ -890,7 +967,7 @@ ec.c_call_trace(frame, w_func, args) try: w_res = self.call_args(w_func, args) - except OperationError, e: + except OperationError: ec.c_exception_trace(frame, w_func) raise ec.c_return_trace(frame, w_func, args) @@ -936,6 +1013,9 @@ def isinstance_w(self, w_obj, w_type): return self.is_true(self.isinstance(w_obj, w_type)) + def id(self, w_obj): + return self.wrap(compute_unique_id(w_obj)) + # The code below only works # for the simple case (new-style instance). # These methods are patched with the full logic by the __builtin__ @@ -988,8 +1068,6 @@ def eval(self, expression, w_globals, w_locals, hidden_applevel=False): "NOT_RPYTHON: For internal debugging." - import types - from pypy.interpreter.pycode import PyCode if isinstance(expression, str): compiler = self.createcompiler() expression = compiler.compile(expression, '?', 'eval', 0, @@ -1001,7 +1079,6 @@ def exec_(self, statement, w_globals, w_locals, hidden_applevel=False, filename=None): "NOT_RPYTHON: For internal debugging." - import types if filename is None: filename = '?' from pypy.interpreter.pycode import PyCode @@ -1199,6 +1276,18 @@ return None return self.str_w(w_obj) + def str_w(self, w_obj): + return w_obj.str_w(self) + + def int_w(self, w_obj): + return w_obj.int_w(self) + + def uint_w(self, w_obj): + return w_obj.uint_w(self) + + def bigint_w(self, w_obj): + return w_obj.bigint_w(self) + def realstr_w(self, w_obj): # Like str_w, but only works if w_obj is really of type 'str'. if not self.is_true(self.isinstance(w_obj, self.w_str)): @@ -1206,6 +1295,9 @@ self.wrap('argument must be a string')) return self.str_w(w_obj) + def unicode_w(self, w_obj): + return w_obj.unicode_w(self) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1287,7 +1379,7 @@ self.wrap("expected a 32-bit integer")) return value - def truncatedint(self, w_obj): + def truncatedint_w(self, w_obj): # Like space.gateway_int_w(), but return the integer truncated # instead of raising OverflowError. For obscure cases only. try: @@ -1298,6 +1390,17 @@ from pypy.rlib.rarithmetic import intmask return intmask(self.bigint_w(w_obj).uintmask()) + def truncatedlonglong_w(self, w_obj): + # Like space.gateway_r_longlong_w(), but return the integer truncated + # instead of raising OverflowError. + try: + return self.r_longlong_w(w_obj) + except OperationError, e: + if not e.match(self, self.w_OverflowError): + raise + from pypy.rlib.rarithmetic import longlongmask + return longlongmask(self.bigint_w(w_obj).ulonglongmask()) + def c_filedescriptor_w(self, w_fd): # This is only used sometimes in CPython, e.g. for os.fsync() but # not os.close(). It's likely designed for 'select'. It's irregular diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -458,3 +458,7 @@ if module: space.setattr(w_exc, space.wrap("__module__"), space.wrap(module)) return w_exc + +def typed_unwrap_error_msg(space, expected, w_obj): + type_name = space.type(w_obj).getname(space) + return space.wrap("expected %s, got %s object" % (expected, type_name)) diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -1,5 +1,4 @@ import sys -from pypy.interpreter.miscutils import Stack from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.unroll import unrolling_iterable @@ -48,6 +47,7 @@ return frame @staticmethod + @jit.unroll_safe # should usually loop 0 times, very rarely more than once def getnextframe_nohidden(frame): frame = frame.f_backref() while frame and frame.hide(): @@ -81,58 +81,6 @@ # ________________________________________________________________ - - class Subcontext(object): - # coroutine: subcontext support - - def __init__(self): - self.topframe = None - self.w_tracefunc = None - self.profilefunc = None - self.w_profilefuncarg = None - self.is_tracing = 0 - - def enter(self, ec): - ec.topframeref = jit.non_virtual_ref(self.topframe) - ec.w_tracefunc = self.w_tracefunc - ec.profilefunc = self.profilefunc - ec.w_profilefuncarg = self.w_profilefuncarg - ec.is_tracing = self.is_tracing - ec.space.frame_trace_action.fire() - - def leave(self, ec): - self.topframe = ec.gettopframe() - self.w_tracefunc = ec.w_tracefunc - self.profilefunc = ec.profilefunc - self.w_profilefuncarg = ec.w_profilefuncarg - self.is_tracing = ec.is_tracing - - def clear_framestack(self): - self.topframe = None - - # the following interface is for pickling and unpickling - def getstate(self, space): - if self.topframe is None: - return space.w_None - return self.topframe - - def setstate(self, space, w_state): - from pypy.interpreter.pyframe import PyFrame - if space.is_w(w_state, space.w_None): - self.topframe = None - else: - self.topframe = space.interp_w(PyFrame, w_state) - - def getframestack(self): - lst = [] - f = self.topframe - while f is not None: - lst.append(f) - f = f.f_backref() - lst.reverse() - return lst - # coroutine: I think this is all, folks! - def c_call_trace(self, frame, w_func, args=None): "Profile the call of a builtin function" self._c_call_return_trace(frame, w_func, args, 'c_call') @@ -227,6 +175,9 @@ self.w_tracefunc = w_func self.space.frame_trace_action.fire() + def gettrace(self): + return self.w_tracefunc + def setprofile(self, w_func): """Set the global trace function.""" if self.space.is_w(w_func, self.space.w_None): @@ -359,7 +310,11 @@ self._nonperiodic_actions = [] self.has_bytecode_counter = False self.fired_actions = None - self.checkinterval_scaled = 100 * TICK_COUNTER_STEP + # the default value is not 100, unlike CPython 2.7, but a much + # larger value, because we use a technique that not only allows + # but actually *forces* another thread to run whenever the counter + # reaches zero. + self.checkinterval_scaled = 10000 * TICK_COUNTER_STEP self._rebuild_action_dispatcher() def fire(self, action): @@ -398,6 +353,7 @@ elif interval > MAX: interval = MAX self.checkinterval_scaled = interval * TICK_COUNTER_STEP + self.reset_ticker(-1) def _rebuild_action_dispatcher(self): periodic_actions = unrolling_iterable(self._periodic_actions) @@ -435,8 +391,11 @@ def decrement_ticker(self, by): value = self._ticker if self.has_bytecode_counter: # this 'if' is constant-folded - value -= by - self._ticker = value + if jit.isconstant(by) and by == 0: + pass # normally constant-folded too + else: + value -= by + self._ticker = value return value diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -242,8 +242,10 @@ # we have been seen by other means so rtyping should not choke # on us identifier = self.code.identifier - assert Function._all.get(identifier, self) is self, ("duplicate " - "function ids") + previous = Function._all.get(identifier, self) + assert previous is self, ( + "duplicate function ids with identifier=%r: %r and %r" % ( + identifier, previous, self)) self.add_to_table() return False diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -142,7 +142,7 @@ def visit_c_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) - def visit_truncatedint(self, el, app_sig): + def visit_truncatedint_w(self, el, app_sig): self.checked_space_method(el, app_sig) def visit__Wrappable(self, el, app_sig): @@ -262,8 +262,8 @@ def visit_c_nonnegint(self, typ): self.run_args.append("space.c_nonnegint_w(%s)" % (self.scopenext(),)) - def visit_truncatedint(self, typ): - self.run_args.append("space.truncatedint(%s)" % (self.scopenext(),)) + def visit_truncatedint_w(self, typ): + self.run_args.append("space.truncatedint_w(%s)" % (self.scopenext(),)) def _make_unwrap_activation_class(self, unwrap_spec, cache={}): try: @@ -395,8 +395,8 @@ def visit_c_nonnegint(self, typ): self.unwrap.append("space.c_nonnegint_w(%s)" % (self.nextarg(),)) - def visit_truncatedint(self, typ): - self.unwrap.append("space.truncatedint(%s)" % (self.nextarg(),)) + def visit_truncatedint_w(self, typ): + self.unwrap.append("space.truncatedint_w(%s)" % (self.nextarg(),)) def make_fastfunc(unwrap_spec, func): unwrap_info = UnwrapSpec_FastFunc_Unwrap() diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/miscutils.py b/pypy/interpreter/miscutils.py --- a/pypy/interpreter/miscutils.py +++ b/pypy/interpreter/miscutils.py @@ -2,154 +2,6 @@ Miscellaneous utilities. """ -import types - -from pypy.rlib.rarithmetic import r_uint - -class RootStack: - pass - -class Stack(RootStack): - """Utility class implementing a stack.""" - - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - def __init__(self): - self.items = [] - - def clone(self): - s = self.__class__() - for item in self.items: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - self.items.append(item) - - def pop(self): - return self.items.pop() - - def drop(self, n): - if n > 0: - del self.items[-n:] - - def top(self, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - return self.items[~position] - - def set_top(self, value, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - self.items[~position] = value - - def depth(self): - return len(self.items) - - def empty(self): - return len(self.items) == 0 - - -class FixedStack(RootStack): - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - # unfortunately, we have to re-do everything - def __init__(self): - pass - - def setup(self, stacksize): - self.ptr = r_uint(0) # we point after the last element - self.items = [None] * stacksize - - def clone(self): - # this is only needed if we support flow space - s = self.__class__() - s.setup(len(self.items)) - for item in self.items[:self.ptr]: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - ptr = self.ptr - self.items[ptr] = item - self.ptr = ptr + 1 - - def pop(self): - ptr = self.ptr - 1 - ret = self.items[ptr] # you get OverflowError if the stack is empty - self.items[ptr] = None - self.ptr = ptr - return ret - - def drop(self, n): - while n > 0: - n -= 1 - self.ptr -= 1 - self.items[self.ptr] = None - - def top(self, position=0): - # for a fixed stack, we assume correct indices - return self.items[self.ptr + ~position] - - def set_top(self, value, position=0): - # for a fixed stack, we assume correct indices - self.items[self.ptr + ~position] = value - - def depth(self): - return self.ptr - - def empty(self): - return not self.ptr - - -class InitializedClass(type): - """NOT_RPYTHON. A meta-class that allows a class to initialize itself (or - its subclasses) by calling __initclass__() as a class method.""" - def __init__(self, name, bases, dict): - super(InitializedClass, self).__init__(name, bases, dict) - for basecls in self.__mro__: - raw = basecls.__dict__.get('__initclass__') - if isinstance(raw, types.FunctionType): - raw(self) # call it as a class method - - -class RwDictProxy(object): - """NOT_RPYTHON. A dict-like class standing for 'cls.__dict__', to work - around the fact that the latter is a read-only proxy for new-style - classes.""" - - def __init__(self, cls): - self.cls = cls - - def __getitem__(self, attr): - return self.cls.__dict__[attr] - - def __setitem__(self, attr, value): - setattr(self.cls, attr, value) - - def __contains__(self, value): - return value in self.cls.__dict__ - - def items(self): - return self.cls.__dict__.items() - - class ThreadLocals: """Pseudo thread-local storage, for 'space.threadlocals'. This is not really thread-local at all; the intention is that the PyPy @@ -167,3 +19,7 @@ def getmainthreadvalue(self): return self._value + + def getallvalues(self): + return {0: self._value} + diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -10,7 +10,7 @@ from pypy.interpreter.argument import Signature from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import NoneNotWrapped, unwrap_spec -from pypy.interpreter.astcompiler.consts import (CO_OPTIMIZED, +from pypy.interpreter.astcompiler.consts import ( CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_NESTED, CO_GENERATOR, CO_CONTAINSGLOBALS) from pypy.rlib.rarithmetic import intmask diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -614,7 +614,8 @@ return self.get_builtin().getdict(space) def fget_f_back(self, space): - return self.space.wrap(self.f_backref()) + f_back = ExecutionContext.getnextframe_nohidden(self) + return self.space.wrap(f_back) def fget_f_lasti(self, space): return self.space.wrap(self.last_instr) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1523,10 +1523,8 @@ if not isinstance(prog, codetype): filename = '' - if not isinstance(prog, str): - if isinstance(prog, basestring): - prog = str(prog) - elif isinstance(prog, file): + if not isinstance(prog, basestring): + if isinstance(prog, file): filename = prog.name prog = prog.read() else: diff --git a/pypy/interpreter/pyparser/future.py b/pypy/interpreter/pyparser/future.py --- a/pypy/interpreter/pyparser/future.py +++ b/pypy/interpreter/pyparser/future.py @@ -225,14 +225,16 @@ raise DoneException self.consume_whitespace() - def consume_whitespace(self): + def consume_whitespace(self, newline_ok=False): while 1: c = self.getc() if c in whitespace: self.pos += 1 continue - elif c == '\\': - self.pos += 1 + elif c == '\\' or newline_ok: + slash = c == '\\' + if slash: + self.pos += 1 c = self.getc() if c == '\n': self.pos += 1 @@ -243,8 +245,10 @@ if self.getc() == '\n': self.pos += 1 self.atbol() + elif slash: + raise DoneException else: - raise DoneException + return else: return @@ -281,7 +285,7 @@ return else: self.pos += 1 - self.consume_whitespace() + self.consume_whitespace(paren_list) if paren_list and self.getc() == ')': self.pos += 1 return # Handles trailing comma inside parenthesis diff --git a/pypy/interpreter/pyparser/pytokenizer.py b/pypy/interpreter/pyparser/pytokenizer.py --- a/pypy/interpreter/pyparser/pytokenizer.py +++ b/pypy/interpreter/pyparser/pytokenizer.py @@ -226,7 +226,7 @@ parenlev = parenlev - 1 if parenlev < 0: raise TokenError("unmatched '%s'" % initial, line, - lnum-1, 0, token_list) + lnum, start + 1, token_list) if token in python_opmap: punct = python_opmap[token] else: diff --git a/pypy/interpreter/pyparser/test/test_futureautomaton.py b/pypy/interpreter/pyparser/test/test_futureautomaton.py --- a/pypy/interpreter/pyparser/test/test_futureautomaton.py +++ b/pypy/interpreter/pyparser/test/test_futureautomaton.py @@ -3,7 +3,7 @@ from pypy.tool import stdlib___future__ as fut def run(s): - f = future.FutureAutomaton(future.futureFlags_2_5, s) + f = future.FutureAutomaton(future.futureFlags_2_7, s) try: f.start() except future.DoneException: @@ -113,6 +113,14 @@ assert f.lineno == 1 assert f.col_offset == 0 +def test_paren_with_newline(): + s = 'from __future__ import (division,\nabsolute_import)\n' + f = run(s) + assert f.pos == len(s) + assert f.flags == (fut.CO_FUTURE_DIVISION | fut.CO_FUTURE_ABSOLUTE_IMPORT) + assert f.lineno == 1 + assert f.col_offset == 0 + def test_multiline(): s = '"abc" #def\n #ghi\nfrom __future__ import (division as b, generators,)\nfrom __future__ import with_statement\n' f = run(s) diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -87,6 +87,10 @@ assert exc.lineno == 1 assert exc.offset == 5 assert exc.lastlineno == 5 + exc = py.test.raises(SyntaxError, parse, "abc)").value + assert exc.msg == "unmatched ')'" + assert exc.lineno == 1 + assert exc.offset == 4 def test_is(self): self.parse("x is y") diff --git a/pypy/interpreter/test/test_exec.py b/pypy/interpreter/test/test_exec.py --- a/pypy/interpreter/test/test_exec.py +++ b/pypy/interpreter/test/test_exec.py @@ -219,3 +219,30 @@ raise e assert res == 1 + + def test_exec_unicode(self): + # 's' is a string + s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + # 'u' is a unicode + u = s.decode('utf-8') + exec u + assert len(x) == 6 + assert ord(x[0]) == 0x0439 + assert ord(x[1]) == 0x0446 + assert ord(x[2]) == 0x0443 + assert ord(x[3]) == 0x043a + assert ord(x[4]) == 0x0435 + assert ord(x[5]) == 0x043d + + def test_eval_unicode(self): + u = "u'%s'" % unichr(0x1234) + v = eval(u) + assert v == unichr(0x1234) + + def test_compile_unicode(self): + s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + u = s.decode('utf-8') + c = compile(u, '', 'exec') + exec c + assert len(x) == 6 + assert ord(x[0]) == 0x0439 diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -42,6 +42,7 @@ assert i == 9 def test_periodic_action(self): + from pypy.interpreter.executioncontext import ActionFlag class DemoAction(executioncontext.PeriodicAsyncAction): counter = 0 @@ -53,17 +54,20 @@ space = self.space a2 = DemoAction(space) - space.actionflag.register_periodic_action(a2, True) try: - for i in range(500): - space.appexec([], """(): - n = 5 - return n + 2 - """) - except Finished: - pass - checkinterval = space.actionflag.getcheckinterval() - assert checkinterval / 10 < i < checkinterval * 1.1 + space.actionflag.setcheckinterval(100) + space.actionflag.register_periodic_action(a2, True) + try: + for i in range(500): + space.appexec([], """(): + n = 5 + return n + 2 + """) + except Finished: + pass + finally: + space.actionflag = ActionFlag() # reset to default + assert 10 < i < 110 def test_llprofile(self): l = [] diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith(" Author: edelsohn Branch: ppc-jit-backend Changeset: r49024:1bda9131792d Date: 2011-11-09 11:12 -0500 http://bitbucket.org/pypy/pypy/changeset/1bda9131792d/ Log: Store PPC64 LR at frame_depth + WORD in prologue. Load R2 in gen_exit_path call and store R1 when allocating stack. diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -134,7 +134,7 @@ else: self.mc.stdu(r.SP.value, r.SP.value, -frame_depth) self.mc.mflr(r.r0.value) - self.mc.std(r.r0.value, r.SP.value, frame_depth + 2 * WORD) + self.mc.std(r.r0.value, r.SP.value, frame_depth + WORD) offset = GPR_SAVE_AREA + WORD # compute spilling pointer (SPP) self.mc.addi(r.SPP.value, r.SP.value, frame_depth - offset) @@ -296,7 +296,10 @@ # XXX do quadword alignment #while size % (4 * WORD) != 0: # size += WORD - mc.addi(r.SP.value, r.SP.value, -size) + if IS_PPC_32: + mc.stwu(r.SP.value, r.SP.value, -size) + else: + mc.stdu(r.SP.value, r.SP.value, -size) # decode_func_addr = llhelper(self.recovery_func_sign, self.failure_recovery_func) @@ -306,6 +309,7 @@ intp = lltype.Ptr(lltype.Array(lltype.Signed, hints={'nolength': True})) descr = rffi.cast(intp, decode_func_addr) addr = descr[0] + r2_value = descr[1] r11_value = descr[2] # @@ -319,6 +323,7 @@ # # load address of decoding function into r0 mc.load_imm(r.r0, addr) + mc.load_imm(r.r2, r2_value) mc.load_imm(r.r11, r11_value) # ... and branch there mc.mtctr(r.r0.value) From noreply at buildbot.pypy.org Wed Nov 9 17:21:38 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 17:21:38 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: gc inspector works Message-ID: <20111109162138.921A58292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49025:71ab3a388b25 Date: 2011-11-09 17:21 +0100 http://bitbucket.org/pypy/pypy/changeset/71ab3a388b25/ Log: gc inspector works diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -640,6 +640,9 @@ # float * FLOATP = lltype.Ptr(lltype.Array(FLOAT, hints={'nolength': True})) +# Signed * +SIGNEDP = lltype.Ptr(lltype.Array(lltype.Signed, hints={'nolength': True})) + # various type mapping # conversions between str and char* diff --git a/pypy/rpython/memory/gc/inspector.py b/pypy/rpython/memory/gc/inspector.py --- a/pypy/rpython/memory/gc/inspector.py +++ b/pypy/rpython/memory/gc/inspector.py @@ -109,7 +109,7 @@ self.gc = gc self.gcflag = gc.gcflag_extra self.fd = rffi.cast(rffi.INT, fd) - self.writebuffer = lltype.malloc(rffi.LONGP.TO, self.BUFSIZE, + self.writebuffer = lltype.malloc(rffi.SIGNEDP.TO, self.BUFSIZE, flavor='raw') self.buf_count = 0 if self.gcflag == 0: From noreply at buildbot.pypy.org Wed Nov 9 17:31:21 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 9 Nov 2011 17:31:21 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Deallocate stack in emit_call on PPC64 path. Message-ID: <20111109163121.1EA648292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49026:7589bb11c7e5 Date: 2011-11-09 11:31 -0500 http://bitbucket.org/pypy/pypy/changeset/7589bb11c7e5/ Log: Deallocate stack in emit_call on PPC64 path. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -549,9 +549,7 @@ #the actual call if IS_PPC_32: self.mc.bl_abs(adr) - self.mc.lwz(0, 1, stack_space + WORD) - self.mc.mtlr(0) - self.mc.addi(1, 1, stack_space) + self.mc.lwz(r.r0.value, r.SP.value, stack_space + WORD) else: self.mc.std(r.r2.value, r.SP.value, 40) self.mc.load_from_addr(r.r0, adr) @@ -560,6 +558,9 @@ self.mc.mtctr(r.r0.value) self.mc.bctrl() self.mc.ld(r.r2.value, r.SP.value, 40) + self.mc.ld(r.r0.value, r.SP.value, stack_space + WORD) + self.mc.mtlr(r.r0.value) + self.mc.addi(r.SP.value, r.SP.value, stack_space) self.mark_gc_roots(force_index) regalloc.possibly_free_vars(args) From noreply at buildbot.pypy.org Wed Nov 9 17:41:06 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 17:41:06 +0100 (CET) Subject: [pypy-commit] pypy default: Must close the file explicitly; otherwise, on Windows, we cannot unlink it before the GC runs Message-ID: <20111109164106.4FD3A8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49027:d44e050d460e Date: 2011-11-09 17:28 +0100 http://bitbucket.org/pypy/pypy/changeset/d44e050d460e/ Log: Must close the file explicitly; otherwise, on Windows, we cannot unlink it before the GC runs diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): From noreply at buildbot.pypy.org Wed Nov 9 17:41:07 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 17:41:07 +0100 (CET) Subject: [pypy-commit] pypy default: Rename the decorator. Fijal: can you use it on some of Message-ID: <20111109164107.7F5C28292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49028:5476689a4d73 Date: 2011-11-09 17:40 +0100 http://bitbucket.org/pypy/pypy/changeset/5476689a4d73/ Log: Rename the decorator. Fijal: can you use it on some of the __del__s where it is important that the finalizer is lightweight? Thanks :-) diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -216,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode From noreply at buildbot.pypy.org Wed Nov 9 18:14:07 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 18:14:07 +0100 (CET) Subject: [pypy-commit] pypy default: Missing f.close(). Message-ID: <20111109171407.A77E18292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49029:36f8f2531dfe Date: 2011-11-09 17:46 +0100 http://bitbucket.org/pypy/pypy/changeset/36f8f2531dfe/ Log: Missing f.close(). diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: From noreply at buildbot.pypy.org Wed Nov 9 18:14:08 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 18:14:08 +0100 (CET) Subject: [pypy-commit] pypy default: Copy the logic for math.fmod() from CPython 2.7. Message-ID: <20111109171408.DB1FA8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49030:165672fb2aef Date: 2011-11-09 17:54 +0100 http://bitbucket.org/pypy/pypy/changeset/165672fb2aef/ Log: Copy the logic for math.fmod() from CPython 2.7. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -223,13 +223,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): From noreply at buildbot.pypy.org Wed Nov 9 18:14:10 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 18:14:10 +0100 (CET) Subject: [pypy-commit] pypy default: Tweak for the common case: use isfinite() more often, Message-ID: <20111109171410.130578292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49031:df7f0844e6b4 Date: 2011-11-09 18:13 +0100 http://bitbucket.org/pypy/pypy/changeset/df7f0844e6b4/ Log: Tweak for the common case: use isfinite() more often, and only fall back to checking isnan() and isinf() if it returns False. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -136,10 +136,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +170,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +187,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +211,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -250,16 +253,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -269,30 +273,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -307,17 +311,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -366,18 +371,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r From noreply at buildbot.pypy.org Wed Nov 9 19:00:33 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 19:00:33 +0100 (CET) Subject: [pypy-commit] pypy default: Tweak. Message-ID: <20111109180033.824FF8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49032:2466f0e89311 Date: 2011-11-09 19:00 +0100 http://bitbucket.org/pypy/pypy/changeset/2466f0e89311/ Log: Tweak. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -108,14 +108,17 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. From noreply at buildbot.pypy.org Wed Nov 9 19:12:00 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:00 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: these two functions are not_rpython Message-ID: <20111109181200.E763D8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49033:9c29ae18d46b Date: 2011-11-09 17:43 +0100 http://bitbucket.org/pypy/pypy/changeset/9c29ae18d46b/ Log: these two functions are not_rpython diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -56,6 +56,9 @@ assert LONG_BIT_SHIFT < 99, "LONG_BIT_SHIFT value not found?" def intmask(n): + """ + NOT_RPYTHON + """ if isinstance(n, int): return int(n) # possibly bool->int if isinstance(n, objectmodel.Symbolic): @@ -68,6 +71,9 @@ return int(n) def longlongmask(n): + """ + NOT_RPYTHON + """ assert isinstance(n, (int, long)) n = long(n) n &= LONGLONG_MASK From noreply at buildbot.pypy.org Wed Nov 9 19:12:02 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:02 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: hg merge default Message-ID: <20111109181202.245698292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49034:06d63733756d Date: 2011-11-09 17:43 +0100 http://bitbucket.org/pypy/pypy/changeset/06d63733756d/ Log: hg merge default diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -216,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode From noreply at buildbot.pypy.org Wed Nov 9 19:12:03 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:03 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: (antocuni, arigo): this is probably how the test was meant to be Message-ID: <20111109181203.51E688292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49035:08175c33f891 Date: 2011-11-09 17:48 +0100 http://bitbucket.org/pypy/pypy/changeset/08175c33f891/ Log: (antocuni, arigo): this is probably how the test was meant to be diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -84,8 +84,8 @@ def __del__(self): if self.x: + lltype.free(self.x, flavor='raw') self.x = lltype.nullptr(S) - lltype.free(self.x, flavor='raw') def f(): return A() From noreply at buildbot.pypy.org Wed Nov 9 19:12:04 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:04 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: make sure that these two finalizers are lightweight Message-ID: <20111109181204.861C08292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49036:af3519406a19 Date: 2011-11-09 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/af3519406a19/ Log: make sure that these two finalizers are lightweight diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -2,6 +2,7 @@ from pypy.rlib import clibffi from pypy.rlib import libffi from pypy.rlib import jit +from pypy.rlib.rgc import must_be_light_finalizer from pypy.rlib.rarithmetic import r_uint, r_ulonglong from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, interp_attrproperty @@ -59,6 +60,7 @@ return w_field.w_ffitype, w_field.offset + @must_be_light_finalizer def __del__(self): if self.ffistruct: lltype.free(self.ffistruct, flavor='raw') @@ -118,8 +120,8 @@ self.rawmem = lltype.malloc(rffi.VOIDP.TO, size, flavor='raw', zero=True, add_memory_pressure=True) + @must_be_light_finalizer def __del__(self): - # XXX: check whether I can turn this into a lightweight destructor if self.rawmem: lltype.free(self.rawmem, flavor='raw') self.rawmem = lltype.nullptr(rffi.VOIDP.TO) From noreply at buildbot.pypy.org Wed Nov 9 19:12:05 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:05 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: low level support for float fields Message-ID: <20111109181205.BBB928292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49037:deebd66d7766 Date: 2011-11-09 17:56 +0100 http://bitbucket.org/pypy/pypy/changeset/deebd66d7766/ Log: low level support for float fields diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -459,6 +459,16 @@ _struct_setfield(lltype.SignedLongLong, addr, offset, value) + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_float(ffitype, addr, offset): + value = _struct_getfield(lltype.Float, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_float(ffitype, addr, offset, value): + _struct_setfield(lltype.Float, addr, offset, value) + + @specialize.arg(0) def _struct_getfield(TYPE, addr, offset): """ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -6,7 +6,8 @@ from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types from pypy.rlib.libffi import (IS_32_BIT, struct_getfield_int, struct_setfield_int, - struct_getfield_longlong, struct_setfield_longlong) + struct_getfield_longlong, struct_setfield_longlong, + struct_getfield_float, struct_setfield_float) class TestLibffiMisc(BaseFfiTest): @@ -95,6 +96,26 @@ # lltype.free(p, flavor='raw') + def test_struct_fields_float(self): + POINT = lltype.Struct('POINT', + ('x', rffi.DOUBLE), + ('y', rffi.DOUBLE) + ) + y_ofs = 8 + p = lltype.malloc(POINT, flavor='raw') + p.x = 123.4 + p.y = 567.8 + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_float(types.double, addr, 0) == 123.4 + assert struct_getfield_float(types.double, addr, y_ofs) == 567.8 + # + struct_setfield_float(types.double, addr, 0, 321.0) + struct_setfield_float(types.double, addr, y_ofs, 876.5) + assert p.x == 321.0 + assert p.y == 876.5 + # + lltype.free(p, flavor='raw') + class TestLibffiCall(BaseFfiTest): """ From noreply at buildbot.pypy.org Wed Nov 9 19:12:06 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:06 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: kill duplicate test Message-ID: <20111109181206.E9F298292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49038:92eec651058f Date: 2011-11-09 17:57 +0100 http://bitbucket.org/pypy/pypy/changeset/92eec651058f/ Log: kill duplicate test diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -77,22 +77,6 @@ assert fields[0].offset == 0 assert fields[1].offset == longsize # aligned to WORD - def test_getfield_setfield(self): - from _ffi import _StructDescr, Field, types - longsize = types.slong.sizeof() - fields = [ - Field('x', types.slong), - Field('y', types.slong), - ] - descr = _StructDescr('foo', fields) - struct = descr.allocate() - struct.setfield('x', 42) - struct.setfield('y', 43) - assert struct.getfield('x') == 42 - assert struct.getfield('y') == 43 - mem = self.read_raw_mem(struct.getaddr(), 'c_long', 2) - assert mem == [42, 43] - def test_missing_field(self): from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() From noreply at buildbot.pypy.org Wed Nov 9 19:12:08 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:08 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: applevel support for float fields Message-ID: <20111109181208.2B1738292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49039:0aad7df24682 Date: 2011-11-09 18:02 +0100 http://bitbucket.org/pypy/pypy/changeset/0aad7df24682/ Log: applevel support for float fields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -145,6 +145,10 @@ return space.wrap(r_uint(value)) return space.wrap(value) # + if w_ffitype.is_double(): + value = libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, offset) + return space.wrap(value) + # assert False, 'unknown type' @unwrap_spec(name=str) @@ -160,6 +164,11 @@ libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) return # + if w_ffitype.is_double(): + value = space.float_w(w_value) + libffi.struct_setfield_float(w_ffitype.ffitype, self.rawmem, offset, value) + return + # assert False, 'unknown type' W__StructInstance.typedef = TypeDef( diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -168,6 +168,20 @@ mem = self.read_raw_mem(struct.getaddr(), 'c_longlong', 2) assert mem == [-9223372036854775808, -1] + def test_getfield_setfield_float(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.double), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('x', 123.4) + assert struct.getfield('x') == 123.4 + mem = self.read_raw_mem(struct.getaddr(), 'c_double', 1) + assert mem == [123.4] + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 19:12:09 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:09 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: low-level support for single float fields Message-ID: <20111109181209.5F90E8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49040:eb02c4c1c0f0 Date: 2011-11-09 18:14 +0100 http://bitbucket.org/pypy/pypy/changeset/eb02c4c1c0f0/ Log: low-level support for single float fields diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -469,6 +469,16 @@ _struct_setfield(lltype.Float, addr, offset, value) + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_singlefloat(ffitype, addr, offset): + value = _struct_getfield(lltype.SingleFloat, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_singlefloat(ffitype, addr, offset, value): + _struct_setfield(lltype.SingleFloat, addr, offset, value) + + @specialize.arg(0) def _struct_getfield(TYPE, addr, offset): """ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -7,7 +7,8 @@ from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types from pypy.rlib.libffi import (IS_32_BIT, struct_getfield_int, struct_setfield_int, struct_getfield_longlong, struct_setfield_longlong, - struct_getfield_float, struct_setfield_float) + struct_getfield_float, struct_setfield_float, + struct_getfield_singlefloat, struct_setfield_singlefloat) class TestLibffiMisc(BaseFfiTest): @@ -117,6 +118,27 @@ lltype.free(p, flavor='raw') + def test_struct_fields_singlefloat(self): + POINT = lltype.Struct('POINT', + ('x', rffi.FLOAT), + ('y', rffi.FLOAT) + ) + y_ofs = 4 + p = lltype.malloc(POINT, flavor='raw') + p.x = r_singlefloat(123.4) + p.y = r_singlefloat(567.8) + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_singlefloat(types.double, addr, 0) == r_singlefloat(123.4) + assert struct_getfield_singlefloat(types.double, addr, y_ofs) == r_singlefloat(567.8) + # + struct_setfield_singlefloat(types.double, addr, 0, r_singlefloat(321.0)) + struct_setfield_singlefloat(types.double, addr, y_ofs, r_singlefloat(876.5)) + assert p.x == r_singlefloat(321.0) + assert p.y == r_singlefloat(876.5) + # + lltype.free(p, flavor='raw') + + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. From noreply at buildbot.pypy.org Wed Nov 9 19:12:10 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:10 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: applevel support for single float fields Message-ID: <20111109181210.925508292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49041:119ae38c2394 Date: 2011-11-09 18:23 +0100 http://bitbucket.org/pypy/pypy/changeset/119ae38c2394/ Log: applevel support for single float fields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -3,7 +3,7 @@ from pypy.rlib import libffi from pypy.rlib import jit from pypy.rlib.rgc import must_be_light_finalizer -from pypy.rlib.rarithmetic import r_uint, r_ulonglong +from pypy.rlib.rarithmetic import r_uint, r_ulonglong, r_singlefloat from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.interpreter.gateway import interp2app, unwrap_spec @@ -149,6 +149,10 @@ value = libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, offset) return space.wrap(value) # + if w_ffitype.is_singlefloat(): + value = libffi.struct_getfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset) + return space.wrap(float(value)) + # assert False, 'unknown type' @unwrap_spec(name=str) @@ -169,6 +173,11 @@ libffi.struct_setfield_float(w_ffitype.ffitype, self.rawmem, offset, value) return # + if w_ffitype.is_singlefloat(): + value = r_singlefloat(space.float_w(w_value)) + libffi.struct_setfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset, value) + return + # assert False, 'unknown type' W__StructInstance.typedef = TypeDef( diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -182,6 +182,25 @@ mem = self.read_raw_mem(struct.getaddr(), 'c_double', 1) assert mem == [123.4] + def test_getfield_setfield_singlefloat(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.float), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('x', 123.4) # this is a value which DOES loose + # precision in a single float + assert 0 < abs(struct.getfield('x') - 123.4) < 0.0001 + # + struct.setfield('x', 123.5) # this is a value which does not loose + # precision in a single float + assert struct.getfield('x') == 123.5 + mem = self.read_raw_mem(struct.getaddr(), 'c_float', 1) + assert mem == [123.5] + def test_compute_shape(self): from _ffi import Structure, Field, types class Point(Structure): From noreply at buildbot.pypy.org Wed Nov 9 19:12:11 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 9 Nov 2011 19:12:11 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add support for char/unichar fields Message-ID: <20111109181211.C11968292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49042:5ed2330756bb Date: 2011-11-09 19:11 +0100 http://bitbucket.org/pypy/pypy/changeset/5ed2330756bb/ Log: add support for char/unichar fields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -145,6 +145,14 @@ return space.wrap(r_uint(value)) return space.wrap(value) # + if w_ffitype.is_char(): + value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) + return space.wrap(chr(value)) + # + if w_ffitype.is_unichar(): + value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) + return space.wrap(unichr(value)) + # if w_ffitype.is_double(): value = libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, offset) return space.wrap(value) @@ -168,6 +176,11 @@ libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) return # + if w_ffitype.is_char() or w_ffitype.is_unichar(): + value = space.int_w(space.ord(w_value)) + libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) + return + # if w_ffitype.is_double(): value = space.float_w(w_value) libffi.struct_setfield_float(w_ffitype.ffitype, self.rawmem, offset, value) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -137,6 +137,8 @@ Field('ushort', types.ushort), Field('uint', types.uint), Field('ulong', types.ulong), + Field('char', types.char), + Field('unichar', types.unichar), ] descr = _StructDescr('foo', fields) struct = descr.allocate() @@ -150,6 +152,11 @@ assert struct.getfield('ulong') == sys.maxint*2 + 1 struct.setfield('ulong', sys.maxint*2 + 2) assert struct.getfield('ulong') == 0 + struct.setfield('char', 'a') + assert struct.getfield('char') == 'a' + struct.setfield('unichar', u'\u1234') + assert struct.getfield('unichar') == u'\u1234' + def test_getfield_setfield_longlong(self): import sys From noreply at buildbot.pypy.org Wed Nov 9 19:20:35 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 19:20:35 +0100 (CET) Subject: [pypy-commit] pypy default: Fix? the Windows build by using the Windows functions _isnan() Message-ID: <20111109182035.C89828292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49043:40d990865485 Date: 2011-11-09 19:20 +0100 http://bitbucket.org/pypy/pypy/changeset/40d990865485/ Log: Fix? the Windows build by using the Windows functions _isnan() and _finite() if we are *not* jitted. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -112,17 +114,28 @@ while VERY_LARGE_FLOAT * 100.0 != INFINITY: VERY_LARGE_FLOAT *= 64.0 +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return _lib_isnan(y) return y != y def ll_math_isinf(y): + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return _lib_finite(y) z = 0.0 * y return z == z # i.e.: z is not a NaN From noreply at buildbot.pypy.org Wed Nov 9 19:23:23 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 9 Nov 2011 19:23:23 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Alias TOC as r2. Message-ID: <20111109182323.14E948292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49044:81f3267b04ed Date: 2011-11-09 13:23 -0500 http://bitbucket.org/pypy/pypy/changeset/81f3267b04ed/ Log: Alias TOC as r2. diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/ppcgen/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/ppcgen/register.py @@ -12,6 +12,7 @@ SPP = r31 SP = r1 +TOC = r2 RES = r3 MANAGED_REGS = [r3, r4, r5, r6, r7, r8, r9, r10, From noreply at buildbot.pypy.org Wed Nov 9 19:25:43 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 9 Nov 2011 19:25:43 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use PPC64 instructions in _emit_call stack adjustment. Message-ID: <20111109182543.AE5C38292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49045:caa24e9f3445 Date: 2011-11-09 13:25 -0500 http://bitbucket.org/pypy/pypy/changeset/caa24e9f3445/ Log: Use PPC64 instructions in _emit_call stack adjustment. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -502,9 +502,14 @@ stack_space = 4 * (WORD + len(stack_args)) while stack_space % (4 * WORD) != 0: stack_space += 1 - self.mc.stwu(1, 1, -stack_space) - self.mc.mflr(0) - self.mc.stw(0, 1, stack_space + WORD) + if IS_PPC_32: + self.mc.stwu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) + else: + self.mc.stdu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.std(r.r0.value, r.SP.value, stack_space + WORD) # then we push everything on the stack for i, arg in enumerate(stack_args): From noreply at buildbot.pypy.org Wed Nov 9 19:52:59 2011 From: noreply at buildbot.pypy.org (hager) Date: Wed, 9 Nov 2011 19:52:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented calls to C functions. Message-ID: <20111109185259.D77448292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49046:93688057e7f3 Date: 2011-11-09 19:45 +0100 http://bitbucket.org/pypy/pypy/changeset/93688057e7f3/ Log: Implemented calls to C functions. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -568,6 +568,13 @@ if result is not None: resloc = regalloc.after_call(result) + def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): + self.mc.mr(r.r0.value, r.SP.value) + self.mc.cmpi(r.r0.value, 0) + self._emit_guard(guard_op, arglocs, c.EQ) + + emit_guard_call_release_gil = emit_guard_call_may_force + def write_new_force_index(self): # for shadowstack only: get a new, unused force_index number and # write it to FORCE_INDEX_OFS. Used to record the call shape diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -332,7 +332,7 @@ y = self.force_allocate_reg(t, boxes) boxes.append(t) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) - self.assembler.load_imm(y.value, y_val) + self.assembler.mc.load_imm(y, y_val) offset = self.cpu.vtable_offset assert offset is not None @@ -345,6 +345,30 @@ prepare_guard_nonnull_class = prepare_guard_class + def prepare_guard_call_release_gil(self, op, guard_op): + # first, close the stack in the sense of the asmgcc GC root tracker + gcrootmap = self.cpu.gc_ll_descr.gcrootmap + if gcrootmap: + arglocs = [] + argboxes = [] + for i in range(op.numargs()): + loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) + arglocs.append(loc) + argboxes.append(box) + self.assembler.call_release_gil(gcrootmap, arglocs, fcond) + self.possibly_free_vars(argboxes) + # do the call + faildescr = guard_op.getdescr() + fail_index = self.cpu.get_fail_descr_number(faildescr) + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + self.assembler.emit_call(op, args, self, fail_index) + # then reopen the stack + if gcrootmap: + self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond) + locs = self._prepare_guard(guard_op) + self.possibly_free_vars(guard_op.getfailargs()) + return locs + def prepare_jump(self, op): descr = op.getdescr() assert isinstance(descr, LoopToken) @@ -605,21 +629,37 @@ assert (1 << scale) == size return size, scale, ofs, ofs_length, ptr -def make_operation_list(): - def not_implemented(self, op, *args): - raise NotImplementedError, op +def add_none_argument(fn): + return lambda self, op: fn(self, op, None) - operations = [None] * (rop._LAST + 1) - for key, val in rop.__dict__.items(): - key = key.lower() - if key.startswith("_"): - continue - methname = "prepare_%s" % key - if hasattr(Regalloc, methname): - func = getattr(Regalloc, methname).im_func - else: - func = not_implemented - operations[val] = func - return operations +def notimplemented(self, op): + raise NotImplementedError, op -Regalloc.operations = make_operation_list() +def notimplemented_with_guard(self, op, guard_op): + + raise NotImplementedError, op + +operations = [notimplemented] * (rop._LAST + 1) +operations_with_guard = [notimplemented_with_guard] * (rop._LAST + 1) + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'prepare_%s' % key + if hasattr(Regalloc, methname): + func = getattr(Regalloc, methname).im_func + operations[value] = func + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'prepare_guard_%s' % key + if hasattr(Regalloc, methname): + func = getattr(Regalloc, methname).im_func + operations_with_guard[value] = func + operations[value] = add_none_argument(func) + +Regalloc.operations = operations +Regalloc.operations_with_guard = operations_with_guard From noreply at buildbot.pypy.org Wed Nov 9 19:53:01 2011 From: noreply at buildbot.pypy.org (hager) Date: Wed, 9 Nov 2011 19:53:01 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20111109185301.31BC08292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49047:68560c739dce Date: 2011-11-09 19:52 +0100 http://bitbucket.org/pypy/pypy/changeset/68560c739dce/ Log: merge diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -502,9 +502,14 @@ stack_space = 4 * (WORD + len(stack_args)) while stack_space % (4 * WORD) != 0: stack_space += 1 - self.mc.stwu(1, 1, -stack_space) - self.mc.mflr(0) - self.mc.stw(0, 1, stack_space + WORD) + if IS_PPC_32: + self.mc.stwu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) + else: + self.mc.stdu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.std(r.r0.value, r.SP.value, stack_space + WORD) # then we push everything on the stack for i, arg in enumerate(stack_args): @@ -549,9 +554,7 @@ #the actual call if IS_PPC_32: self.mc.bl_abs(adr) - self.mc.lwz(0, 1, stack_space + WORD) - self.mc.mtlr(0) - self.mc.addi(1, 1, stack_space) + self.mc.lwz(r.r0.value, r.SP.value, stack_space + WORD) else: self.mc.std(r.r2.value, r.SP.value, 40) self.mc.load_from_addr(r.r0, adr) @@ -560,6 +563,9 @@ self.mc.mtctr(r.r0.value) self.mc.bctrl() self.mc.ld(r.r2.value, r.SP.value, 40) + self.mc.ld(r.r0.value, r.SP.value, stack_space + WORD) + self.mc.mtlr(r.r0.value) + self.mc.addi(r.SP.value, r.SP.value, stack_space) self.mark_gc_roots(force_index) regalloc.possibly_free_vars(args) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -8,7 +8,8 @@ from pypy.jit.backend.ppc.ppcgen.opassembler import OpAssembler from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, +from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, IS_PPC_64, WORD, + NONVOLATILES, GPR_SAVE_AREA, BACKCHAIN_SIZE) from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, encode32, decode32) @@ -134,7 +135,7 @@ else: self.mc.stdu(r.SP.value, r.SP.value, -frame_depth) self.mc.mflr(r.r0.value) - self.mc.std(r.r0.value, r.SP.value, frame_depth + 2 * WORD) + self.mc.std(r.r0.value, r.SP.value, frame_depth + WORD) offset = GPR_SAVE_AREA + WORD # compute spilling pointer (SPP) self.mc.addi(r.SPP.value, r.SP.value, frame_depth - offset) @@ -296,7 +297,10 @@ # XXX do quadword alignment #while size % (4 * WORD) != 0: # size += WORD - mc.addi(r.SP.value, r.SP.value, -size) + if IS_PPC_32: + mc.stwu(r.SP.value, r.SP.value, -size) + else: + mc.stdu(r.SP.value, r.SP.value, -size) # decode_func_addr = llhelper(self.recovery_func_sign, self.failure_recovery_func) @@ -306,6 +310,7 @@ intp = lltype.Ptr(lltype.Array(lltype.Signed, hints={'nolength': True})) descr = rffi.cast(intp, decode_func_addr) addr = descr[0] + r2_value = descr[1] r11_value = descr[2] # @@ -319,7 +324,9 @@ # # load address of decoding function into r0 mc.load_imm(r.r0, addr) - mc.load_imm(r.r11, r11_value) + if IS_PPC_64: + mc.load_imm(r.r2, r2_value) + mc.load_imm(r.r11, r11_value) # ... and branch there mc.mtctr(r.r0.value) mc.bctrl() @@ -675,7 +682,7 @@ def _ensure_result_bit_extension(self, resloc, size, signed): if size == 1: if not signed: #unsigned char - if IS_PPC32: + if IS_PPC_32: self.mc.rlwinm(resloc.value, resloc.value, 0, 24, 31) else: self.mc.rldicl(resloc.value, resloc.value, 0, 56) diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/ppcgen/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/ppcgen/register.py @@ -12,6 +12,7 @@ SPP = r31 SP = r1 +TOC = r2 RES = r3 MANAGED_REGS = [r3, r4, r5, r6, r7, r8, r9, r10, From noreply at buildbot.pypy.org Wed Nov 9 20:34:28 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 Nov 2011 20:34:28 +0100 (CET) Subject: [pypy-commit] pypy default: Oups. Message-ID: <20111109193428.D3FCC8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49048:d6c0d1f92e1b Date: 2011-11-09 20:34 +0100 http://bitbucket.org/pypy/pypy/changeset/d6c0d1f92e1b/ Log: Oups. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -123,7 +123,7 @@ # By not calling into the external function the JIT can inline this. # Floats are awesome. if use_library_isinf_isnan and not jit.we_are_jitted(): - return _lib_isnan(y) + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): @@ -135,7 +135,7 @@ # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). if use_library_isinf_isnan and not jit.we_are_jitted(): - return _lib_finite(y) + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN From noreply at buildbot.pypy.org Wed Nov 9 20:34:48 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:48 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge jit-refactor-tests Message-ID: <20111109193448.0562D8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49049:83bf8f708ffa Date: 2011-11-08 18:02 +0100 http://bitbucket.org/pypy/pypy/changeset/83bf8f708ffa/ Log: hg merge jit-refactor-tests diff --git a/pypy/jit/metainterp/test/test_del.py b/pypy/jit/metainterp/test/test_del.py --- a/pypy/jit/metainterp/test/test_del.py +++ b/pypy/jit/metainterp/test/test_del.py @@ -20,12 +20,12 @@ n -= 1 return 42 self.meta_interp(f, [20]) - self.check_loops({'call': 2, # calls to a helper function - 'guard_no_exception': 2, # follows the calls - 'int_sub': 1, - 'int_gt': 1, - 'guard_true': 1, - 'jump': 1}) + self.check_resops({'call': 4, # calls to a helper function + 'guard_no_exception': 4, # follows the calls + 'int_sub': 2, + 'int_gt': 2, + 'guard_true': 2, + 'jump': 2}) def test_class_of_allocated(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'x']) @@ -78,7 +78,7 @@ return 1 res = self.meta_interp(f, [20], enable_opts='') assert res == 1 - self.check_loops(call=1) # for the case B(), but not for the case A() + self.check_resops(call=1) # for the case B(), but not for the case A() class TestLLtype(DelTests, LLJitMixin): @@ -103,7 +103,7 @@ break return 42 self.meta_interp(f, [20]) - self.check_loops(getfield_raw=1, setfield_raw=1, call=0, call_pure=0) + self.check_resops(call_pure=0, setfield_raw=2, call=0, getfield_raw=2) class TestOOtype(DelTests, OOJitMixin): def setup_class(cls): diff --git a/pypy/jit/metainterp/test/test_dict.py b/pypy/jit/metainterp/test/test_dict.py --- a/pypy/jit/metainterp/test/test_dict.py +++ b/pypy/jit/metainterp/test/test_dict.py @@ -91,7 +91,7 @@ res1 = f(100) res2 = self.meta_interp(f, [100], listops=True) assert res1 == res2 - self.check_loops(int_mod=1) # the hash was traced and eq, but cached + self.check_resops(int_mod=2) # the hash was traced and eq, but cached def test_dict_setdefault(self): myjitdriver = JitDriver(greens = [], reds = ['total', 'dct']) @@ -107,7 +107,7 @@ assert f(100) == 50 res = self.meta_interp(f, [100], listops=True) assert res == 50 - self.check_loops(new=0, new_with_vtable=0) + self.check_resops(new=0, new_with_vtable=0) def test_dict_as_counter(self): myjitdriver = JitDriver(greens = [], reds = ['total', 'dct']) @@ -128,7 +128,7 @@ assert f(100) == 50 res = self.meta_interp(f, [100], listops=True) assert res == 50 - self.check_loops(int_mod=1) # key + eq, but cached + self.check_resops(int_mod=2) # key + eq, but cached def test_repeated_lookup(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'd']) @@ -153,12 +153,13 @@ res = self.meta_interp(f, [100], listops=True) assert res == f(50) - self.check_loops({"call": 5, "getfield_gc": 1, "getinteriorfield_gc": 1, - "guard_false": 1, "guard_no_exception": 4, - "guard_true": 1, "int_and": 1, "int_gt": 1, - "int_is_true": 1, "int_sub": 1, "jump": 1, - "new_with_vtable": 1, "new": 1, "new_array": 1, - "setfield_gc": 3, }) + self.check_resops({'new_array': 2, 'getfield_gc': 2, + 'guard_true': 2, 'jump': 2, + 'new_with_vtable': 2, 'getinteriorfield_gc': 2, + 'setfield_gc': 6, 'int_gt': 2, 'int_sub': 2, + 'call': 10, 'int_and': 2, + 'guard_no_exception': 8, 'new': 2, + 'guard_false': 2, 'int_is_true': 2}) class TestOOtype(DictTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -68,23 +68,23 @@ 'byval': False} supported = all(d[check] for check in jitif) if supported: - self.check_loops( - call_release_gil=1, # a CALL_RELEASE_GIL, and no other CALLs + self.check_resops( + call_release_gil=2, # a CALL_RELEASE_GIL, and no other CALLs call=0, call_may_force=0, - guard_no_exception=1, - guard_not_forced=1, - int_add=1, - int_lt=1, - guard_true=1, - jump=1) + guard_no_exception=2, + guard_not_forced=2, + int_add=2, + int_lt=2, + guard_true=2, + jump=2) else: - self.check_loops( + self.check_resops( call_release_gil=0, # no CALL_RELEASE_GIL - int_add=1, - int_lt=1, - guard_true=1, - jump=1) + int_add=2, + int_lt=2, + guard_true=2, + jump=2) return res def test_byval_result(self): diff --git a/pypy/jit/metainterp/test/test_greenfield.py b/pypy/jit/metainterp/test/test_greenfield.py --- a/pypy/jit/metainterp/test/test_greenfield.py +++ b/pypy/jit/metainterp/test/test_greenfield.py @@ -25,7 +25,7 @@ res = self.meta_interp(g, [7]) assert res == -2 self.check_loop_count(2) - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) def test_green_field_2(self): myjitdriver = JitDriver(greens=['ctx.x'], reds=['ctx']) @@ -50,7 +50,7 @@ res = self.meta_interp(g, [7]) assert res == -22 self.check_loop_count(6) - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) class TestLLtypeGreenFieldsTests(GreenFieldsTests, LLJitMixin): diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -88,7 +88,7 @@ assert res == loop2(4, 40) # we expect only one int_sub, corresponding to the single # compiled instance of loop1() - self.check_loops(int_sub=1) + self.check_resops(int_sub=2) # the following numbers are not really expectations of the test # itself, but just the numbers that we got after looking carefully # at the generated machine code @@ -154,7 +154,7 @@ res = self.meta_interp(loop2, [4, 40], repeat=7, inline=True) assert res == loop2(4, 40) # we expect no int_sub, but a residual call - self.check_loops(int_sub=0, call=1) + self.check_resops(call=2, int_sub=0) def test_multiple_jits_trace_too_long(self): myjitdriver1 = JitDriver(greens=["n"], reds=["i", "box"]) diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -6,8 +6,8 @@ class ListTests: def check_all_virtualized(self): - self.check_loops(new_array=0, setarrayitem_gc=0, getarrayitem_gc=0, - arraylen_gc=0) + self.check_resops(setarrayitem_gc=0, new_array=0, arraylen_gc=0, + getarrayitem_gc=0) def test_simple_array(self): jitdriver = JitDriver(greens = [], reds = ['n']) @@ -20,7 +20,7 @@ return n res = self.meta_interp(f, [10], listops=True) assert res == 0 - self.check_loops(int_sub=1) + self.check_resops(int_sub=2) self.check_all_virtualized() def test_list_pass_around(self): @@ -56,7 +56,8 @@ res = self.meta_interp(f, [10], listops=True) assert res == f(10) # one setitem should be gone by now - self.check_loops(call=1, setarrayitem_gc=2, getarrayitem_gc=1) + self.check_resops(setarrayitem_gc=4, getarrayitem_gc=2, call=2) + def test_ll_fixed_setitem_fast(self): jitdriver = JitDriver(greens = [], reds = ['n', 'l']) @@ -93,7 +94,7 @@ res = self.meta_interp(f, [10], listops=True) assert res == f(10) - self.check_loops(setarrayitem_gc=0, getarrayitem_gc=0, call=0) + self.check_resops(setarrayitem_gc=0, call=0, getarrayitem_gc=0) def test_vlist_alloc_and_set(self): # the check_loops fails, because [non-null] * n is not supported yet @@ -141,7 +142,7 @@ res = self.meta_interp(f, [5], listops=True) assert res == 7 - self.check_loops(call=0) + self.check_resops(call=0) def test_fold_getitem_1(self): jitdriver = JitDriver(greens = ['pc', 'n', 'l'], reds = ['total']) @@ -161,7 +162,7 @@ res = self.meta_interp(f, [4], listops=True) assert res == f(4) - self.check_loops(call=0) + self.check_resops(call=0) def test_fold_getitem_2(self): jitdriver = JitDriver(greens = ['pc', 'n', 'l'], reds = ['total', 'x']) @@ -186,7 +187,7 @@ res = self.meta_interp(f, [4], listops=True) assert res == f(4) - self.check_loops(call=0, getfield_gc=0) + self.check_resops(call=0, getfield_gc=0) def test_fold_indexerror(self): jitdriver = JitDriver(greens = [], reds = ['total', 'n', 'lst']) @@ -206,7 +207,7 @@ res = self.meta_interp(f, [15], listops=True) assert res == f(15) - self.check_loops(guard_exception=0) + self.check_resops(guard_exception=0) def test_virtual_resize(self): jitdriver = JitDriver(greens = [], reds = ['n', 's']) @@ -224,9 +225,8 @@ return s res = self.meta_interp(f, [15], listops=True) assert res == f(15) - self.check_loops({"int_add": 1, "int_sub": 1, "int_gt": 1, - "guard_true": 1, "jump": 1}) - + self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + 'guard_true': 2, 'int_sub': 2}) class TestOOtype(ListTests, OOJitMixin): pass @@ -258,4 +258,4 @@ assert res == f(37) # There is the one actual field on a, plus several fields on the list # itself - self.check_loops(getfield_gc=10, everywhere=True) + self.check_resops(getfield_gc=10) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -73,8 +73,7 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -103,7 +102,7 @@ assert f(100, 7) == 721 res = self.meta_interp(f, [100, 7]) assert res == 721 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=3) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -134,8 +133,7 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0) def test_change_during_tracing_1(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -160,7 +158,7 @@ assert f(100, 7) == 721 res = self.meta_interp(f, [100, 7]) assert res == 721 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=2) def test_change_during_tracing_2(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -186,7 +184,7 @@ assert f(100, 7) == 700 res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=0, getfield_gc=1) + self.check_resops(guard_not_invalidated=0, getfield_gc=2) def test_change_invalidate_reentering(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -212,7 +210,7 @@ assert g(100, 7) == 700707 res = self.meta_interp(g, [100, 7]) assert res == 700707 - self.check_loops(guard_not_invalidated=2, getfield_gc=0) + self.check_resops(guard_not_invalidated=4, getfield_gc=0) def test_invalidate_while_running(self): jitdriver = JitDriver(greens=['foo'], reds=['i', 'total']) @@ -324,8 +322,8 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=4, getfield_gc=0, - call_may_force=0, guard_not_forced=0) + self.check_resops(guard_not_invalidated=8, guard_not_forced=0, + call_may_force=0, getfield_gc=0) def test_list_simple_1(self): myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -347,9 +345,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - everywhere=True) + self.check_resops(getarrayitem_gc_pure=0, guard_not_invalidated=2, + getarrayitem_gc=0, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -385,9 +382,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 714 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - arraylen_gc=0, everywhere=True) + self.check_resops(getarrayitem_gc_pure=0, guard_not_invalidated=2, + arraylen_gc=0, getarrayitem_gc=0, getfield_gc=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -421,9 +417,8 @@ # res = self.meta_interp(f, [100, 7]) assert res == 700 - self.check_loops(guard_not_invalidated=2, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - everywhere=True) + self.check_resops(guard_not_invalidated=2, getfield_gc=0, + getarrayitem_gc=0, getarrayitem_gc_pure=0) # from pypy.jit.metainterp.warmspot import get_stats loops = get_stats().loops @@ -460,9 +455,9 @@ assert f(100, 15) == 3009 res = self.meta_interp(f, [100, 15]) assert res == 3009 - self.check_loops(guard_not_invalidated=4, getfield_gc=0, - getarrayitem_gc=0, getarrayitem_gc_pure=0, - call_may_force=0, guard_not_forced=0) + self.check_resops(call_may_force=0, getfield_gc=0, + getarrayitem_gc_pure=0, guard_not_forced=0, + getarrayitem_gc=0, guard_not_invalidated=8) def test_invalidated_loop_is_not_used_any_more_as_target(self): myjitdriver = JitDriver(greens=['foo'], reds=['x']) diff --git a/pypy/jit/metainterp/test/test_slist.py b/pypy/jit/metainterp/test/test_slist.py --- a/pypy/jit/metainterp/test/test_slist.py +++ b/pypy/jit/metainterp/test/test_slist.py @@ -76,7 +76,7 @@ return lst[i] res = self.meta_interp(f, [21], listops=True) assert res == f(21) - self.check_loops(call=0) + self.check_resops(call=0) def test_getitem_neg(self): myjitdriver = JitDriver(greens = [], reds = ['i', 'n']) @@ -92,7 +92,7 @@ return x res = self.meta_interp(f, [-2], listops=True) assert res == 41 - self.check_loops(call=0, guard_value=0) + self.check_resops(call=0, guard_value=0) # we don't support resizable lists on ootype #class TestOOtype(ListTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_string.py b/pypy/jit/metainterp/test/test_string.py --- a/pypy/jit/metainterp/test/test_string.py +++ b/pypy/jit/metainterp/test/test_string.py @@ -30,7 +30,7 @@ return i res = self.meta_interp(f, [10, True, _str('h')], listops=True) assert res == 5 - self.check_loops(**{self.CALL: 1, self.CALL_PURE: 0, 'everywhere': True}) + self.check_resops(**{self.CALL: 1, self.CALL_PURE: 0}) def test_eq_folded(self): _str = self._str @@ -50,7 +50,7 @@ return i res = self.meta_interp(f, [10, True, _str('h')], listops=True) assert res == 5 - self.check_loops(**{self.CALL: 0, self.CALL_PURE: 0}) + self.check_resops(**{self.CALL: 0, self.CALL_PURE: 0}) def test_newstr(self): _str, _chr = self._str, self._chr @@ -85,7 +85,7 @@ n -= 1 return 42 self.meta_interp(f, [6]) - self.check_loops(newstr=0, strsetitem=0, strlen=0, + self.check_resops(newstr=0, strsetitem=0, strlen=0, newunicode=0, unicodesetitem=0, unicodelen=0) def test_char2string_escape(self): @@ -126,7 +126,7 @@ return total res = self.meta_interp(f, [6]) assert res == 21 - self.check_loops(newstr=0, strgetitem=0, strsetitem=0, strlen=0, + self.check_resops(newstr=0, strgetitem=0, strsetitem=0, strlen=0, newunicode=0, unicodegetitem=0, unicodesetitem=0, unicodelen=0) @@ -147,7 +147,7 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(newstr=0, strsetitem=0, + self.check_resops(newstr=0, strsetitem=0, newunicode=0, unicodesetitem=0, call=0, call_pure=0) @@ -168,12 +168,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=0, copystrcontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=4, + strsetitem=0, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=0, - copyunicodecontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=0, call=2, + copyunicodecontent=4, newunicode=2) def test_strconcat_escape_str_char(self): _str, _chr = self._str, self._chr @@ -192,12 +191,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=2, strsetitem=2, + call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=2, newunicode=2) def test_strconcat_escape_char_str(self): _str, _chr = self._str, self._chr @@ -216,12 +214,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=2, + strsetitem=2, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=1, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=2, newunicode=2) def test_strconcat_escape_char_char(self): _str, _chr = self._str, self._chr @@ -239,12 +236,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=2, copystrcontent=0, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=0, + strsetitem=4, call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=2, - copyunicodecontent=0, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=4, call=2, + copyunicodecontent=0, newunicode=2) def test_strconcat_escape_str_char_str(self): _str, _chr = self._str, self._chr @@ -263,12 +259,11 @@ return 42 self.meta_interp(f, [6, 7]) if _str is str: - self.check_loops(newstr=1, strsetitem=1, copystrcontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, copystrcontent=4, strsetitem=2, + call=2, newstr=2) else: - self.check_loops(newunicode=1, unicodesetitem=1, - copyunicodecontent=2, - call=1, call_pure=0) # escape + self.check_resops(call_pure=0, unicodesetitem=2, call=2, + copyunicodecontent=4, newunicode=2) def test_strconcat_guard_fail(self): _str = self._str @@ -325,7 +320,7 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(newstr=0, newunicode=0) + self.check_resops(newunicode=0, newstr=0) def test_str_slice_len_surviving(self): _str = self._str @@ -504,9 +499,9 @@ sys.defaultencoding = _str('utf-8') return sa assert self.meta_interp(f, [8]) == f(8) - self.check_loops({'int_add': 1, 'guard_true': 1, 'int_sub': 1, - 'jump': 1, 'int_is_true': 1, - 'guard_not_invalidated': 1}) + self.check_resops({'jump': 2, 'int_is_true': 2, 'int_add': 2, + 'guard_true': 2, 'guard_not_invalidated': 2, + 'int_sub': 2}) def test_promote_string(self): driver = JitDriver(greens = [], reds = ['n']) @@ -519,7 +514,7 @@ return 0 self.meta_interp(f, [0]) - self.check_loops(call=3 + 1) # one for int2str + self.check_resops(call=7) #class TestOOtype(StringTests, OOJitMixin): # CALL = "oosend" @@ -552,9 +547,8 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(call=1, # escape() - newunicode=1, unicodegetitem=0, - unicodesetitem=1, copyunicodecontent=1) + self.check_resops(unicodesetitem=2, newunicode=2, call=4, + copyunicodecontent=2, unicodegetitem=0) def test_str2unicode_fold(self): _str = self._str @@ -572,9 +566,9 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(call_pure=0, call=1, - newunicode=0, unicodegetitem=0, - unicodesetitem=0, copyunicodecontent=0) + self.check_resops(call_pure=0, unicodesetitem=0, call=2, + newunicode=0, unicodegetitem=0, + copyunicodecontent=0) def test_join_chars(self): jitdriver = JitDriver(reds=['a', 'b', 'c', 'i'], greens=[]) @@ -596,9 +590,8 @@ # The "".join should be unrolled, since the length of x is known since # it is virtual, ensure there are no calls to ll_join_chars, or # allocations. - self.check_loops({ - "guard_true": 5, "int_is_true": 3, "int_lt": 2, "int_add": 2, "jump": 2, - }, everywhere=True) + self.check_resops({'jump': 2, 'guard_true': 5, 'int_lt': 2, + 'int_add': 2, 'int_is_true': 3}) def test_virtual_copystringcontent(self): jitdriver = JitDriver(reds=['n', 'result'], greens=[]) diff --git a/pypy/jit/metainterp/test/test_tl.py b/pypy/jit/metainterp/test/test_tl.py --- a/pypy/jit/metainterp/test/test_tl.py +++ b/pypy/jit/metainterp/test/test_tl.py @@ -72,16 +72,16 @@ res = self.meta_interp(main, [0, 6], listops=True, backendopt=True) assert res == 5040 - self.check_loops({'int_mul':1, 'jump':1, - 'int_sub':1, 'int_le':1, 'guard_false':1}) + self.check_resops({'jump': 2, 'int_le': 2, 'guard_value': 1, + 'int_mul': 2, 'guard_false': 2, 'int_sub': 2}) def test_tl_2(self): main = self._get_main() res = self.meta_interp(main, [1, 10], listops=True, backendopt=True) assert res == main(1, 10) - self.check_loops({'int_sub':1, 'int_le':1, - 'guard_false':1, 'jump':1}) + self.check_resops({'int_le': 2, 'int_sub': 2, 'jump': 2, + 'guard_false': 2, 'guard_value': 1}) def test_tl_call(self, listops=True, policy=None): from pypy.jit.tl.tl import interp diff --git a/pypy/jit/metainterp/test/test_virtualizable.py b/pypy/jit/metainterp/test/test_virtualizable.py --- a/pypy/jit/metainterp/test/test_virtualizable.py +++ b/pypy/jit/metainterp/test/test_virtualizable.py @@ -77,7 +77,7 @@ return xy.inst_x res = self.meta_interp(f, [20]) assert res == 30 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_preexisting_access_2(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -102,7 +102,7 @@ assert f(5) == 185 res = self.meta_interp(f, [5]) assert res == 185 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_two_paths_access(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -124,7 +124,7 @@ return xy.inst_x res = self.meta_interp(f, [18]) assert res == 10118 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_synchronize_in_return(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy'], @@ -146,7 +146,7 @@ return xy.inst_x res = self.meta_interp(f, [18]) assert res == 10180 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_virtualizable_and_greens(self): myjitdriver = JitDriver(greens = ['m'], reds = ['n', 'xy'], @@ -174,7 +174,7 @@ return res res = self.meta_interp(f, [40]) assert res == 50 * 4 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_double_frame(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy', 'other'], @@ -197,8 +197,7 @@ return xy.inst_x res = self.meta_interp(f, [20]) assert res == 134 - self.check_loops(getfield_gc=0, setfield_gc=1) - self.check_loops(getfield_gc=1, setfield_gc=2, everywhere=True) + self.check_resops(setfield_gc=2, getfield_gc=1) # ------------------------------ @@ -248,8 +247,8 @@ return xy2.inst_l1[2] res = self.meta_interp(f, [16]) assert res == 3001 + 16 * 80 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, setarrayitem_gc=0) + self.check_resops(setarrayitem_gc=0, setfield_gc=0, + getarrayitem_gc=0, getfield_gc=0) def test_synchronize_arrays_in_return(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -279,8 +278,7 @@ assert f(18) == 10360 res = self.meta_interp(f, [18]) assert res == 10360 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, getfield_gc=0) def test_array_length(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -306,8 +304,8 @@ return xy2.inst_l1[1] res = self.meta_interp(f, [18]) assert res == 2941309 + 18 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, arraylen_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, + arraylen_gc=0, getfield_gc=0) def test_residual_function(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2'], @@ -340,8 +338,8 @@ return xy2.inst_l1[1] res = self.meta_interp(f, [18]) assert res == 2941309 + 18 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0, arraylen_gc=1, call=1) + self.check_resops(call=2, setfield_gc=0, getarrayitem_gc=0, + arraylen_gc=2, getfield_gc=0) def test_double_frame_array(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'xy2', 'other'], @@ -377,8 +375,8 @@ expected = f(20) res = self.meta_interp(f, [20], enable_opts='') assert res == expected - self.check_loops(getfield_gc=1, setfield_gc=0, - arraylen_gc=1, getarrayitem_gc=1, setarrayitem_gc=1) + self.check_resops(setarrayitem_gc=1, setfield_gc=0, + getarrayitem_gc=1, arraylen_gc=1, getfield_gc=1) # ------------------------------ @@ -425,8 +423,7 @@ assert f(18) == 10360 res = self.meta_interp(f, [18]) assert res == 10360 - self.check_loops(getfield_gc=0, setfield_gc=0, - getarrayitem_gc=0) + self.check_resops(setfield_gc=0, getarrayitem_gc=0, getfield_gc=0) # ------------------------------ @@ -460,8 +457,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) - + self.check_resops(setfield_gc=0, getfield_gc=0) def test_virtualizable_with_array(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'x', 'frame'], @@ -495,8 +491,7 @@ res = self.meta_interp(f, [10, 1], listops=True) assert res == f(10, 1) - self.check_loops(getarrayitem_gc=0) - + self.check_resops(getarrayitem_gc=0) def test_subclass_of_virtualizable(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -524,8 +519,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) - + self.check_resops(setfield_gc=0, getfield_gc=0) def test_external_pass(self): jitdriver = JitDriver(greens = [], reds = ['n', 'z', 'frame'], @@ -1011,8 +1005,8 @@ res = self.meta_interp(f, [70], listops=True) assert res == intmask(42 ** 70) - self.check_loops(int_add=0, - int_sub=1) # for 'n -= 1' only + self.check_resops(int_add=0, + int_sub=2) # for 'n -= 1' only def test_simple_access_directly(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1043,7 +1037,7 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) from pypy.jit.backend.test.support import BaseCompiledMixin if isinstance(self, BaseCompiledMixin): @@ -1098,42 +1092,42 @@ res = self.meta_interp(f, [10]) assert res == 55 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_check_for_nonstandardness_only_once(self): - myjitdriver = JitDriver(greens = [], reds = ['frame'], - virtualizables = ['frame']) + myjitdriver = JitDriver(greens = [], reds = ['frame'], + virtualizables = ['frame']) - class Frame(object): - _virtualizable2_ = ['x', 'y', 'z'] + class Frame(object): + _virtualizable2_ = ['x', 'y', 'z'] - def __init__(self, x, y, z=1): - self = hint(self, access_directly=True) - self.x = x - self.y = y - self.z = z + def __init__(self, x, y, z=1): + self = hint(self, access_directly=True) + self.x = x + self.y = y + self.z = z - class SomewhereElse: - pass - somewhere_else = SomewhereElse() + class SomewhereElse: + pass + somewhere_else = SomewhereElse() - def f(n): - frame = Frame(n, 0) - somewhere_else.top_frame = frame # escapes - frame = hint(frame, access_directly=True) - while frame.x > 0: - myjitdriver.can_enter_jit(frame=frame) - myjitdriver.jit_merge_point(frame=frame) - top_frame = somewhere_else.top_frame - child_frame = Frame(frame.x, top_frame.z, 17) - frame.y += child_frame.x - frame.x -= top_frame.z - return somewhere_else.top_frame.y - - res = self.meta_interp(f, [10]) - assert res == 55 - self.check_loops(new_with_vtable=0, ptr_eq=1, everywhere=True) - self.check_history(ptr_eq=2) + def f(n): + frame = Frame(n, 0) + somewhere_else.top_frame = frame # escapes + frame = hint(frame, access_directly=True) + while frame.x > 0: + myjitdriver.can_enter_jit(frame=frame) + myjitdriver.jit_merge_point(frame=frame) + top_frame = somewhere_else.top_frame + child_frame = Frame(frame.x, top_frame.z, 17) + frame.y += child_frame.x + frame.x -= top_frame.z + return somewhere_else.top_frame.y + + res = self.meta_interp(f, [10]) + assert res == 55 + self.check_resops(new_with_vtable=0, ptr_eq=1) + self.check_history(ptr_eq=2) def test_virtual_child_frame_with_arrays(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1165,7 +1159,7 @@ res = self.meta_interp(f, [10], listops=True) assert res == 55 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_blackhole_should_not_pay_attention(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1203,7 +1197,7 @@ res = self.meta_interp(f, [10]) assert res == 155 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_blackhole_should_synchronize(self): myjitdriver = JitDriver(greens = [], reds = ['frame'], @@ -1239,7 +1233,7 @@ res = self.meta_interp(f, [10]) assert res == 155 - self.check_loops(getfield_gc=0, setfield_gc=0) + self.check_resops(setfield_gc=0, getfield_gc=0) def test_blackhole_should_not_reenter(self): if not self.basic: diff --git a/pypy/jit/metainterp/test/test_virtualref.py b/pypy/jit/metainterp/test/test_virtualref.py --- a/pypy/jit/metainterp/test/test_virtualref.py +++ b/pypy/jit/metainterp/test/test_virtualref.py @@ -171,7 +171,7 @@ return 1 # self.meta_interp(f, [10]) - self.check_loops(new_with_vtable=1) # the vref + self.check_resops(new_with_vtable=2) # the vref self.check_aborted_count(0) def test_simple_all_removed(self): @@ -205,8 +205,7 @@ virtual_ref_finish(vref, xy) # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=0, # all virtualized - new_array=0) + self.check_resops(new_with_vtable=0, new_array=0) self.check_aborted_count(0) def test_simple_no_access(self): @@ -242,7 +241,7 @@ virtual_ref_finish(vref, xy) # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=1, # the vref: xy doesn't need to be forced + self.check_resops(new_with_vtable=2, # the vref: xy doesn't need to be forced new_array=0) # and neither xy.next1/2/3 self.check_aborted_count(0) @@ -280,8 +279,8 @@ exctx.topframeref = vref_None # self.meta_interp(f, [15]) - self.check_loops(new_with_vtable=2, # XY(), the vref - new_array=3) # next1/2/3 + self.check_resops(new_with_vtable=4, # XY(), the vref + new_array=6) # next1/2/3 self.check_aborted_count(0) def test_simple_force_sometimes(self): @@ -320,8 +319,8 @@ # res = self.meta_interp(f, [30]) assert res == 13 - self.check_loops(new_with_vtable=1, # the vref, but not XY() - new_array=0) # and neither next1/2/3 + self.check_resops(new_with_vtable=2, # the vref, but not XY() + new_array=0) # and neither next1/2/3 self.check_loop_count(1) self.check_aborted_count(0) @@ -362,7 +361,7 @@ # res = self.meta_interp(f, [30]) assert res == 13 - self.check_loops(new_with_vtable=0, # all virtualized in the n!=13 loop + self.check_resops(new_with_vtable=0, # all virtualized in the n!=13 loop new_array=0) self.check_loop_count(1) self.check_aborted_count(0) @@ -412,7 +411,7 @@ res = self.meta_interp(f, [72]) assert res == 6 self.check_loop_count(2) # the loop and the bridge - self.check_loops(new_with_vtable=2, # loop: nothing; bridge: vref, xy + self.check_resops(new_with_vtable=2, # loop: nothing; bridge: vref, xy new_array=2) # bridge: next4, next5 self.check_aborted_count(0) @@ -442,8 +441,8 @@ # res = self.meta_interp(f, [15]) assert res == 1 - self.check_loops(new_with_vtable=2, # vref, xy - new_array=1) # next1 + self.check_resops(new_with_vtable=4, # vref, xy + new_array=2) # next1 self.check_aborted_count(0) def test_recursive_call_1(self): @@ -543,7 +542,7 @@ # res = self.meta_interp(f, [15]) assert res == 1 - self.check_loops(new_with_vtable=2) # vref, xy + self.check_resops(new_with_vtable=4) # vref, xy def test_cannot_use_invalid_virtualref(self): myjitdriver = JitDriver(greens = [], reds = ['n']) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -103,12 +103,12 @@ # check that the set_param will override the default res = self.meta_interp(f, [10, llstr('')]) assert res == 0 - self.check_loops(new_with_vtable=1) + self.check_resops(new_with_vtable=1) res = self.meta_interp(f, [10, llstr(ALL_OPTS_NAMES)], enable_opts='') assert res == 0 - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_unwanted_loops(self): mydriver = JitDriver(reds = ['n', 'total', 'm'], greens = []) @@ -163,7 +163,7 @@ return n self.meta_interp(f, [50], backendopt=True) self.check_enter_count_at_most(2) - self.check_loops(call=0) + self.check_resops(call=0) def test_loop_header(self): # artificial test: we enter into the JIT only when can_enter_jit() @@ -187,7 +187,7 @@ assert f(15) == 1 res = self.meta_interp(f, [15], backendopt=True) assert res == 1 - self.check_loops(int_add=1) # I get 13 without the loop_header() + self.check_resops(int_add=2) # I get 13 without the loop_header() def test_omit_can_enter_jit(self): # Simple test comparing the effects of always giving a can_enter_jit(), @@ -249,8 +249,8 @@ m = m - 1 self.meta_interp(f1, [8]) self.check_loop_count(1) - self.check_loops({'int_sub': 1, 'int_gt': 1, 'guard_true': 1, - 'jump': 1}) + self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + 'int_sub': 2}) def test_void_red_variable(self): mydriver = JitDriver(greens=[], reds=['a', 'm']) From noreply at buildbot.pypy.org Wed Nov 9 20:34:49 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:49 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: merge messup? Message-ID: <20111109193449.34F778292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49050:91a9170e92e8 Date: 2011-11-08 18:13 +0100 http://bitbucket.org/pypy/pypy/changeset/91a9170e92e8/ Log: merge messup? diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2649,7 +2649,7 @@ self.check_jitcell_token_count(1) assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 - def test_retrace_ending_up_retrazing_another_loop(self): + def test_retrace_ending_up_retracing_another_loop(self): myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" @@ -2842,66 +2842,6 @@ assert res == -2 self.check_resops(setarrayitem_gc=2, getarrayitem_gc=1) - def test_retrace_ending_up_retracing_another_loop(self): - - myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) - bytecode = "0+sI0+SI" - def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) - pc = sa = i = 0 - while pc < len(bytecode): - myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) - n = hint(n, promote=True) - op = bytecode[pc] - if op == '0': - i = 0 - elif op == '+': - i += 1 - elif op == 's': - sa += i - elif op == 'S': - sa += 2 - elif op == 'I': - if i < n: - pc -= 2 - myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i) - continue - pc += 1 - return sa - - def g(n1, n2): - for i in range(10): - f(n1) - for i in range(10): - f(n2) - - nn = [10, 3] - assert self.meta_interp(g, nn) == g(*nn) - - # The attempts of retracing first loop will end up retracing the - # second and thus fail 5 times, saturating the retrace_count. Instead a - # bridge back to the preamble of the first loop is produced. A guard in - # this bridge is later traced resulting in a retrace of the second loop. - # Thus we end up with: - # 1 preamble and 1 specialized version of first loop - # 1 preamble and 2 specialized version of second loop - self.check_jitcell_token_count(2 + 3) - - # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. - - def g(n): - for i in range(n): - for j in range(10): - f(n-i) - - res = self.meta_interp(g, [10]) - assert res == g(10) - # 1 preamble and 6 speciealized versions of each loop - self.check_jitcell_token_count(2*(1 + 6)) - def test_continue_tracing_with_boxes_in_start_snapshot_replaced_by_optimizer(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'n', 'a', 'b']) def f(n): From noreply at buildbot.pypy.org Wed Nov 9 20:34:50 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:50 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fixed test Message-ID: <20111109193450.62ED88292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49051:08641d9e164d Date: 2011-11-08 18:32 +0100 http://bitbucket.org/pypy/pypy/changeset/08641d9e164d/ Log: fixed test diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2744,22 +2744,33 @@ res = self.meta_interp(f, [10, 7]) assert res == f(10, 7) - self.check_jitcell_token_count(4) + self.check_jitcell_token_count(2) + for cell in get_stats().jitcell_tokens: + assert len(cell.target_tokens) == 2 def g(n): return f(n, 2) + f(n, 3) res = self.meta_interp(g, [10]) assert res == g(10) - self.check_jitcell_token_count(6) - + self.check_jitcell_token_count(2) + for cell in get_stats().jitcell_tokens: + assert len(cell.target_tokens) <= 3 def g(n): return f(n, 2) + f(n, 3) + f(n, 4) + f(n, 5) + f(n, 6) + f(n, 7) res = self.meta_interp(g, [10]) assert res == g(10) - self.check_jitcell_token_count(8) + # 2 loops and one function + self.check_jitcell_token_count(3) + cnt = 0 + for cell in get_stats().jitcell_tokens: + if cell.target_tokens is None: + cnt += 1 + else: + assert len(cell.target_tokens) <= 4 + assert cnt == 1 def test_frame_finished_during_retrace(self): class Base(object): From noreply at buildbot.pypy.org Wed Nov 9 20:34:51 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:51 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: add test and comment Message-ID: <20111109193451.9BE1B8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49052:8163b98e813f Date: 2011-11-08 19:05 +0100 http://bitbucket.org/pypy/pypy/changeset/8163b98e813f/ Log: add test and comment diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -104,8 +104,8 @@ # ____________________________________________________________ def compile_loop(metainterp, greenkey, start, - inputargs, jumpargs, - start_resumedescr, full_preamble_needed=True): + inputargs, jumpargs, + start_resumedescr, full_preamble_needed=True): """Try to compile a new procedure by closing the current history back to the first operation. """ diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3621,4 +3621,26 @@ assert x == 999 def test_retracing_bridge_from_interpreter_to_finnish(self): - assert False # FIXME + myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa']) + def f(n): + sa = i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i, sa=sa) + n = hint(n, promote=True) + sa += 2*n + i += 1 + return sa + def g(n): + return f(n) + f(n) + f(n) + f(n) + f(10*n) + f(11*n) + res = self.meta_interp(g, [1], repeat=3) + assert res == g(1) + #self.check_jitcell_token_count(1) + self.check_jitcell_token_count(2) + # XXX A bridge from the interpreter to a finish is first + # constructed for n=1. It is later replaced with a trace for + # the case n=10 which is extended with a retrace for n=11 and + # finnaly a new bridge to finnish is again traced and created + # for the case n=1. We were not able to reuse the orignial n=1 + # bridge as a preamble since it does not start with a + # label. The alternative would be to have all such bridges + # start with labels. I dont know which is better... From noreply at buildbot.pypy.org Wed Nov 9 20:34:52 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:52 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: kill OptInlineShortPreamble Message-ID: <20111109193452.CE2A68292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49053:3dde4cbdcf1b Date: 2011-11-08 19:24 +0100 http://bitbucket.org/pypy/pypy/changeset/3dde4cbdcf1b/ Log: kill OptInlineShortPreamble diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -4,7 +4,7 @@ from pypy.jit.metainterp.optimizeopt.virtualize import OptVirtualize from pypy.jit.metainterp.optimizeopt.heap import OptHeap from pypy.jit.metainterp.optimizeopt.vstring import OptString -from pypy.jit.metainterp.optimizeopt.unroll import optimize_unroll, OptInlineShortPreamble +from pypy.jit.metainterp.optimizeopt.unroll import optimize_unroll from pypy.jit.metainterp.optimizeopt.fficall import OptFfiCall from pypy.jit.metainterp.optimizeopt.simplify import OptSimplify from pypy.jit.metainterp.optimizeopt.pure import OptPure diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -23,31 +23,31 @@ assert names == expected_names # metainterp_sd = FakeMetaInterpStaticData(None) - chain, _ = build_opt_chain(metainterp_sd, "", inline_short_preamble=False) + chain, _ = build_opt_chain(metainterp_sd, "") check(chain, ["OptSimplify"]) # chain, _ = build_opt_chain(metainterp_sd, "") - check(chain, ["OptInlineShortPreamble", "OptSimplify"]) + check(chain, ["OptSimplify"]) # chain, _ = build_opt_chain(metainterp_sd, "") - check(chain, ["OptInlineShortPreamble", "OptSimplify"]) + check(chain, ["OptSimplify"]) # chain, _ = build_opt_chain(metainterp_sd, "heap:intbounds") - check(chain, ["OptInlineShortPreamble", "OptIntBounds", "OptHeap", "OptSimplify"]) + check(chain, ["OptIntBounds", "OptHeap", "OptSimplify"]) # chain, unroll = build_opt_chain(metainterp_sd, "unroll") - check(chain, ["OptInlineShortPreamble", "OptSimplify"]) + check(chain, ["OptSimplify"]) assert unroll # - chain, _ = build_opt_chain(metainterp_sd, "aaa:bbb", inline_short_preamble=False) + chain, _ = build_opt_chain(metainterp_sd, "aaa:bbb") check(chain, ["OptSimplify"]) # - chain, _ = build_opt_chain(metainterp_sd, "ffi", inline_short_preamble=False) + chain, _ = build_opt_chain(metainterp_sd, "ffi") check(chain, ["OptFfiCall", "OptSimplify"]) # metainterp_sd.config = get_pypy_config(translating=True) assert not metainterp_sd.config.translation.jit_ffi - chain, _ = build_opt_chain(metainterp_sd, "ffi", inline_short_preamble=False) + chain, _ = build_opt_chain(metainterp_sd, "ffi") check(chain, ["OptSimplify"]) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -553,39 +553,6 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - - - -# FIXME: kill -class OptInlineShortPreamble(Optimization): - def __init__(self, retraced): - self.retraced = retraced - - def new(self): - return OptInlineShortPreamble(self.retraced) - - def propagate_forward(self, op): - ## # We should not be failing much anymore... - ## if not procedure_token.failed_states: - ## debug_print("Retracing (%d of %d)" % (retraced_count, - ## limit)) - ## raise RetraceLoop - ## for failed in loop_token.failed_states: - ## if failed.generalization_of(virtual_state): - ## # Retracing once more will most likely fail again - ## break - ## else: - ## debug_print("Retracing (%d of %d)" % (retraced_count, - ## limit)) - - ## raise RetraceLoop - ## else: - ## if not loop_token.failed_states: - ## loop_token.failed_states=[virtual_state] - ## else: - ## loop_token.failed_states.append(virtual_state) - self.emit_operation(op) - class ValueImporter(object): def __init__(self, unroll, value, op): From noreply at buildbot.pypy.org Wed Nov 9 20:34:54 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:54 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: started to fix tests (in progress) Message-ID: <20111109193454.0627C8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49054:f9dccf780ad3 Date: 2011-11-09 20:28 +0100 http://bitbucket.org/pypy/pypy/changeset/f9dccf780ad3/ Log: started to fix tests (in progress) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -30,7 +30,7 @@ assert f(10) == 55 * 10 res = self.meta_interp(f, [10]) assert res == 55 * 10 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, new=0) @@ -79,7 +79,7 @@ assert f(10) == 55 * 10 res = self.meta_interp(f, [10]) assert res == 55 * 10 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=3, new=0) @@ -97,7 +97,7 @@ return node.floatval res = self.meta_interp(f, [10]) assert res == f(10) - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new=0, float_add=1) def test_virtualized_float2(self): @@ -115,7 +115,7 @@ return node.floatval res = self.meta_interp(f, [10]) assert res == f(10) - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new=0, float_add=2) @@ -140,7 +140,7 @@ return node.value * node.extra res = self.meta_interp(f, [10]) assert res == 55 * 30 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, new=0) @@ -161,7 +161,7 @@ return node.value res = self.meta_interp(f, [500]) assert res == 640 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=1, new=0) @@ -185,7 +185,7 @@ return node.value res = self.meta_interp(f, [18]) assert res == f(18) - self.check_loop_count(2) + self.check_trace_count(2) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=2, new=0) @@ -214,7 +214,7 @@ return node.value res = self.meta_interp(f, [20], policy=StopAtXPolicy(externfn)) assert res == f(20) - self.check_loop_count(3) + self.check_trace_count(2) self.check_resops(**{self._new_op: 1}) self.check_resops(int_mul=0, call=1) @@ -391,7 +391,7 @@ fieldname = self._field_prefix + 'value' assert getattr(res, fieldname, -100) == f(21).value - self.check_tree_loop_count(2) # the loop and the entry path + self.check_jitcell_token_count(2) # the loop and the entry path # we get: # ENTER - compile the new loop and entry bridge # ENTER - compile the leaving path @@ -565,7 +565,7 @@ n -= 1 return node1.value + node2.value assert self.meta_interp(f, [40, 3]) == f(40, 3) - self.check_loop_count(6) + self.check_trace_count(6) def test_single_virtual_forced_in_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['n', 's', 'node']) @@ -612,10 +612,10 @@ return node.value res = self.meta_interp(f, [48, 3], policy=StopAtXPolicy(externfn)) assert res == f(48, 3) - self.check_loop_count(3) + self.check_trace_count(3) res = self.meta_interp(f, [40, 3], policy=StopAtXPolicy(externfn)) assert res == f(40, 3) - self.check_loop_count(3) + self.check_trace_count(3) def test_forced_virtual_assigned_different_class_in_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['n', 's', 'node', 'node2']) @@ -987,7 +987,7 @@ assert f(10) == 20 res = self.meta_interp(f, [10]) assert res == 20 - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(new_with_vtable=0, setfield_gc=0, getfield_gc=0, new=0) From noreply at buildbot.pypy.org Wed Nov 9 20:34:55 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 20:34:55 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge Message-ID: <20111109193455.4FE728292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49055:c47ea6944945 Date: 2011-11-09 20:31 +0100 http://bitbucket.org/pypy/pypy/changeset/c47ea6944945/ Log: hg merge diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -640,8 +640,14 @@ return _op_default_implementation def op_label(self, _, *args): - pass - + op = self.loop.operations[self.opindex] + assert op.opnum == rop.LABEL + assert len(op.args) == len(args) + newenv = {} + for v, value in zip(op.args, args): + newenv[v] = value + self.env = newenv + def op_debug_merge_point(self, _, *args): from pypy.jit.metainterp.warmspot import get_stats try: diff --git a/pypy/jit/backend/test/calling_convention_test.py b/pypy/jit/backend/test/calling_convention_test.py --- a/pypy/jit/backend/test/calling_convention_test.py +++ b/pypy/jit/backend/test/calling_convention_test.py @@ -2,7 +2,7 @@ AbstractDescr, BasicFailDescr, BoxInt, Box, BoxPtr, - LoopToken, + JitCellToken, ConstInt, ConstPtr, BoxObj, Const, ConstObj, BoxFloat, ConstFloat) @@ -107,7 +107,7 @@ ops += 'finish(f99, %s)\n' % arguments loop = parse(ops, namespace=locals()) - looptoken = LoopToken() + looptoken = JitCellToken() done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) expected_result = self._prepare_args(args, floats, ints) @@ -253,7 +253,7 @@ called_ops += 'finish(f%d, descr=fdescr3)\n' % total_index # compile called loop called_loop = parse(called_ops, namespace=locals()) - called_looptoken = LoopToken() + called_looptoken = JitCellToken() called_looptoken.outermost_jitdriver_sd = FakeJitDriverSD() done_number = self.cpu.get_fail_descr_number(called_loop.operations[-1].getdescr()) self.cpu.compile_loop(called_loop.inputargs, called_loop.operations, called_looptoken) @@ -284,7 +284,7 @@ # we want to take the fast path self.cpu.done_with_this_frame_float_v = done_number try: - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) # prepare call to called_loop diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3,7 +3,7 @@ AbstractDescr, BasicFailDescr, BoxInt, Box, BoxPtr, - LoopToken, TargetToken, + JitCellToken, TargetToken, ConstInt, ConstPtr, BoxObj, ConstObj, BoxFloat, ConstFloat) @@ -32,7 +32,7 @@ result_type, valueboxes, descr) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) j = 0 for box in inputargs: @@ -106,7 +106,7 @@ ResOperation(rop.FINISH, [i1], None, descr=BasicFailDescr(1)) ] inputargs = [i0] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -118,15 +118,17 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -139,18 +141,22 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + i3 = BoxInt() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.INT_SUB, [i3, ConstInt(42)], i0), + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] - inputargs = [i0] - operations[2].setfailargs([None, None, i1, None]) + inputargs = [i3] + operations[4].setfailargs([None, None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) - self.cpu.set_future_value_int(0, 2) + self.cpu.set_future_value_int(0, 44) fail = self.cpu.execute_token(looptoken) assert fail.identifier == 2 res = self.cpu.get_latest_value_int(2) @@ -162,15 +168,17 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=BasicFailDescr()), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) wr_i1 = weakref.ref(i1) wr_guard = weakref.ref(operations[2]) self.cpu.compile_loop(inputargs, operations, looptoken) @@ -190,15 +198,17 @@ i2 = BoxInt() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([i1]) + operations[3].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) i1b = BoxInt() @@ -206,7 +216,7 @@ bridge = [ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -226,17 +236,21 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() + i3 = BoxInt() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.INT_SUB, [i3, ConstInt(42)], i0), + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] - inputargs = [i0] - operations[2].setfailargs([None, i1, None]) + inputargs = [i3] + operations[4].setfailargs([None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) i1b = BoxInt() @@ -244,7 +258,7 @@ bridge = [ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -261,15 +275,17 @@ i1 = BoxInt() i2 = BoxInt() faildescr1 = BasicFailDescr(1) - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[2].setfailargs([None, i1, None]) + operations[3].setfailargs([None, i1, None]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -290,7 +306,7 @@ return AbstractFailDescr.__setattr__(self, name, value) py.test.fail("finish descrs should not be touched") faildescr = UntouchableFailDescr() # to check that is not touched - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [i0], None, descr=faildescr) ] @@ -301,7 +317,7 @@ res = self.cpu.get_latest_value_int(0) assert res == 99 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [ConstInt(42)], None, descr=faildescr) ] @@ -311,7 +327,7 @@ res = self.cpu.get_latest_value_int(0) assert res == 42 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [], None, descr=faildescr) ] @@ -320,7 +336,7 @@ assert fail is faildescr if self.cpu.supports_floats: - looptoken = LoopToken() + looptoken = JitCellToken() f0 = BoxFloat() operations = [ ResOperation(rop.FINISH, [f0], None, descr=faildescr) @@ -333,7 +349,7 @@ res = self.cpu.get_latest_value_float(0) assert longlong.getrealfloat(res) == -61.25 - looptoken = LoopToken() + looptoken = JitCellToken() operations = [ ResOperation(rop.FINISH, [constfloat(42.5)], None, descr=faildescr) ] @@ -350,14 +366,16 @@ z = BoxInt(579) t = BoxInt(455) u = BoxInt(0) # False - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [y, x], None, descr=targettoken), ResOperation(rop.INT_ADD, [x, y], z), ResOperation(rop.INT_SUB, [y, ConstInt(1)], t), ResOperation(rop.INT_EQ, [t, ConstInt(0)], u), ResOperation(rop.GUARD_FALSE, [u], None, descr=BasicFailDescr()), - ResOperation(rop.JUMP, [z, t], None, descr=looptoken), + ResOperation(rop.JUMP, [t, z], None, descr=targettoken), ] operations[-2].setfailargs([t, z]) cpu.compile_loop([x, y], operations, looptoken) @@ -419,7 +437,7 @@ ] ops[1].setfailargs([v_res]) # - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([v1, v2], ops, looptoken) for x, y, z in testcases: excvalue = self.cpu.grab_exc_value() @@ -1082,16 +1100,18 @@ inputargs.insert(index_counter, i0) jumpargs.insert(index_counter, i1) # - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() faildescr = BasicFailDescr(15) operations = [ + ResOperation(rop.LABEL, inputargs, None, descr=targettoken), ResOperation(rop.INT_SUB, [i0, ConstInt(1)], i1), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i2), ResOperation(rop.GUARD_TRUE, [i2], None), - ResOperation(rop.JUMP, jumpargs, None, descr=looptoken), + ResOperation(rop.JUMP, jumpargs, None, descr=targettoken), ] - operations[2].setfailargs(inputargs[:]) - operations[2].setdescr(faildescr) + operations[3].setfailargs(inputargs[:]) + operations[3].setdescr(faildescr) # self.cpu.compile_loop(inputargs, operations, looptoken) # @@ -1149,22 +1169,24 @@ py.test.skip("requires floats") fboxes = [BoxFloat() for i in range(12)] i2 = BoxInt() + targettoken = TargetToken() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) operations = [ + ResOperation(rop.LABEL, fboxes, None, descr=targettoken), ResOperation(rop.FLOAT_LE, [fboxes[0], constfloat(9.2)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), ResOperation(rop.FINISH, fboxes, None, descr=faildescr2), ] operations[-2].setfailargs(fboxes) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(fboxes, operations, looptoken) fboxes2 = [BoxFloat() for i in range(12)] f3 = BoxFloat() bridge = [ ResOperation(rop.FLOAT_SUB, [fboxes2[0], constfloat(1.0)], f3), - ResOperation(rop.JUMP, [f3] + fboxes2[1:], None, descr=looptoken), + ResOperation(rop.JUMP, [f3]+fboxes2[1:], None, descr=targettoken), ] self.cpu.compile_bridge(faildescr1, fboxes2, bridge, looptoken) @@ -1214,7 +1236,7 @@ ResOperation(rop.FINISH, [], None, descr=faildescr2), ] operations[-2].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # cpu = self.cpu @@ -1271,7 +1293,7 @@ ResOperation(rop.FINISH, [], None, descr=faildescr2), ] operations[-2].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # cpu = self.cpu @@ -1330,7 +1352,7 @@ faildescr = BasicFailDescr(1) operations.append(ResOperation(rop.FINISH, [], None, descr=faildescr)) - looptoken = LoopToken() + looptoken = JitCellToken() # self.cpu.compile_loop(inputargs, operations, looptoken) # @@ -1400,7 +1422,7 @@ ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(5))] operations[1].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() # Use "set" to unique-ify inputargs unique_testcase_list = list(set(testcase)) self.cpu.compile_loop(unique_testcase_list, operations, @@ -1675,15 +1697,16 @@ exc_tp = xtp exc_ptr = xptr loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 0 assert self.cpu.get_latest_value_ref(1) == xptr excvalue = self.cpu.grab_exc_value() assert not excvalue self.cpu.set_future_value_int(0, 0) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert not excvalue @@ -1700,9 +1723,10 @@ exc_tp = ytp exc_ptr = yptr loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert excvalue == yptr @@ -1718,14 +1742,15 @@ finish(0) ''' loop = parse(ops, self.cpu, namespace=locals()) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 1) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 1 excvalue = self.cpu.grab_exc_value() assert excvalue == xptr self.cpu.set_future_value_int(0, 0) - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) assert self.cpu.get_latest_value_int(0) == 0 excvalue = self.cpu.grab_exc_value() assert not excvalue @@ -1895,7 +1920,7 @@ ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -1940,7 +1965,7 @@ ResOperation(rop.FINISH, [i2], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, i2, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -1986,7 +2011,7 @@ ResOperation(rop.FINISH, [f2], None, descr=BasicFailDescr(0)) ] ops[2].setfailargs([i1, f2, i0]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, 20) self.cpu.set_future_value_int(1, 0) @@ -2031,7 +2056,7 @@ ResOperation(rop.FINISH, [i2], None, descr=BasicFailDescr(0)) ] ops[1].setfailargs([i1, i2]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1], ops, looptoken) self.cpu.set_future_value_int(0, ord('G')) fail = self.cpu.execute_token(looptoken) @@ -2091,7 +2116,7 @@ ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(0)) ] ops[1].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1, i2, i3], ops, looptoken) self.cpu.set_future_value_int(0, rffi.cast(lltype.Signed, raw)) self.cpu.set_future_value_int(1, 2) @@ -2147,7 +2172,7 @@ ops += [ ResOperation(rop.FINISH, [i3], None, descr=BasicFailDescr(0)) ] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1, i2], ops, looptoken) buffer = lltype.malloc(rffi.CCHARP.TO, buflen, flavor='raw') @@ -2169,7 +2194,7 @@ ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(0)) ] ops[0].setfailargs([i1]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i0, i1], ops, looptoken) self.cpu.set_future_value_int(0, -42) @@ -2415,7 +2440,7 @@ i18 = int_add(i17, i9) finish(i18)''' loop = parse(ops) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) ARGS = [lltype.Signed] * 10 @@ -2435,7 +2460,7 @@ finish(i11) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) for i in range(10): self.cpu.set_future_value_int(i, i+1) @@ -2471,7 +2496,7 @@ finish(f2)''' loop = parse(ops) done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) @@ -2486,7 +2511,7 @@ finish(f3) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) self.cpu.set_future_value_float(1, longlong.getfloatstorage(3.2)) @@ -2499,7 +2524,7 @@ del called[:] self.cpu.done_with_this_frame_float_v = done_number try: - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.2)) self.cpu.set_future_value_float(1, longlong.getfloatstorage(3.2)) @@ -2561,7 +2586,7 @@ f2 = float_add(f0, f1) finish(f2)''' loop = parse(ops) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_float(0, longlong.getfloatstorage(1.25)) @@ -2578,7 +2603,7 @@ finish(f3) ''' loop = parse(ops, namespace=locals()) - othertoken = LoopToken() + othertoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, othertoken) # normal call_assembler: goes to looptoken @@ -2596,7 +2621,7 @@ f2 = float_sub(f0, f1) finish(f2)''' loop = parse(ops) - looptoken2 = LoopToken() + looptoken2 = JitCellToken() looptoken2.outermost_jitdriver_sd = FakeJitDriverSD() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken2) @@ -2958,7 +2983,7 @@ ResOperation(rop.FINISH, [p0], None, descr=BasicFailDescr(1)) ] inputargs = [i0] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) # overflowing value: self.cpu.set_future_value_int(0, sys.maxint // 4 + 1) @@ -2970,21 +2995,23 @@ i1 = BoxInt() i2 = BoxInt() i3 = BoxInt() - looptoken = LoopToken() - targettoken = TargetToken(None) + looptoken = JitCellToken() + targettoken1 = TargetToken() + targettoken2 = TargetToken() faildescr = BasicFailDescr(2) operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken1), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr), - ResOperation(rop.LABEL, [i1], None, descr=targettoken), + ResOperation(rop.LABEL, [i1], None, descr=targettoken2), ResOperation(rop.INT_GE, [i1, ConstInt(0)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=BasicFailDescr(3)), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken1), ] inputargs = [i0] - operations[2].setfailargs([i1]) - operations[5].setfailargs([i1]) + operations[3].setfailargs([i1]) + operations[6].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) @@ -2996,7 +3023,7 @@ inputargs = [i0] operations = [ ResOperation(rop.INT_SUB, [i0, ConstInt(20)], i2), - ResOperation(rop.JUMP, [i2], None, descr=targettoken), + ResOperation(rop.JUMP, [i2], None, descr=targettoken2), ] self.cpu.compile_bridge(faildescr, inputargs, operations, looptoken) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -3,8 +3,8 @@ from pypy.rlib.rarithmetic import intmask, LONG_BIT from pypy.rpython.lltypesystem import llmemory from pypy.jit.metainterp.history import BasicFailDescr, TreeLoop -from pypy.jit.metainterp.history import BoxInt, ConstInt, LoopToken -from pypy.jit.metainterp.history import BoxPtr, ConstPtr +from pypy.jit.metainterp.history import BoxInt, ConstInt, JitCellToken +from pypy.jit.metainterp.history import BoxPtr, ConstPtr, TargetToken from pypy.jit.metainterp.history import BoxFloat, ConstFloat, Const from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.executor import execute_nonspec @@ -179,7 +179,7 @@ #print >>s, ' operations[%d].suboperations = [' % i #print >>s, ' ResOperation(rop.FAIL, [%s], None)]' % ( # ', '.join([names[v] for v in op.args])) - print >>s, ' looptoken = LoopToken()' + print >>s, ' looptoken = JitCellToken()' print >>s, ' cpu.compile_loop(inputargs, operations, looptoken)' if hasattr(self.loop, 'inputargs'): for i, v in enumerate(self.loop.inputargs): @@ -536,13 +536,15 @@ loop = TreeLoop('test_random_function') loop.inputargs = startvars[:] loop.operations = [] - loop.token = LoopToken() - + loop._jitcelltoken = JitCellToken() + loop._targettoken = TargetToken() + loop.operations.append(ResOperation(rop.LABEL, loop.inputargs, None, + loop._targettoken)) builder = builder_factory(cpu, loop, startvars[:]) self.generate_ops(builder, r, loop, startvars) self.builder = builder self.loop = loop - cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + cpu.compile_loop(loop.inputargs, loop.operations, loop._jitcelltoken) def generate_ops(self, builder, r, loop, startvars): block_length = pytest.config.option.block_length @@ -615,7 +617,7 @@ cpu.set_future_value_float(i, box.value) else: raise NotImplementedError(box) - fail = cpu.execute_token(self.loop.token) + fail = cpu.execute_token(self.loop._jitcelltoken) assert fail is self.should_fail_by.getdescr() for i, v in enumerate(self.get_fail_args()): if isinstance(v, (BoxFloat, ConstFloat)): @@ -684,23 +686,25 @@ rl = RandomLoop(self.builder.cpu, self.builder.fork, r, args) self.cpu.compile_loop(rl.loop.inputargs, rl.loop.operations, - rl.loop.token) + rl.loop._jitcelltoken) # done self.should_fail_by = rl.should_fail_by self.expected = rl.expected assert len(rl.loop.inputargs) == len(args) # The new bridge's execution will end normally at its FINISH. # Just replace the FINISH with the JUMP to the new loop. - jump_op = ResOperation(rop.JUMP, subset, None, descr=rl.loop.token) + jump_op = ResOperation(rop.JUMP, subset, None, + descr=rl.loop._targettoken) subloop.operations[-1] = jump_op self.guard_op = rl.guard_op self.prebuilt_ptr_consts += rl.prebuilt_ptr_consts - self.loop.token.record_jump_to(rl.loop.token) + self.loop._jitcelltoken.record_jump_to(rl.loop._jitcelltoken) self.dont_generate_more = True if r.random() < .05: return False self.builder.cpu.compile_bridge(fail_descr, fail_args, - subloop.operations, self.loop.token) + subloop.operations, + self.loop._jitcelltoken) return True def check_random_function(cpu, BuilderClass, r, num=None, max=None): diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -2,8 +2,8 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt -from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT, - LoopToken) +from pypy.jit.metainterp.history import AbstractFailDescr, INT, REF, FLOAT +from pypy.jit.metainterp.history import JitCellToken from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper @@ -424,8 +424,6 @@ _x86_loop_code (an integer giving an address) _x86_bootstrap_code (an integer giving an address) _x86_direct_bootstrap_code ( " " " " ) - _x86_frame_depth - _x86_param_depth _x86_arglocs _x86_debug_checksum ''' @@ -455,12 +453,11 @@ stackadjustpos = self._assemble_bootstrap_code(inputargs, arglocs) looppos = self.mc.get_relative_pos() looptoken._x86_loop_code = looppos - self.target_tokens_currently_compiling[looptoken] = None - looptoken._x86_frame_depth = -1 # temporarily - looptoken._x86_param_depth = -1 # temporarily + clt.frame_depth = -1 # temporarily + clt.param_depth = -1 # temporarily frame_depth, param_depth = self._assemble(regalloc, operations) - looptoken._x86_frame_depth = frame_depth - looptoken._x86_param_depth = param_depth + clt.frame_depth = frame_depth + clt.param_depth = param_depth directbootstrappos = self.mc.get_relative_pos() self._assemble_bootstrap_direct_call(arglocs, looppos, @@ -670,8 +667,8 @@ faildescr._x86_adr_jump_offset = 0 # means "patched" def fixup_target_tokens(self, rawstart): - for looptoken in self.target_tokens_currently_compiling: - looptoken._x86_loop_code += rawstart + for targettoken in self.target_tokens_currently_compiling: + targettoken._x86_loop_code += rawstart self.target_tokens_currently_compiling = None @specialize.argtype(1) @@ -703,8 +700,8 @@ param_depth = regalloc.param_depth jump_target_descr = regalloc.jump_target_descr if jump_target_descr is not None: - target_frame_depth = jump_target_descr._x86_frame_depth - target_param_depth = jump_target_descr._x86_param_depth + target_frame_depth = jump_target_descr._x86_clt.frame_depth + target_param_depth = jump_target_descr._x86_clt.param_depth frame_depth = max(frame_depth, target_frame_depth) param_depth = max(param_depth, target_param_depth) return frame_depth, param_depth @@ -2344,7 +2341,7 @@ fail_index = self.cpu.get_fail_descr_number(faildescr) self.mc.MOV_bi(FORCE_INDEX_OFS, fail_index) descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) assert len(arglocs) - 2 == len(descr._x86_arglocs[0]) # # Write a call to the direct_bootstrap_code of the target assembler @@ -2578,12 +2575,9 @@ gcrootmap.put(self.gcrootmap_retaddr_forced, mark) self.gcrootmap_retaddr_forced = -1 - def target_arglocs(self, loop_token): - return loop_token._x86_arglocs - - def closing_jump(self, loop_token): - target = loop_token._x86_loop_code - if loop_token in self.target_tokens_currently_compiling: + def closing_jump(self, target_token): + target = target_token._x86_loop_code + if target_token in self.target_tokens_currently_compiling: curpos = self.mc.get_relative_pos() + 5 self.mc.JMP_l(target - curpos) else: diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,8 +5,8 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, LoopToken, INT, REF, FLOAT, - TargetToken) + BoxFloat, INT, REF, FLOAT, + TargetToken, JitCellToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated @@ -884,7 +884,7 @@ def consider_call_assembler(self, op, guard_op): descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) jd = descr.outermost_jitdriver_sd assert jd is not None size = jd.portal_calldescr.get_result_size(self.translate_support_code) @@ -1314,8 +1314,8 @@ assembler = self.assembler assert self.jump_target_descr is None descr = op.getdescr() - assert isinstance(descr, (LoopToken, TargetToken)) # XXX refactor! - nonfloatlocs, floatlocs = assembler.target_arglocs(descr) + assert isinstance(descr, TargetToken) + nonfloatlocs, floatlocs = descr._x86_arglocs self.jump_target_descr = descr # compute 'tmploc' to be all_regs[0] by spilling what is there box = TempBox() @@ -1396,19 +1396,32 @@ inputargs = op.getarglist() floatlocs = [None] * len(inputargs) nonfloatlocs = [None] * len(inputargs) + # + # we need to make sure that the tmpreg and xmmtmp are free + tmpreg = X86RegisterManager.all_regs[0] + tmpvar = TempBox() + self.rm.force_allocate_reg(tmpvar, selected_reg=tmpreg) + self.rm.possibly_free_var(tmpvar) + # + xmmtmp = X86XMMRegisterManager.all_regs[0] + tmpvar = TempBox() + self.xrm.force_allocate_reg(tmpvar, selected_reg=xmmtmp) + self.xrm.possibly_free_var(tmpvar) + # for i in range(len(inputargs)): arg = inputargs[i] assert not isinstance(arg, Const) loc = self.loc(arg) + assert not (loc is tmpreg or loc is xmmtmp) if arg.type == FLOAT: floatlocs[i] = loc else: nonfloatlocs[i] = loc descr._x86_arglocs = nonfloatlocs, floatlocs descr._x86_loop_code = self.assembler.mc.get_relative_pos() - descr._x86_frame_depth = self.fm.frame_depth - descr._x86_param_depth = self.param_depth + descr._x86_clt = self.assembler.current_clt self.assembler.target_tokens_currently_compiling[descr] = None + self.possibly_free_vars_for_op(op) def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -215,14 +215,3 @@ super(CPU_X86_64, self).__init__(*args, **kwargs) CPU = CPU386 - -# silence warnings -##history.LoopToken._x86_param_depth = 0 -##history.LoopToken._x86_arglocs = (None, None) -##history.LoopToken._x86_frame_depth = 0 -##history.LoopToken._x86_bootstrap_code = 0 -##history.LoopToken._x86_direct_bootstrap_code = 0 -##history.LoopToken._x86_loop_code = 0 -##history.LoopToken._x86_debug_checksum = 0 -##compile.AbstractFailDescr._x86_current_depths = (0, 0) -##compile.AbstractFailDescr._x86_adr_jump_offset = 0 diff --git a/pypy/jit/backend/x86/test/test_regalloc.py b/pypy/jit/backend/x86/test/test_regalloc.py --- a/pypy/jit/backend/x86/test/test_regalloc.py +++ b/pypy/jit/backend/x86/test/test_regalloc.py @@ -4,7 +4,7 @@ import py from pypy.jit.metainterp.history import BoxInt, ConstInt,\ - BoxPtr, ConstPtr, LoopToken, BasicFailDescr + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.backend.llsupport.descr import GcCache from pypy.jit.backend.detect_cpu import getcpuclass @@ -96,6 +96,8 @@ raising_calldescr = cpu.calldescrof(FPTR.TO, FPTR.TO.ARGS, FPTR.TO.RESULT, EffectInfo.MOST_GENERAL) + targettoken = TargetToken() + targettoken2 = TargetToken() fdescr1 = BasicFailDescr(1) fdescr2 = BasicFailDescr(2) fdescr3 = BasicFailDescr(3) @@ -134,7 +136,8 @@ def interpret(self, ops, args, run=True): loop = self.parse(ops) - self.cpu.compile_loop(loop.inputargs, loop.operations, loop.token) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) for i, arg in enumerate(args): if isinstance(arg, int): self.cpu.set_future_value_int(i, arg) @@ -145,8 +148,9 @@ assert isinstance(lltype.typeOf(arg), lltype.Ptr) llgcref = lltype.cast_opaque_ptr(llmemory.GCREF, arg) self.cpu.set_future_value_ref(i, llgcref) + loop._jitcelltoken = looptoken if run: - self.cpu.execute_token(loop.token) + self.cpu.execute_token(looptoken) return loop def getint(self, index): @@ -167,10 +171,7 @@ gcref = self.cpu.get_latest_value_ref(index) return lltype.cast_opaque_ptr(T, gcref) - def attach_bridge(self, ops, loop, guard_op_index, looptoken=None, **kwds): - if looptoken is not None: - self.namespace = self.namespace.copy() - self.namespace['looptoken'] = looptoken + def attach_bridge(self, ops, loop, guard_op_index, **kwds): guard_op = loop.operations[guard_op_index] assert guard_op.is_guard() bridge = self.parse(ops, **kwds) @@ -178,20 +179,21 @@ [box.type for box in guard_op.getfailargs()]) faildescr = guard_op.getdescr() self.cpu.compile_bridge(faildescr, bridge.inputargs, bridge.operations, - loop.token) + loop._jitcelltoken) return bridge def run(self, loop): - return self.cpu.execute_token(loop.token) + return self.cpu.execute_token(loop._jitcelltoken) class TestRegallocSimple(BaseTestRegalloc): def test_simple_loop(self): ops = ''' [i0] + label(i0, descr=targettoken) i1 = int_add(i0, 1) i2 = int_lt(i1, 20) guard_true(i2) [i1] - jump(i1) + jump(i1, descr=targettoken) ''' self.interpret(ops, [0]) assert self.getint(0) == 20 @@ -199,27 +201,29 @@ def test_two_loops_and_a_bridge(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_add(i0, 1) i5 = int_lt(i4, 20) guard_true(i5) [i4, i1, i2, i3] - jump(i4, i1, i2, i3) + jump(i4, i1, i2, i3, descr=targettoken) ''' loop = self.interpret(ops, [0, 0, 0, 0]) ops2 = ''' [i5] + label(i5, descr=targettoken2) i1 = int_add(i5, 1) i3 = int_add(i1, 1) i4 = int_add(i3, 1) i2 = int_lt(i4, 30) guard_true(i2) [i4] - jump(i4) + jump(i4, descr=targettoken2) ''' loop2 = self.interpret(ops2, [0]) bridge_ops = ''' [i4] - jump(i4, i4, i4, i4, descr=looptoken) + jump(i4, i4, i4, i4, descr=targettoken) ''' - bridge = self.attach_bridge(bridge_ops, loop2, 4, looptoken=loop.token) + bridge = self.attach_bridge(bridge_ops, loop2, 5) self.cpu.set_future_value_int(0, 0) self.run(loop2) assert self.getint(0) == 31 @@ -230,10 +234,11 @@ def test_pointer_arg(self): ops = ''' [i0, p0] + label(i0, p0, descr=targettoken) i1 = int_add(i0, 1) i2 = int_lt(i1, 10) guard_true(i2) [p0] - jump(i1, p0) + jump(i1, p0, descr=targettoken) ''' S = lltype.GcStruct('S') ptr = lltype.malloc(S) @@ -311,10 +316,11 @@ def test_spill_for_constant(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_add(3, i1) i5 = int_lt(i4, 30) guard_true(i5) [i0, i4, i2, i3] - jump(1, i4, 3, 4) + jump(1, i4, 3, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1, 30, 3, 4] @@ -322,31 +328,34 @@ def test_spill_for_constant_lshift(self): ops = ''' [i0, i2, i1, i3] + label(i0, i2, i1, i3, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, 3, i5, 4) + jump(i4, 3, i5, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, i5, 3, 4) + jump(i4, i5, 3, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i3, i1, i2] + label(i0, i3, i1, i2, descr=targettoken) i4 = int_lshift(1, i1) i5 = int_add(1, i1) i6 = int_lt(i5, 30) guard_true(i6) [i4, i5, i2, i3] - jump(i4, 4, i5, 3) + jump(i4, 4, i5, 3, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) assert self.getints(4) == [1<<29, 30, 3, 4] @@ -354,11 +363,12 @@ def test_result_selected_reg_via_neg(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i6 = int_neg(i2) i7 = int_add(1, i1) i4 = int_lt(i7, 10) guard_true(i4) [i0, i6, i7] - jump(1, i7, i2, i6) + jump(1, i7, i2, i6, descr=targettoken) ''' self.interpret(ops, [0, 0, 3, 0]) assert self.getints(3) == [1, -3, 10] @@ -366,11 +376,12 @@ def test_compare_memory_result_survives(self): ops = ''' [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) i4 = int_lt(i0, i1) i5 = int_add(i3, 1) i6 = int_lt(i5, 30) guard_true(i6) [i4] - jump(i0, i1, i4, i5) + jump(i0, i1, i4, i5, descr=targettoken) ''' self.interpret(ops, [0, 10, 0, 0]) assert self.getint(0) == 1 @@ -378,10 +389,11 @@ def test_jump_different_args(self): ops = ''' [i0, i15, i16, i18, i1, i2, i3] + label(i0, i15, i16, i18, i1, i2, i3, descr=targettoken) i4 = int_add(i3, 1) i5 = int_lt(i4, 20) guard_true(i5) [i2, i1] - jump(i0, i18, i15, i16, i2, i1, i4) + jump(i0, i18, i15, i16, i2, i1, i4, descr=targettoken) ''' self.interpret(ops, [0, 1, 2, 3]) @@ -438,6 +450,7 @@ class TestRegallocMoreRegisters(BaseTestRegalloc): cpu = BaseTestRegalloc.cpu + targettoken = TargetToken() S = lltype.GcStruct('S', ('field', lltype.Char)) fielddescr = cpu.fielddescrof(S, 'field') @@ -510,6 +523,7 @@ def test_division_optimized(self): ops = ''' [i7, i6] + label(i7, i6, descr=targettoken) i18 = int_floordiv(i7, i6) i19 = int_xor(i7, i6) i21 = int_lt(i19, 0) @@ -517,7 +531,7 @@ i23 = int_is_true(i22) i24 = int_eq(i6, 4) guard_false(i24) [i18] - jump(i18, i6) + jump(i18, i6, descr=targettoken) ''' self.interpret(ops, [10, 4]) assert self.getint(0) == 2 @@ -588,7 +602,8 @@ ''' loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9, 9]) assert self.getints(11) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] - assert loop.token._x86_param_depth == self.expected_param_depth(1) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(1) def test_two_calls(self): ops = ''' @@ -599,7 +614,8 @@ ''' loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9, 9]) assert self.getints(11) == [5*7, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] - assert loop.token._x86_param_depth == self.expected_param_depth(2) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(2) def test_call_many_arguments(self): # NB: The first and last arguments in the call are constants. This @@ -612,7 +628,8 @@ ''' loop = self.interpret(ops, [2, 3, 4, 5, 6, 7, 8, 9]) assert self.getint(0) == 55 - assert loop.token._x86_param_depth == self.expected_param_depth(10) + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(10) def test_bridge_calls_1(self): ops = ''' diff --git a/pypy/jit/backend/x86/test/test_regalloc2.py b/pypy/jit/backend/x86/test/test_regalloc2.py --- a/pypy/jit/backend/x86/test/test_regalloc2.py +++ b/pypy/jit/backend/x86/test/test_regalloc2.py @@ -1,6 +1,6 @@ import py from pypy.jit.metainterp.history import ResOperation, BoxInt, ConstInt,\ - BoxPtr, ConstPtr, BasicFailDescr, LoopToken + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.x86.arch import WORD @@ -20,7 +20,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, 9) cpu.execute_token(looptoken) @@ -43,7 +43,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, -10) cpu.execute_token(looptoken) @@ -140,7 +140,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, -13) cpu.set_future_value_int(1, 10) @@ -255,7 +255,7 @@ ] cpu = CPU(None, None) cpu.setup_once() - looptoken = LoopToken() + looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) cpu.set_future_value_int(0, 17) cpu.set_future_value_int(1, -20) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -1,9 +1,10 @@ import py from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rstr, rclass from pypy.rpython.annlowlevel import llhelper -from pypy.jit.metainterp.history import ResOperation, LoopToken +from pypy.jit.metainterp.history import ResOperation, TargetToken, JitCellToken from pypy.jit.metainterp.history import (BoxInt, BoxPtr, ConstInt, ConstFloat, - ConstPtr, Box, BoxFloat, BasicFailDescr) + ConstPtr, Box, BoxFloat, + BasicFailDescr) from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.x86.arch import WORD from pypy.jit.backend.x86.rx86 import fits_in_32bits @@ -279,7 +280,7 @@ descr=BasicFailDescr()), ] ops[-2].setfailargs([i1]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([b], ops, looptoken) if op == rop.INT_IS_TRUE: self.cpu.set_future_value_int(0, b.value) @@ -329,7 +330,7 @@ ] ops[-2].setfailargs([i1]) inputargs = [i for i in (a, b) if isinstance(i, Box)] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, ops, looptoken) for i, box in enumerate(inputargs): self.cpu.set_future_value_int(i, box.value) @@ -353,9 +354,10 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() + targettoken = TargetToken() faildescr1 = BasicFailDescr(1) faildescr2 = BasicFailDescr(2) - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.number = 17 class FakeString(object): def __init__(self, val): @@ -365,14 +367,15 @@ return self.val operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.DEBUG_MERGE_POINT, [FakeString("hello"), 0], None), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] - operations[3].setfailargs([i1]) + operations[-2].setfailargs([i1]) self.cpu.compile_loop(inputargs, operations, looptoken) name, loopaddress, loopsize = agent.functions[0] assert name == "Loop # 17: hello (loop counter 0)" @@ -385,7 +388,7 @@ ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), ResOperation(rop.DEBUG_MERGE_POINT, [FakeString("bye"), 0], None), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), + ResOperation(rop.JUMP, [i1b], None, descr=targettoken), ] bridge[1].setfailargs([i1b]) @@ -408,11 +411,13 @@ i0 = BoxInt() i1 = BoxInt() i2 = BoxInt() - looptoken = LoopToken() + looptoken = JitCellToken() + targettoken = TargetToken() operations = [ + ResOperation(rop.LABEL, [i0], None, descr=targettoken), ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), + ResOperation(rop.JUMP, [i1], None, descr=targettoken), ] inputargs = [i0] debug._log = dlog = debug.DebugLog() @@ -496,7 +501,7 @@ ops[3].setfailargs([]) ops[5].setfailargs([]) ops[7].setfailargs([]) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop([i1, i2], ops, looptoken) self.cpu.set_future_value_int(0, 123450) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -723,9 +723,8 @@ # ____________________________________________________________ -# The TreeLoop class contains a loop or a generalized loop, i.e. a tree -# of operations. Each branch ends in a jump which can go either to -# the top of the same loop, or to another TreeLoop; or it ends in a FINISH. +# The JitCellToken class is the root of a tree of traces. Each branch ends +# in a jump which goes to a LABEL operation; or it ends in a FINISH. class JitCellToken(AbstractDescr): """Used for rop.JUMP, giving the target of the jump. @@ -766,7 +765,7 @@ self.compiled_loop_token.cpu.dump_loop_token(self) class TargetToken(AbstractDescr): - def __init__(self, targeting_jitcell_token): + def __init__(self, targeting_jitcell_token=None): # The jitcell to which jumps might result in a jump to this label self.targeting_jitcell_token = targeting_jitcell_token diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -241,9 +241,9 @@ if opnum == rop.FINISH: if descr is None and self.invent_fail_descr: descr = self.invent_fail_descr(self.model, fail_args) - elif opnum == rop.JUMP: - if descr is None and self.invent_fail_descr: - descr = self.celltoken +## elif opnum == rop.JUMP: +## if descr is None and self.invent_fail_descr: +## ... return opnum, args, descr, fail_args def create_op(self, opnum, args, result, descr): From noreply at buildbot.pypy.org Wed Nov 9 20:51:44 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 20:51:44 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed test_compute_hash Message-ID: <20111109195144.2E4D98292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49056:de489f7cb78d Date: 2011-11-09 20:50 +0100 http://bitbucket.org/pypy/pypy/changeset/de489f7cb78d/ Log: fixed test_compute_hash diff --git a/pypy/rpython/lltypesystem/opimpl.py b/pypy/rpython/lltypesystem/opimpl.py --- a/pypy/rpython/lltypesystem/opimpl.py +++ b/pypy/rpython/lltypesystem/opimpl.py @@ -232,8 +232,8 @@ # used in computing hashes if isinstance(x, AddressAsInt): x = llmemory.cast_adr_to_int(x.adr) if isinstance(y, AddressAsInt): y = llmemory.cast_adr_to_int(y.adr) - assert isinstance(x, int) - assert isinstance(y, int) + assert isinstance(x, (int, long)) + assert isinstance(y, (int, long)) return x ^ y def op_int_mul(x, y): From noreply at buildbot.pypy.org Wed Nov 9 20:54:00 2011 From: noreply at buildbot.pypy.org (hager) Date: Wed, 9 Nov 2011 20:54:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Added code for call to C functions again :( Message-ID: <20111109195400.4DF288292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49057:dcde3df53cd8 Date: 2011-11-09 20:53 +0100 http://bitbucket.org/pypy/pypy/changeset/dcde3df53cd8/ Log: Added code for call to C functions again :( diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -83,6 +83,7 @@ self.memcpy_addr = 0 self.fail_boxes_count = 0 self.current_clt = None + self._regalloc = None def _save_nonvolatiles(self): for i, reg in enumerate(NONVOLATILES): @@ -520,8 +521,10 @@ self.pending_guards = None self.current_clt = None self.mc = None + self._regalloc = None def _walk_operations(self, operations, regalloc): + self._regalloc = regalloc while regalloc.position() < len(operations) - 1: regalloc.next_instruction() pos = regalloc.position() @@ -529,6 +532,14 @@ opnum = op.getopnum() if op.has_no_side_effect() and op.result not in regalloc.longevity: regalloc.possibly_free_vars_for_op(op) + elif self.can_merge_with_next_guard(op, pos, operations)\ + # XXX fix this later on + and opnum == rop.CALL_RELEASE_GIL: + regalloc.next_instruction() + arglocs = regalloc.operations_with_guard[opnum](regalloc, op, + operations[pos+1]) + operations_with_guard[opnum](self, op, + operations[pos+1], arglocs, regalloc) else: arglocs = regalloc.operations[opnum](regalloc, op) if arglocs is not None: @@ -538,6 +549,30 @@ regalloc.possibly_free_vars_for_op(op) regalloc._check_invariants() + def can_merge_with_next_guard(self, op, i, operations): + if (op.getopnum() == rop.CALL_MAY_FORCE or + op.getopnum() == rop.CALL_ASSEMBLER or + op.getopnum() == rop.CALL_RELEASE_GIL): + assert operations[i + 1].getopnum() == rop.GUARD_NOT_FORCED + return True + if not op.is_comparison(): + if op.is_ovf(): + if (operations[i + 1].getopnum() != rop.GUARD_NO_OVERFLOW and + operations[i + 1].getopnum() != rop.GUARD_OVERFLOW): + not_implemented("int_xxx_ovf not followed by " + "guard_(no)_overflow") + return True + return False + if (operations[i + 1].getopnum() != rop.GUARD_TRUE and + operations[i + 1].getopnum() != rop.GUARD_FALSE): + return False + if operations[i + 1].getarg(0) is not op.result: + return False + if (self._regalloc.longevity[op.result][1] > i + 1 or + op.result in operations[i + 1].getfailargs()): + return False + return True + def gen_64_bit_func_descr(self, start_addr): mc = PPCBuilder() mc.write64(start_addr) @@ -711,20 +746,32 @@ assert gcrootmap.is_shadow_stack gcrootmap.write_callshape(mark, force_index) -def make_operations(): - def not_implemented(builder, trace_op, cpu, *rest_args): - raise NotImplementedError, trace_op +def notimplemented_op(self, op, arglocs, regalloc): + raise NotImplementedError, op - oplist = [None] * (rop._LAST + 1) - for key, val in rop.__dict__.items(): - if key.startswith("_"): - continue - opname = key.lower() - methname = "emit_%s" % opname - if hasattr(AssemblerPPC, methname): - oplist[val] = getattr(AssemblerPPC, methname).im_func - else: - oplist[val] = not_implemented - return oplist +def notimplemented_op_with_guard(self, op, guard_op, arglocs, regalloc): + raise NotImplementedError, op -AssemblerPPC.operations = make_operations() +operations = [notimplemented_op] * (rop._LAST + 1) +operations_with_guard = [notimplemented_op_with_guard] * (rop._LAST + 1) + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'emit_%s' % key + if hasattr(AssemblerPPC, methname): + func = getattr(AssemblerPPC, methname).im_func + operations[value] = func + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'emit_guard_%s' % key + if hasattr(AssemblerPPC, methname): + func = getattr(AssemblerPPC, methname).im_func + operations_with_guard[value] = func + +AssemblerPPC.operations = operations +AssemblerPPC.operations_with_guard = operations_with_guard From noreply at buildbot.pypy.org Wed Nov 9 21:02:57 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 9 Nov 2011 21:02:57 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: PPC64 guard compares Message-ID: <20111109200257.E85DB8292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49058:1e101fe11932 Date: 2011-11-09 15:02 -0500 http://bitbucket.org/pypy/pypy/changeset/1e101fe11932/ Log: PPC64 guard compares diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -175,7 +175,10 @@ def emit_guard_true(self, op, arglocs, regalloc): l0 = arglocs[0] failargs = arglocs[1:] - self.mc.cmpi(l0.value, 0) + if IS_PPC_32: + self.mc.cmpwi(l0.value, 0) + else: + self.mc.cmpdi(l0.value, 0) self._emit_guard(op, failargs, c.EQ) # # ^^^^ If this condition is met, # # then the guard fails. @@ -183,7 +186,10 @@ def emit_guard_false(self, op, arglocs, regalloc): l0 = arglocs[0] failargs = arglocs[1:] - self.mc.cmpi(l0.value, 0) + if IS_PPC_32: + self.mc.cmpwi(l0.value, 0) + else: + self.mc.cmpdi(l0.value, 0) self._emit_guard(op, failargs, c.NE) # TODO - Evaluate whether this can be done with @@ -210,9 +216,15 @@ if l0.is_reg(): if l1.is_imm(): - self.mc.cmpi(l0.value, l1.getint()) + if IS_PPC_32: + self.mc.cmpwi(l0.value, l1.getint()) + else: + self.mc.cmpdi(l0.value, l1.getint()) else: - self.mc.cmp(l0.value, l1.value) + if IS_PPC_32: + self.mc.cmpw(l0.value, l1.value) + else: + self.mc.cmpd(l0.value, l1.value) else: assert 0, "not implemented yet" self._emit_guard(op, failargs, c.NE) @@ -243,7 +255,10 @@ def emit_guard_nonnull_class(self, op, arglocs, regalloc): offset = self.cpu.vtable_offset - self.mc.cmpi(arglocs[0].value, 0) + if IS_PPC_32: + self.mc.cmpwi(arglocs[0].value, 0) + else: + self.mc.cmpdi(arglocs[0].value, 0) if offset is not None: self._emit_guard(op, arglocs[3:], c.EQ) else: @@ -576,7 +591,10 @@ def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): self.mc.mr(r.r0.value, r.SP.value) - self.mc.cmpi(r.r0.value, 0) + if IS_PPC_32: + self.mc.cmpwi(r.r0.value, 0) + else: + self.mc.cmpdi(r.r0.value, 0) self._emit_guard(guard_op, arglocs, c.EQ) emit_guard_call_release_gil = emit_guard_call_may_force From noreply at buildbot.pypy.org Wed Nov 9 21:23:27 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 21:23:27 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed rbigint Message-ID: <20111109202327.CCD968292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49059:8ac04a128037 Date: 2011-11-09 21:23 +0100 http://bitbucket.org/pypy/pypy/changeset/8ac04a128037/ Log: fixed rbigint diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -45,7 +45,7 @@ def _mask_digit(x): if not we_are_translated(): - assert type(x) is not long, "overflow occurred!" + assert is_valid_int(x>>1), "overflow occurred!" return intmask(x & MASK) _mask_digit._annspecialcase_ = 'specialize:argtype(0)' From noreply at buildbot.pypy.org Wed Nov 9 21:43:17 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Wed, 9 Nov 2011 21:43:17 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed test_rerased to show only the single error which it apparently had before ; -) Message-ID: <20111109204317.808D38292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49060:0bf72ed106e7 Date: 2011-11-09 21:43 +0100 http://bitbucket.org/pypy/pypy/changeset/0bf72ed106e7/ Log: fixed test_rerased to show only the single error which it apparently had before ;-) diff --git a/pypy/rlib/rerased.py b/pypy/rlib/rerased.py --- a/pypy/rlib/rerased.py +++ b/pypy/rlib/rerased.py @@ -28,7 +28,7 @@ def erase_int(x): - assert isinstance(x, int) + assert isinstance(x, (int, long)) res = 2 * x + 1 if res > sys.maxint or res < -sys.maxint - 1: raise OverflowError @@ -36,7 +36,7 @@ def unerase_int(y): assert y._identity is _identity_for_ints - assert isinstance(y._x, int) + assert isinstance(y._x, (int, long)) return y._x From noreply at buildbot.pypy.org Wed Nov 9 21:48:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 21:48:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111109204825.3A3FC8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49061:ae7f187cc6fc Date: 2011-11-09 20:55 +0100 http://bitbucket.org/pypy/pypy/changeset/ae7f187cc6fc/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -391,7 +391,7 @@ fieldname = self._field_prefix + 'value' assert getattr(res, fieldname, -100) == f(21).value - self.check_jitcell_token_count(2) # the loop and the entry path + self.check_jitcell_token_count(1) # the loop and the entry path # we get: # ENTER - compile the new loop and entry bridge # ENTER - compile the leaving path From noreply at buildbot.pypy.org Wed Nov 9 21:48:26 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 21:48:26 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: centralize target token counter Message-ID: <20111109204826.6B57B8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49062:915eabbf8e27 Date: 2011-11-09 21:00 +0100 http://bitbucket.org/pypy/pypy/changeset/915eabbf8e27/ Log: centralize target token counter diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -171,6 +171,10 @@ def check_jitcell_token_count(self, count): assert len(get_stats().jitcell_tokens) == count + def check_target_token_count(self, count): + n = sum([len(t.target_tokens) for t in get_stats().jitcell_tokens]) + assert n == count + def check_enter_count(self, count): assert get_stats().enter_count == count def check_enter_count_at_most(self, count): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2599,10 +2599,10 @@ return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 4 + self.check_target_token_count(4) assert self.meta_interp(f, [20, 3]) == f(20, 3) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 + self.check_target_token_count(5) def test_max_retrace_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) @@ -2620,10 +2620,10 @@ return sa assert self.meta_interp(f, [20, 1]) == f(20, 1) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 2 + self.check_target_token_count(2) assert self.meta_interp(f, [20, 10]) == f(20, 10) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 + self.check_target_token_count(5) def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', @@ -2644,10 +2644,10 @@ return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 4 + self.check_target_token_count(4) assert self.meta_interp(f, [20, 3]) == f(20, 3) self.check_jitcell_token_count(1) - assert len(list(get_stats().jitcell_tokens)[0].target_tokens) == 5 + self.check_target_token_count(5) def test_retrace_ending_up_retracing_another_loop(self): From noreply at buildbot.pypy.org Wed Nov 9 21:48:27 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 21:48:27 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111109204827.971908292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49063:8fc9c0d93a1c Date: 2011-11-09 21:07 +0100 http://bitbucket.org/pypy/pypy/changeset/8fc9c0d93a1c/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -565,7 +565,10 @@ n -= 1 return node1.value + node2.value assert self.meta_interp(f, [40, 3]) == f(40, 3) - self.check_trace_count(6) + # We get 4 versions of this loop: + # preamble (no virtuals), node1 virtual, node2 virtual, both virtual + self.check_target_token_count(4) + self.check_resops(new=0, new_with_vtable=0) def test_single_virtual_forced_in_bridge(self): myjitdriver = JitDriver(greens = [], reds = ['n', 's', 'node']) From noreply at buildbot.pypy.org Wed Nov 9 21:48:28 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 21:48:28 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: optimize_trace might be forced to insert sameas operations infront of the label Message-ID: <20111109204828.C33B38292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49064:aa824f7255e6 Date: 2011-11-09 21:15 +0100 http://bitbucket.org/pypy/pypy/changeset/aa824f7255e6/ Log: optimize_trace might be forced to insert sameas operations infront of the label diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -214,13 +214,13 @@ part.operations = [partial_trace.operations[-1]] + \ [h_ops[i].clone() for i in range(start, len(h_ops))] + \ [ResOperation(rop.JUMP, jumpargs, None, descr=loop_jitcell_token)] + label = part.operations[0] + assert label.getopnum() == rop.LABEL try: optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) except InvalidLoop: return None assert part.operations[-1].getopnum() != rop.LABEL - label = part.operations[0] - assert label.getopnum() == rop.LABEL target_token = label.getdescr() assert isinstance(target_token, TargetToken) assert loop_jitcell_token.target_tokens From noreply at buildbot.pypy.org Wed Nov 9 21:48:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 Nov 2011 21:48:29 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111109204829.EE3768292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49065:b2fbfc8c5fef Date: 2011-11-09 21:19 +0100 http://bitbucket.org/pypy/pypy/changeset/b2fbfc8c5fef/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_string.py b/pypy/jit/metainterp/test/test_string.py --- a/pypy/jit/metainterp/test/test_string.py +++ b/pypy/jit/metainterp/test/test_string.py @@ -499,7 +499,7 @@ sys.defaultencoding = _str('utf-8') return sa assert self.meta_interp(f, [8]) == f(8) - self.check_resops({'jump': 2, 'int_is_true': 2, 'int_add': 2, + self.check_resops({'jump': 1, 'int_is_true': 2, 'int_add': 2, 'guard_true': 2, 'guard_not_invalidated': 2, 'int_sub': 2}) @@ -590,7 +590,7 @@ # The "".join should be unrolled, since the length of x is known since # it is virtual, ensure there are no calls to ll_join_chars, or # allocations. - self.check_resops({'jump': 2, 'guard_true': 5, 'int_lt': 2, + self.check_resops({'jump': 1, 'guard_true': 5, 'int_lt': 2, 'int_add': 2, 'int_is_true': 3}) def test_virtual_copystringcontent(self): From noreply at buildbot.pypy.org Wed Nov 9 22:44:51 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 22:44:51 +0100 (CET) Subject: [pypy-commit] pypy default: Allow very basic multiple inheritance of app-level types in RPython. Thanks to amaury for the review/suggestions. Message-ID: <20111109214451.1163D8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49066:90b293e995c7 Date: 2011-11-09 16:44 -0500 http://bitbucket.org/pypy/pypy/changeset/90b293e995c7/ Log: Allow very basic multiple inheritance of app-level types in RPython. Thanks to amaury for the review/suggestions. diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = () + elif isinstance(__base, tuple): + bases = __base + else: + bases = (__base,) + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,15 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + for b1 in b.bases: + if issubtypedef(a1, b1): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +79,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} From noreply at buildbot.pypy.org Wed Nov 9 23:42:15 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 23:42:15 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for various ops on bools. Message-ID: <20111109224215.A02AF8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49067:6ccba9d4e2b8 Date: 2011-11-09 17:40 -0500 http://bitbucket.org/pypy/pypy/changeset/6ccba9d4e2b8/ Log: Fix for various ops on bools. diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -37,9 +37,8 @@ if a is b: return True for a1 in a.bases: - for b1 in b.bases: - if issubtypedef(a1, b1): - return True + if issubtypedef(a1, b1): + return True return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, From noreply at buildbot.pypy.org Wed Nov 9 23:42:16 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 23:42:16 +0100 (CET) Subject: [pypy-commit] pypy default: fix for a typo and cpyext Message-ID: <20111109224216.CB8948292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49068:1428f8ab1883 Date: 2011-11-09 17:42 -0500 http://bitbucket.org/pypy/pypy/changeset/1428f8ab1883/ Log: fix for a typo and cpyext diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -37,7 +37,7 @@ if a is b: return True for a1 in a.bases: - if issubtypedef(a1, b1): + if issubtypedef(a1, b): return True return False From noreply at buildbot.pypy.org Wed Nov 9 23:53:43 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 9 Nov 2011 23:53:43 +0100 (CET) Subject: [pypy-commit] pypy default: This needs to be a list ot be RPython Message-ID: <20111109225343.8989D8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49069:62bc56457861 Date: 2011-11-09 17:53 -0500 http://bitbucket.org/pypy/pypy/changeset/62bc56457861/ Log: This needs to be a list ot be RPython diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -16,11 +16,11 @@ "NOT_RPYTHON: initialization-time only" self.name = __name if __base is None: - bases = () + bases = [] elif isinstance(__base, tuple): - bases = __base + bases = list(__base) else: - bases = (__base,) + bases = [__base] self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict From noreply at buildbot.pypy.org Thu Nov 10 01:57:23 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 01:57:23 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix str() on bytes, reenable the -b cmd line opt Message-ID: <20111110005723.831BB8292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49070:c1972bf3e125 Date: 2011-11-09 16:54 -0800 http://bitbucket.org/pypy/pypy/changeset/c1972bf3e125/ Log: fix str() on bytes, reenable the -b cmd line opt diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -870,9 +870,9 @@ return space.wrap(len(w_str._value)) def str__String(space, w_str): - if type(w_str) is W_StringObject: - return w_str - return wrapstr(space, w_str._value) + if space.sys.get_flag('bytes_warning'): + space.warn("str() on a bytes instance", space.w_BytesWarning) + return repr__String(space, w_str) def ord__String(space, w_str): u_str = w_str._value diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -618,23 +618,24 @@ assert l == [52, 50] def test_repr(self): - assert repr(b"") =="b''" - assert repr(b"a") =="b'a'" - assert repr(b"'") =='b"\'"' - assert repr(b"\'") =="b\"\'\"" - assert repr(b"\"") =='b\'"\'' - assert repr(b"\t") =="b'\\t'" - assert repr(b"\\") =="b'\\\\'" - assert repr(b'') =="b''" - assert repr(b'a') =="b'a'" - assert repr(b'"') =="b'\"'" - assert repr(b'\'') =='b"\'"' - assert repr(b'\"') =="b'\"'" - assert repr(b'\t') =="b'\\t'" - assert repr(b'\\') =="b'\\\\'" - assert repr(b"'''\"") =='b\'\\\'\\\'\\\'"\'' - assert repr(b"\x13") =="b'\\x13'" - assert repr(b"\x02") =="b'\\x02'" + for f in str, repr: + assert f(b"") =="b''" + assert f(b"a") =="b'a'" + assert f(b"'") =='b"\'"' + assert f(b"\'") =="b\"\'\"" + assert f(b"\"") =='b\'"\'' + assert f(b"\t") =="b'\\t'" + assert f(b"\\") =="b'\\\\'" + assert f(b'') =="b''" + assert f(b'a') =="b'a'" + assert f(b'"') =="b'\"'" + assert f(b'\'') =='b"\'"' + assert f(b'\"') =="b'\"'" + assert f(b'\t') =="b'\\t'" + assert f(b'\\') =="b'\\\\'" + assert f(b"'''\"") =='b\'\\\'\\\'\\\'"\'' + assert f(b"\x13") =="b'\\x13'" + assert f(b"\x02") =="b'\\x02'" def test_contains(self): assert b'' in b'abc' diff --git a/pypy/objspace/std/unicodetype.py b/pypy/objspace/std/unicodetype.py --- a/pypy/objspace/std/unicodetype.py +++ b/pypy/objspace/std/unicodetype.py @@ -280,24 +280,17 @@ def unicode_from_object(space, w_obj): if space.is_w(space.type(w_obj), space.w_unicode): return w_obj - elif space.is_w(space.type(w_obj), space.w_str): - w_res = w_obj - else: - w_unicode_method = space.lookup(w_obj, "__unicode__") - # obscure workaround: for the next two lines see - # test_unicode_conversion_with__str__ - if w_unicode_method is None: - if space.isinstance_w(w_obj, space.w_unicode): - return space.wrap(space.unicode_w(w_obj)) - w_unicode_method = space.lookup(w_obj, "__str__") - if w_unicode_method is not None: - w_res = space.get_and_call_function(w_unicode_method, w_obj) - else: - w_res = space.str(w_obj) - if space.isinstance_w(w_res, space.w_unicode): - return w_res - return unicode_from_encoded_object(space, w_res, None, "strict") + w_unicode_method = space.lookup(w_obj, "__str__") + if w_unicode_method is None: + return space.repr(w_obj) + + w_res = space.get_and_call_function(w_unicode_method, w_obj) + if not space.isinstance_w(w_res, space.w_unicode): + typename = space.type(w_res).getname(space) + msg = "__str__ returned non-string (type %.200s)" % typename + raise OperationError(space.w_TypeError, space.wrap(msg)) + return w_res def descr_new_(space, w_unicodetype, w_string='', w_encoding=None, w_errors=None): # NB. the default value of w_obj is really a *wrapped* empty string: diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -392,6 +392,7 @@ 'v': (simple_option, 'verbose'), 'U': (simple_option, 'unicode'), 'u': (simple_option, 'unbuffered'), + 'b': (simple_option, 'bytes_warning'), # more complex options 'Q': (div_option, Ellipsis), 'c': (c_option, Ellipsis), @@ -411,7 +412,6 @@ '3': (simple_option, 'py3k_warning'), 'B': (simple_option, 'dont_write_bytecode'), 's': (simple_option, 'no_user_site'), - 'b': (simple_option, 'bytes_warning'), }) From noreply at buildbot.pypy.org Thu Nov 10 02:53:08 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 02:53:08 +0100 (CET) Subject: [pypy-commit] pypy py3k: pass through source as bytes to the compiler Message-ID: <20111110015308.3ADFA8292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49071:bc99fcedc6e4 Date: 2011-11-09 17:51 -0800 http://bitbucket.org/pypy/pypy/changeset/bc99fcedc6e4/ Log: pass through source as bytes to the compiler diff --git a/pypy/interpreter/main.py b/pypy/interpreter/main.py --- a/pypy/interpreter/main.py +++ b/pypy/interpreter/main.py @@ -18,7 +18,7 @@ def compilecode(space, source, filename, cmd='exec'): w = space.wrap w_code = space.builtin.call('compile', - w(source), w(filename), w(cmd), w(0), w(0)) + space.wrapbytes(source), w(filename), w(cmd), w(0), w(0)) pycode = space.interp_w(eval.Code, w_code) return pycode diff --git a/pypy/interpreter/test/test_main.py b/pypy/interpreter/test/test_main.py --- a/pypy/interpreter/test/test_main.py +++ b/pypy/interpreter/test/test_main.py @@ -1,3 +1,4 @@ +# coding: utf-8 from cStringIO import StringIO import py @@ -77,3 +78,29 @@ testmodule, ['hello world']) checkoutput(self.space, testresultoutput, main.run_module, testpackage + '.' + testmodule, ['hello world']) + + +class TestMainPEP3120: + def setup_class(cls): + # Encoding the string here to ease writing to the captured + # stdout's underlying Python 2 (not 3) file! + testfn.write("""print('日本'.encode('utf-8'), end='')""", 'wb') + space = cls.space + cls.w_oldsyspath = space.appexec([space.wrap(str(udir))], """(udir): + import sys + old = sys.path[:] + sys.path.insert(0, udir) + return old + """) + + def teardown_class(cls): + cls.space.appexec([cls.w_oldsyspath], """(old): + import sys + sys.path[:] = old + """) + + def test_pep3120(self): + # Ensure '日本' written to the file above is interpreted as utf-8 + # per PEP3120 + checkoutput(self.space, 'b' + repr(u'日本'.encode('utf-8')), + main.run_file, str(testfn)) From noreply at buildbot.pypy.org Thu Nov 10 03:00:02 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 03:00:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: builtins.ascii Message-ID: <20111110020002.4D0C48292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49072:e0b22cef21b6 Date: 2011-11-09 17:53 -0800 http://bitbucket.org/pypy/pypy/changeset/e0b22cef21b6/ Log: builtins.ascii diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -50,6 +50,7 @@ # interp-level function definitions 'abs' : 'operation.abs', + 'ascii' : 'operation.ascii', 'chr' : 'operation.chr', 'len' : 'operation.len', 'ord' : 'operation.ord', diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -15,6 +15,17 @@ "abs(number) -> number\n\nReturn the absolute value of the argument." return space.abs(w_val) +def ascii(space, w_obj): + """"ascii(object) -> string + + As repr(), return a string containing a printable representation of an + object, but escape the non-ASCII characters in the string returned by + repr() using \\x, \\u or \\U escapes. This generates a string similar + to that returned by repr() in Python 2.""" + # repr is guaranteed to be unicode + repr = space.unwrap(space.repr(w_obj)) + return space.wrap(repr.encode('ascii', 'backslashreplace').decode('ascii')) + @unwrap_spec(code=int) def chr(space, code): "Return a Unicode string of one character with the given ordinal." diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -1,3 +1,4 @@ +# coding: utf-8 import autopath import sys from pypy import conftest @@ -37,6 +38,41 @@ raises(ImportError, __import__, 'spamspam') raises(TypeError, __import__, 1, 2, 3, 4) + def test_ascii(self): + assert ascii('') == '\'\'' + assert ascii(0) == '0' + assert ascii(()) == '()' + assert ascii([]) == '[]' + assert ascii({}) == '{}' + a = [] + a.append(a) + assert ascii(a) == '[[...]]' + a = {} + a[0] = a + assert ascii(a) == '{0: {...}}' + # Advanced checks for unicode strings + def _check_uni(s): + assert ascii(s) == repr(s) + _check_uni("'") + _check_uni('"') + _check_uni('"\'') + _check_uni('\0') + _check_uni('\r\n\t .') + # Unprintable non-ASCII characters + _check_uni('\x85') + _check_uni('\u1fff') + _check_uni('\U00012fff') + # Lone surrogates + _check_uni('\ud800') + _check_uni('\udfff') + # Issue #9804: surrogates should be joined even for printable + # wide characters (UCS-2 builds). + assert ascii('\U0001d121') == "'\\U0001d121'" + # All together + s = "'\0\"\n\r\t abcd\x85é\U00012fff\uD800\U0001D121xxx." + assert ascii(s) == \ + r"""'\'\x00"\n\r\t abcd\x85\xe9\U00012fff\ud800\U0001d121xxx.'""" + def test_bin(self): assert bin(0) == "0b0" assert bin(-1) == "-0b1" From noreply at buildbot.pypy.org Thu Nov 10 03:21:46 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 03:21:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: reapply our sysconfig modifications to 3.2 Message-ID: <20111110022146.245CC8292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49073:d1ca09d10665 Date: 2011-11-09 18:20 -0800 http://bitbucket.org/pypy/pypy/changeset/d1ca09d10665/ Log: reapply our sysconfig modifications to 3.2 diff --git a/lib-python/modified-3.2/sysconfig.py b/lib-python/modified-3.2/sysconfig.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-3.2/sysconfig.py @@ -0,0 +1,625 @@ +"""Provide access to Python's configuration information. + +""" +import sys +import os +from os.path import pardir, realpath + +__all__ = [ + 'get_config_h_filename', + 'get_config_var', + 'get_config_vars', + 'get_makefile_filename', + 'get_path', + 'get_path_names', + 'get_paths', + 'get_platform', + 'get_python_version', + 'get_scheme_names', + 'parse_config_h', + ] + +_INSTALL_SCHEMES = { + 'posix_prefix': { + 'stdlib': '{base}/lib/python{py_version_short}', + 'platstdlib': '{platbase}/lib/python{py_version_short}', + 'purelib': '{base}/lib/python{py_version_short}/site-packages', + 'platlib': '{platbase}/lib/python{py_version_short}/site-packages', + 'include': + '{base}/include/python{py_version_short}{abiflags}', + 'platinclude': + '{platbase}/include/python{py_version_short}{abiflags}', + 'scripts': '{base}/bin', + 'data': '{base}', + }, + 'posix_home': { + 'stdlib': '{base}/lib/python', + 'platstdlib': '{base}/lib/python', + 'purelib': '{base}/lib/python', + 'platlib': '{base}/lib/python', + 'include': '{base}/include/python', + 'platinclude': '{base}/include/python', + 'scripts': '{base}/bin', + 'data' : '{base}', + }, + 'pypy': { + 'stdlib': '{base}/lib-python', + 'platstdlib': '{base}/lib-python', + 'purelib': '{base}/lib-python', + 'platlib': '{base}/lib-python', + 'include': '{base}/include', + 'platinclude': '{base}/include', + 'scripts': '{base}/bin', + 'data' : '{base}', + }, + 'nt': { + 'stdlib': '{base}/Lib', + 'platstdlib': '{base}/Lib', + 'purelib': '{base}/Lib/site-packages', + 'platlib': '{base}/Lib/site-packages', + 'include': '{base}/Include', + 'platinclude': '{base}/Include', + 'scripts': '{base}/Scripts', + 'data' : '{base}', + }, + 'os2': { + 'stdlib': '{base}/Lib', + 'platstdlib': '{base}/Lib', + 'purelib': '{base}/Lib/site-packages', + 'platlib': '{base}/Lib/site-packages', + 'include': '{base}/Include', + 'platinclude': '{base}/Include', + 'scripts': '{base}/Scripts', + 'data' : '{base}', + }, + 'os2_home': { + 'stdlib': '{userbase}/lib/python{py_version_short}', + 'platstdlib': '{userbase}/lib/python{py_version_short}', + 'purelib': '{userbase}/lib/python{py_version_short}/site-packages', + 'platlib': '{userbase}/lib/python{py_version_short}/site-packages', + 'include': '{userbase}/include/python{py_version_short}', + 'scripts': '{userbase}/bin', + 'data' : '{userbase}', + }, + 'nt_user': { + 'stdlib': '{userbase}/Python{py_version_nodot}', + 'platstdlib': '{userbase}/Python{py_version_nodot}', + 'purelib': '{userbase}/Python{py_version_nodot}/site-packages', + 'platlib': '{userbase}/Python{py_version_nodot}/site-packages', + 'include': '{userbase}/Python{py_version_nodot}/Include', + 'scripts': '{userbase}/Scripts', + 'data' : '{userbase}', + }, + 'posix_user': { + 'stdlib': '{userbase}/lib/python{py_version_short}', + 'platstdlib': '{userbase}/lib/python{py_version_short}', + 'purelib': '{userbase}/lib/python{py_version_short}/site-packages', + 'platlib': '{userbase}/lib/python{py_version_short}/site-packages', + 'include': '{userbase}/include/python{py_version_short}', + 'scripts': '{userbase}/bin', + 'data' : '{userbase}', + }, + 'osx_framework_user': { + 'stdlib': '{userbase}/lib/python', + 'platstdlib': '{userbase}/lib/python', + 'purelib': '{userbase}/lib/python/site-packages', + 'platlib': '{userbase}/lib/python/site-packages', + 'include': '{userbase}/include', + 'scripts': '{userbase}/bin', + 'data' : '{userbase}', + }, + } + +_SCHEME_KEYS = ('stdlib', 'platstdlib', 'purelib', 'platlib', 'include', + 'scripts', 'data') +_PY_VERSION = sys.version.split()[0] +_PY_VERSION_SHORT = sys.version[:3] +_PY_VERSION_SHORT_NO_DOT = _PY_VERSION[0] + _PY_VERSION[2] +_PREFIX = os.path.normpath(sys.prefix) +_EXEC_PREFIX = os.path.normpath(sys.exec_prefix) +_CONFIG_VARS = None +_USER_BASE = None + +def _safe_realpath(path): + try: + return realpath(path) + except OSError: + return path + +if sys.executable: + _PROJECT_BASE = os.path.dirname(_safe_realpath(sys.executable)) +else: + # sys.executable can be empty if argv[0] has been changed and Python is + # unable to retrieve the real program name + _PROJECT_BASE = _safe_realpath(os.getcwd()) + +if os.name == "nt" and "pcbuild" in _PROJECT_BASE[-8:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir)) +# PC/VS7.1 +if os.name == "nt" and "\\pc\\v" in _PROJECT_BASE[-10:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) +# PC/AMD64 +if os.name == "nt" and "\\pcbuild\\amd64" in _PROJECT_BASE[-14:].lower(): + _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir)) + +def is_python_build(): + for fn in ("Setup.dist", "Setup.local"): + if os.path.isfile(os.path.join(_PROJECT_BASE, "Modules", fn)): + return True + return False + +_PYTHON_BUILD = is_python_build() + +if _PYTHON_BUILD: + for scheme in ('posix_prefix', 'posix_home'): + _INSTALL_SCHEMES[scheme]['include'] = '{srcdir}/Include' + _INSTALL_SCHEMES[scheme]['platinclude'] = '{projectbase}/.' + +def _subst_vars(s, local_vars): + try: + return s.format(**local_vars) + except KeyError: + try: + return s.format(**os.environ) + except KeyError as var: + raise AttributeError('{%s}' % var) + +def _extend_dict(target_dict, other_dict): + target_keys = target_dict.keys() + for key, value in other_dict.items(): + if key in target_keys: + continue + target_dict[key] = value + +def _expand_vars(scheme, vars): + res = {} + if vars is None: + vars = {} + _extend_dict(vars, get_config_vars()) + + for key, value in _INSTALL_SCHEMES[scheme].items(): + if os.name in ('posix', 'nt'): + value = os.path.expanduser(value) + res[key] = os.path.normpath(_subst_vars(value, vars)) + return res + +def _get_default_scheme(): + if '__pypy__' in sys.builtin_module_names: + return 'pypy' + elif os.name == 'posix': + # the default scheme for posix is posix_prefix + return 'posix_prefix' + return os.name + +def _getuserbase(): + env_base = os.environ.get("PYTHONUSERBASE", None) + def joinuser(*args): + return os.path.expanduser(os.path.join(*args)) + + # what about 'os2emx', 'riscos' ? + if os.name == "nt": + base = os.environ.get("APPDATA") or "~" + return env_base if env_base else joinuser(base, "Python") + + if sys.platform == "darwin": + framework = get_config_var("PYTHONFRAMEWORK") + if framework: + return env_base if env_base else joinuser("~", "Library", framework, "%d.%d"%( + sys.version_info[:2])) + + return env_base if env_base else joinuser("~", ".local") + + +def _init_posix(vars): + """Initialize the module as appropriate for POSIX systems.""" + return + +def _init_non_posix(vars): + """Initialize the module as appropriate for NT""" + # set basic install directories + vars['LIBDEST'] = get_path('stdlib') + vars['BINLIBDEST'] = get_path('platstdlib') + vars['INCLUDEPY'] = get_path('include') + vars['SO'] = '.pyd' + vars['EXE'] = '.exe' + vars['VERSION'] = _PY_VERSION_SHORT_NO_DOT + vars['BINDIR'] = os.path.dirname(_safe_realpath(sys.executable)) + +# +# public APIs +# + + +def parse_config_h(fp, vars=None): + """Parse a config.h-style file. + + A dictionary containing name/value pairs is returned. If an + optional dictionary is passed in as the second argument, it is + used instead of a new dictionary. + """ + import re + if vars is None: + vars = {} + define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") + undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") + + while True: + line = fp.readline() + if not line: + break + m = define_rx.match(line) + if m: + n, v = m.group(1, 2) + try: v = int(v) + except ValueError: pass + vars[n] = v + else: + m = undef_rx.match(line) + if m: + vars[m.group(1)] = 0 + return vars + +def get_config_h_filename(): + """Return the path of pyconfig.h.""" + if _PYTHON_BUILD: + if os.name == "nt": + inc_dir = os.path.join(_PROJECT_BASE, "PC") + else: + inc_dir = _PROJECT_BASE + else: + inc_dir = get_path('platinclude') + return os.path.join(inc_dir, 'pyconfig.h') + +def get_scheme_names(): + """Return a tuple containing the schemes names.""" + schemes = list(_INSTALL_SCHEMES.keys()) + schemes.sort() + return tuple(schemes) + +def get_path_names(): + """Return a tuple containing the paths names.""" + return _SCHEME_KEYS + +def get_paths(scheme=_get_default_scheme(), vars=None, expand=True): + """Return a mapping containing an install scheme. + + ``scheme`` is the install scheme name. If not provided, it will + return the default scheme for the current platform. + """ + if expand: + return _expand_vars(scheme, vars) + else: + return _INSTALL_SCHEMES[scheme] + +def get_path(name, scheme=_get_default_scheme(), vars=None, expand=True): + """Return a path corresponding to the scheme. + + ``scheme`` is the install scheme name. + """ + return get_paths(scheme, vars, expand)[name] + +def get_config_vars(*args): + """With no arguments, return a dictionary of all configuration + variables relevant for the current platform. + + On Unix, this means every variable defined in Python's installed Makefile; + On Windows and Mac OS it's a much smaller set. + + With arguments, return a list of values that result from looking up + each argument in the configuration variable dictionary. + """ + import re + global _CONFIG_VARS + if _CONFIG_VARS is None: + _CONFIG_VARS = {} + # Normalized versions of prefix and exec_prefix are handy to have; + # in fact, these are the standard versions used most places in the + # Distutils. + _CONFIG_VARS['prefix'] = _PREFIX + _CONFIG_VARS['exec_prefix'] = _EXEC_PREFIX + _CONFIG_VARS['py_version'] = _PY_VERSION + _CONFIG_VARS['py_version_short'] = _PY_VERSION_SHORT + _CONFIG_VARS['py_version_nodot'] = _PY_VERSION[0] + _PY_VERSION[2] + _CONFIG_VARS['base'] = _PREFIX + _CONFIG_VARS['platbase'] = _EXEC_PREFIX + _CONFIG_VARS['projectbase'] = _PROJECT_BASE + try: + _CONFIG_VARS['abiflags'] = sys.abiflags + except AttributeError: + # sys.abiflags may not be defined on all platforms. + _CONFIG_VARS['abiflags'] = '' + + if os.name in ('nt', 'os2'): + _init_non_posix(_CONFIG_VARS) + if os.name == 'posix': + _init_posix(_CONFIG_VARS) + # Setting 'userbase' is done below the call to the + # init function to enable using 'get_config_var' in + # the init-function. + _CONFIG_VARS['userbase'] = _getuserbase() + + if 'srcdir' not in _CONFIG_VARS: + _CONFIG_VARS['srcdir'] = _PROJECT_BASE + else: + _CONFIG_VARS['srcdir'] = _safe_realpath(_CONFIG_VARS['srcdir']) + + + # Convert srcdir into an absolute path if it appears necessary. + # Normally it is relative to the build directory. However, during + # testing, for example, we might be running a non-installed python + # from a different directory. + if _PYTHON_BUILD and os.name == "posix": + base = _PROJECT_BASE + try: + cwd = os.getcwd() + except OSError: + cwd = None + if (not os.path.isabs(_CONFIG_VARS['srcdir']) and + base != cwd): + # srcdir is relative and we are not in the same directory + # as the executable. Assume executable is in the build + # directory and make srcdir absolute. + srcdir = os.path.join(base, _CONFIG_VARS['srcdir']) + _CONFIG_VARS['srcdir'] = os.path.normpath(srcdir) + + if sys.platform == 'darwin': + kernel_version = os.uname()[2] # Kernel version (8.4.3) + major_version = int(kernel_version.split('.')[0]) + + if major_version < 8: + # On Mac OS X before 10.4, check if -arch and -isysroot + # are in CFLAGS or LDFLAGS and remove them if they are. + # This is needed when building extensions on a 10.3 system + # using a universal build of python. + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + flags = _CONFIG_VARS[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = re.sub('-isysroot [^ \t]*', ' ', flags) + _CONFIG_VARS[key] = flags + else: + # Allow the user to override the architecture flags using + # an environment variable. + # NOTE: This name was introduced by Apple in OSX 10.5 and + # is used by several scripting languages distributed with + # that OS release. + if 'ARCHFLAGS' in os.environ: + arch = os.environ['ARCHFLAGS'] + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _CONFIG_VARS[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _CONFIG_VARS[key] = flags + + # If we're on OSX 10.5 or later and the user tries to + # compiles an extension using an SDK that is not present + # on the current machine it is better to not use an SDK + # than to fail. + # + # The major usecase for this is users using a Python.org + # binary installer on OSX 10.6: that installer uses + # the 10.4u SDK, but that SDK is not installed by default + # when you install Xcode. + # + CFLAGS = _CONFIG_VARS.get('CFLAGS', '') + m = re.search('-isysroot\s+(\S+)', CFLAGS) + if m is not None: + sdk = m.group(1) + if not os.path.exists(sdk): + for key in ('LDFLAGS', 'BASECFLAGS', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _CONFIG_VARS[key] + flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags) + _CONFIG_VARS[key] = flags + + if args: + vals = [] + for name in args: + vals.append(_CONFIG_VARS.get(name)) + return vals + else: + return _CONFIG_VARS + +def get_config_var(name): + """Return the value of a single variable using the dictionary returned by + 'get_config_vars()'. + + Equivalent to get_config_vars().get(name) + """ + return get_config_vars().get(name) + +def get_platform(): + """Return a string that identifies the current platform. + + This is used mainly to distinguish platform-specific build directories and + platform-specific built distributions. Typically includes the OS name + and version and the architecture (as supplied by 'os.uname()'), + although the exact information included depends on the OS; eg. for IRIX + the architecture isn't particularly important (IRIX only runs on SGI + hardware), but for Linux the kernel version isn't particularly + important. + + Examples of returned values: + linux-i586 + linux-alpha (?) + solaris-2.6-sun4u + irix-5.3 + irix64-6.2 + + Windows will return one of: + win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc) + win-ia64 (64bit Windows on Itanium) + win32 (all others - specifically, sys.platform is returned) + + For other non-POSIX platforms, currently just returns 'sys.platform'. + """ + import re + if os.name == 'nt': + # sniff sys.version for architecture. + prefix = " bit (" + i = sys.version.find(prefix) + if i == -1: + return sys.platform + j = sys.version.find(")", i) + look = sys.version[i+len(prefix):j].lower() + if look == 'amd64': + return 'win-amd64' + if look == 'itanium': + return 'win-ia64' + return sys.platform + + if os.name != "posix" or not hasattr(os, 'uname'): + # XXX what about the architecture? NT is Intel or Alpha, + # Mac OS is M68k or PPC, etc. + return sys.platform + + # Try to distinguish various flavours of Unix + osname, host, release, version, machine = os.uname() + + # Convert the OS name to lowercase, remove '/' characters + # (to accommodate BSD/OS), and translate spaces (for "Power Macintosh") + osname = osname.lower().replace('/', '') + machine = machine.replace(' ', '_') + machine = machine.replace('/', '-') + + if osname[:5] == "linux": + # At least on Linux/Intel, 'machine' is the processor -- + # i386, etc. + # XXX what about Alpha, SPARC, etc? + return "%s-%s" % (osname, machine) + elif osname[:5] == "sunos": + if release[0] >= "5": # SunOS 5 == Solaris 2 + osname = "solaris" + release = "%d.%s" % (int(release[0]) - 3, release[2:]) + # fall through to standard osname-release-machine representation + elif osname[:4] == "irix": # could be "irix64"! + return "%s-%s" % (osname, release) + elif osname[:3] == "aix": + return "%s-%s.%s" % (osname, version, release) + elif osname[:6] == "cygwin": + osname = "cygwin" + rel_re = re.compile (r'[\d.]+') + m = rel_re.match(release) + if m: + release = m.group() + elif osname[:6] == "darwin": + # + # For our purposes, we'll assume that the system version from + # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set + # to. This makes the compatibility story a bit more sane because the + # machine is going to compile and link as if it were + # MACOSX_DEPLOYMENT_TARGET. + # + cfgvars = get_config_vars() + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + + if 1: + # Always calculate the release of the running machine, + # needed to determine if we can build fat binaries or not. + + macrelease = macver + # Get the system version. Reading this plist is a documented + # way to get the system version (see the documentation for + # the Gestalt Manager) + try: + f = open('/System/Library/CoreServices/SystemVersion.plist') + except IOError: + # We're on a plain darwin box, fall back to the default + # behaviour. + pass + else: + try: + m = re.search( + r'ProductUserVisibleVersion\s*' + + r'(.*?)', f.read()) + if m is not None: + macrelease = '.'.join(m.group(1).split('.')[:2]) + # else: fall back to the default behaviour + finally: + f.close() + + if not macver: + macver = macrelease + + if macver: + release = macver + osname = "macosx" + + if (macrelease + '.') >= '10.4.' and \ + '-arch' in get_config_vars().get('CFLAGS', '').strip(): + # The universal build will build fat binaries, but not on + # systems before 10.4 + # + # Try to detect 4-way universal builds, those have machine-type + # 'universal' instead of 'fat'. + + machine = 'fat' + cflags = get_config_vars().get('CFLAGS') + + archs = re.findall('-arch\s+(\S+)', cflags) + archs = tuple(sorted(set(archs))) + + if len(archs) == 1: + machine = archs[0] + elif archs == ('i386', 'ppc'): + machine = 'fat' + elif archs == ('i386', 'x86_64'): + machine = 'intel' + elif archs == ('i386', 'ppc', 'x86_64'): + machine = 'fat3' + elif archs == ('ppc64', 'x86_64'): + machine = 'fat64' + elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'): + machine = 'universal' + else: + raise ValueError( + "Don't know machine value for archs=%r"%(archs,)) + + elif machine == 'i386': + # On OSX the machine type returned by uname is always the + # 32-bit variant, even if the executable architecture is + # the 64-bit variant + if sys.maxsize >= 2**32: + machine = 'x86_64' + + elif machine in ('PowerPC', 'Power_Macintosh'): + # Pick a sane name for the PPC architecture. + # See 'i386' case + if sys.maxsize >= 2**32: + machine = 'ppc64' + else: + machine = 'ppc' + + return "%s-%s-%s" % (osname, release, machine) + + +def get_python_version(): + return _PY_VERSION_SHORT + +def _print_dict(title, data): + for index, (key, value) in enumerate(sorted(data.items())): + if index == 0: + print('{0}: '.format(title)) + print('\t{0} = "{1}"'.format(key, value)) + +def _main(): + """Display all information sysconfig detains.""" + print('Platform: "{0}"'.format(get_platform())) + print('Python version: "{0}"'.format(get_python_version())) + print('Current installation scheme: "{0}"'.format(_get_default_scheme())) + print('') + _print_dict('Paths', get_paths()) + print('') + _print_dict('Variables', get_config_vars()) + +if __name__ == '__main__': + _main() diff --git a/lib-python/modified-3.2/test/test_sysconfig.py b/lib-python/modified-3.2/test/test_sysconfig.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-3.2/test/test_sysconfig.py @@ -0,0 +1,362 @@ +"""Tests for 'site'. + +Tests assume the initial paths in sys.path once the interpreter has begun +executing have not been removed. + +""" +import unittest +import sys +import os +import subprocess +import shutil +from copy import copy, deepcopy + +from test.support import (run_unittest, TESTFN, unlink, get_attribute, + captured_stdout, skip_unless_symlink) + +import sysconfig +from sysconfig import (get_paths, get_platform, get_config_vars, + get_path, get_path_names, _INSTALL_SCHEMES, + _get_default_scheme, _expand_vars, + get_scheme_names, get_config_var, _main) + +class TestSysConfig(unittest.TestCase): + + def setUp(self): + """Make a copy of sys.path""" + super(TestSysConfig, self).setUp() + self.sys_path = sys.path[:] + # patching os.uname + if hasattr(os, 'uname'): + self.uname = os.uname + self._uname = os.uname() + else: + self.uname = None + self._uname = None + os.uname = self._get_uname + # saving the environment + self.name = os.name + self.platform = sys.platform + self.version = sys.version + self.sep = os.sep + self.join = os.path.join + self.isabs = os.path.isabs + self.splitdrive = os.path.splitdrive + self._config_vars = copy(sysconfig._CONFIG_VARS) + self.old_environ = deepcopy(os.environ) + + def tearDown(self): + """Restore sys.path""" + sys.path[:] = self.sys_path + self._cleanup_testfn() + if self.uname is not None: + os.uname = self.uname + else: + del os.uname + os.name = self.name + sys.platform = self.platform + sys.version = self.version + os.sep = self.sep + os.path.join = self.join + os.path.isabs = self.isabs + os.path.splitdrive = self.splitdrive + sysconfig._CONFIG_VARS = copy(self._config_vars) + for key, value in self.old_environ.items(): + if os.environ.get(key) != value: + os.environ[key] = value + + for key in list(os.environ.keys()): + if key not in self.old_environ: + del os.environ[key] + + super(TestSysConfig, self).tearDown() + + def _set_uname(self, uname): + self._uname = uname + + def _get_uname(self): + return self._uname + + def _cleanup_testfn(self): + path = TESTFN + if os.path.isfile(path): + os.remove(path) + elif os.path.isdir(path): + shutil.rmtree(path) + + def test_get_path_names(self): + self.assertEqual(get_path_names(), sysconfig._SCHEME_KEYS) + + def test_get_paths(self): + scheme = get_paths() + default_scheme = _get_default_scheme() + wanted = _expand_vars(default_scheme, None) + wanted = list(wanted.items()) + wanted.sort() + scheme = list(scheme.items()) + scheme.sort() + self.assertEqual(scheme, wanted) + + def test_get_path(self): + # xxx make real tests here + for scheme in _INSTALL_SCHEMES: + for name in _INSTALL_SCHEMES[scheme]: + res = get_path(name, scheme) + + def test_get_config_vars(self): + cvars = get_config_vars() + self.assertTrue(isinstance(cvars, dict)) + self.assertTrue(cvars) + + def test_get_platform(self): + # windows XP, 32bits + os.name = 'nt' + sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) ' + '[MSC v.1310 32 bit (Intel)]') + sys.platform = 'win32' + self.assertEqual(get_platform(), 'win32') + + # windows XP, amd64 + os.name = 'nt' + sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) ' + '[MSC v.1310 32 bit (Amd64)]') + sys.platform = 'win32' + self.assertEqual(get_platform(), 'win-amd64') + + # windows XP, itanium + os.name = 'nt' + sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) ' + '[MSC v.1310 32 bit (Itanium)]') + sys.platform = 'win32' + self.assertEqual(get_platform(), 'win-ia64') + + # macbook + os.name = 'posix' + sys.version = ('2.5 (r25:51918, Sep 19 2006, 08:49:13) ' + '\n[GCC 4.0.1 (Apple Computer, Inc. build 5341)]') + sys.platform = 'darwin' + self._set_uname(('Darwin', 'macziade', '8.11.1', + ('Darwin Kernel Version 8.11.1: ' + 'Wed Oct 10 18:23:28 PDT 2007; ' + 'root:xnu-792.25.20~1/RELEASE_I386'), 'PowerPC')) + + + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3' + + get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g ' + '-fwrapv -O3 -Wall -Wstrict-prototypes') + + maxint = sys.maxsize + try: + sys.maxsize = 2147483647 + self.assertEqual(get_platform(), 'macosx-10.3-ppc') + sys.maxsize = 9223372036854775807 + self.assertEqual(get_platform(), 'macosx-10.3-ppc64') + finally: + sys.maxsize = maxint + + + self._set_uname(('Darwin', 'macziade', '8.11.1', + ('Darwin Kernel Version 8.11.1: ' + 'Wed Oct 10 18:23:28 PDT 2007; ' + 'root:xnu-792.25.20~1/RELEASE_I386'), 'i386')) + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3' + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3' + + get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g ' + '-fwrapv -O3 -Wall -Wstrict-prototypes') + maxint = sys.maxsize + try: + sys.maxsize = 2147483647 + self.assertEqual(get_platform(), 'macosx-10.3-i386') + sys.maxsize = 9223372036854775807 + self.assertEqual(get_platform(), 'macosx-10.3-x86_64') + finally: + sys.maxsize = maxint + + # macbook with fat binaries (fat, universal or fat64) + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.4' + get_config_vars()['CFLAGS'] = ('-arch ppc -arch i386 -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3') + + self.assertEqual(get_platform(), 'macosx-10.4-fat') + + get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch i386 -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3') + + self.assertEqual(get_platform(), 'macosx-10.4-intel') + + get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch ppc -arch i386 -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3') + self.assertEqual(get_platform(), 'macosx-10.4-fat3') + + get_config_vars()['CFLAGS'] = ('-arch ppc64 -arch x86_64 -arch ppc -arch i386 -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3') + self.assertEqual(get_platform(), 'macosx-10.4-universal') + + get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch ppc64 -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3') + + self.assertEqual(get_platform(), 'macosx-10.4-fat64') + + for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' + '/Developer/SDKs/MacOSX10.4u.sdk ' + '-fno-strict-aliasing -fno-common ' + '-dynamic -DNDEBUG -g -O3'%(arch,)) + + self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # linux debian sarge + os.name = 'posix' + sys.version = ('2.3.5 (#1, Jul 4 2007, 17:28:59) ' + '\n[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)]') + sys.platform = 'linux2' + self._set_uname(('Linux', 'aglae', '2.6.21.1dedibox-r7', + '#1 Mon Apr 30 17:25:38 CEST 2007', 'i686')) + + self.assertEqual(get_platform(), 'linux-i686') + + # XXX more platforms to tests here + + def test_get_config_h_filename(self): + config_h = sysconfig.get_config_h_filename() + self.assertTrue(os.path.isfile(config_h), config_h) + + def test_get_scheme_names(self): + wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') + self.assertEqual(get_scheme_names(), wanted) + + @skip_unless_symlink + def test_symlink(self): + # On Windows, the EXE needs to know where pythonXY.dll is at so we have + # to add the directory to the path. + if sys.platform == "win32": + os.environ["Path"] = "{};{}".format( + os.path.dirname(sys.executable), os.environ["Path"]) + + # Issue 7880 + def get(python): + cmd = [python, '-c', + 'import sysconfig; print(sysconfig.get_platform())'] + p = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=os.environ) + return p.communicate() + real = os.path.realpath(sys.executable) + link = os.path.abspath(TESTFN) + os.symlink(real, link) + try: + self.assertEqual(get(real), get(link)) + finally: + unlink(link) + + def test_user_similar(self): + # Issue 8759 : make sure the posix scheme for the users + # is similar to the global posix_prefix one + base = get_config_var('base') + user = get_config_var('userbase') + for name in ('stdlib', 'platstdlib', 'purelib', 'platlib'): + global_path = get_path(name, 'posix_prefix') + user_path = get_path(name, 'posix_user') + self.assertEqual(user_path, global_path.replace(base, user)) + + def test_main(self): + # just making sure _main() runs and returns things in the stdout + with captured_stdout() as output: + _main() + self.assertTrue(len(output.getvalue().split('\n')) > 0) + + @unittest.skipIf(sys.platform == "win32", "Does not apply to Windows") + def test_ldshared_value(self): + ldflags = sysconfig.get_config_var('LDFLAGS') + ldshared = sysconfig.get_config_var('LDSHARED') + + self.assertIn(ldflags, ldshared) + + + @unittest.skipUnless(sys.platform == "darwin", "test only relevant on MacOSX") + def test_platform_in_subprocess(self): + my_platform = sysconfig.get_platform() + + # Test without MACOSX_DEPLOYMENT_TARGET in the environment + + env = os.environ.copy() + if 'MACOSX_DEPLOYMENT_TARGET' in env: + del env['MACOSX_DEPLOYMENT_TARGET'] + + with open('/dev/null', 'w') as devnull_fp: + p = subprocess.Popen([ + sys.executable, '-c', + 'import sysconfig; print(sysconfig.get_platform())', + ], + stdout=subprocess.PIPE, + stderr=devnull_fp, + env=env) + test_platform = p.communicate()[0].strip() + test_platform = test_platform.decode('utf-8') + status = p.wait() + + self.assertEqual(status, 0) + self.assertEqual(my_platform, test_platform) + + + # Test with MACOSX_DEPLOYMENT_TARGET in the environment, and + # using a value that is unlikely to be the default one. + env = os.environ.copy() + env['MACOSX_DEPLOYMENT_TARGET'] = '10.1' + + p = subprocess.Popen([ + sys.executable, '-c', + 'import sysconfig; print(sysconfig.get_platform())', + ], + stdout=subprocess.PIPE, + stderr=open('/dev/null'), + env=env) + test_platform = p.communicate()[0].strip() + test_platform = test_platform.decode('utf-8') + status = p.wait() + + self.assertEqual(status, 0) + self.assertEqual(my_platform, test_platform) + + +class MakefileTests(unittest.TestCase): + @unittest.skipIf(sys.platform.startswith('win'), + 'Test is not Windows compatible') + def test_get_makefile_filename(self): + makefile = sysconfig.get_makefile_filename() + self.assertTrue(os.path.isfile(makefile), makefile) + + def test_parse_makefile(self): + self.addCleanup(unlink, TESTFN) + with open(TESTFN, "w") as makefile: + print("var1=a$(VAR2)", file=makefile) + print("VAR2=b$(var3)", file=makefile) + print("var3=42", file=makefile) + print("var4=$/invalid", file=makefile) + print("var5=dollar$$5", file=makefile) + vars = sysconfig._parse_makefile(TESTFN) + self.assertEqual(vars, { + 'var1': 'ab42', + 'VAR2': 'b42', + 'var3': 42, + 'var4': '$/invalid', + 'var5': 'dollar$5', + }) + + +def test_main(): + run_unittest(TestSysConfig, MakefileTests) + +if __name__ == "__main__": + test_main() From noreply at buildbot.pypy.org Thu Nov 10 04:45:00 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 04:45:00 +0100 (CET) Subject: [pypy-commit] pypy py3k: get ascii translating Message-ID: <20111110034500.AAA648292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49074:53974c65ef6d Date: 2011-11-09 19:36 -0800 http://bitbucket.org/pypy/pypy/changeset/53974c65ef6d/ Log: get ascii translating diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -5,7 +5,7 @@ from pypy.interpreter import gateway from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import unwrap_spec -from pypy.rlib.runicode import UNICHR +from pypy.rlib.runicode import UNICHR, str_decode_ascii, unicode_encode_ascii from pypy.rlib.rfloat import isnan, isinf, round_double from pypy.rlib import rfloat import __builtin__ @@ -22,9 +22,12 @@ object, but escape the non-ASCII characters in the string returned by repr() using \\x, \\u or \\U escapes. This generates a string similar to that returned by repr() in Python 2.""" + len_ = __builtin__.len # repr is guaranteed to be unicode - repr = space.unwrap(space.repr(w_obj)) - return space.wrap(repr.encode('ascii', 'backslashreplace').decode('ascii')) + repr = space.unicode_w(space.repr(w_obj)) + encoded = unicode_encode_ascii(repr, len_(repr), 'backslashreplace') + decoded = str_decode_ascii(encoded, len_(encoded), None, final=True)[0] + return space.wrap(decoded) @unwrap_spec(code=int) def chr(space, code): From noreply at buildbot.pypy.org Thu Nov 10 05:51:09 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 05:51:09 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix translation Message-ID: <20111110045109.CFC868292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49075:670a25758031 Date: 2011-11-09 20:21 -0800 http://bitbucket.org/pypy/pypy/changeset/670a25758031/ Log: fix translation diff --git a/pypy/objspace/std/unicodetype.py b/pypy/objspace/std/unicodetype.py --- a/pypy/objspace/std/unicodetype.py +++ b/pypy/objspace/std/unicodetype.py @@ -288,7 +288,7 @@ w_res = space.get_and_call_function(w_unicode_method, w_obj) if not space.isinstance_w(w_res, space.w_unicode): typename = space.type(w_res).getname(space) - msg = "__str__ returned non-string (type %.200s)" % typename + msg = "__str__ returned non-string (type %s)" % typename raise OperationError(space.w_TypeError, space.wrap(msg)) return w_res From noreply at buildbot.pypy.org Thu Nov 10 10:47:18 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:18 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: (mwp antocuni) make platform work on PowerPC Message-ID: <20111110094718.A2DB58292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: numpy-multidim Changeset: r49076:0a57ce084165 Date: 2011-11-04 13:40 +0100 http://bitbucket.org/pypy/pypy/changeset/0a57ce084165/ Log: (mwp antocuni) make platform work on PowerPC diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -238,10 +238,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() From noreply at buildbot.pypy.org Thu Nov 10 10:47:19 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:19 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp, antocuni) Create a branch with tuples specialised by type Message-ID: <20111110094719.D5E508292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49077:a467cb7c4dd5 Date: 2011-11-04 13:49 +0100 http://bitbucket.org/pypy/pypy/changeset/a467cb7c4dd5/ Log: (mwp, antocuni) Create a branch with tuples specialised by type diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -252,6 +252,10 @@ "use small tuples", default=False), + BoolOption("withspecialisedtuple", + "use specialised tuples", + default=False), + BoolOption("withrope", "use ropes as the string implementation", default=False, requires=[("objspace.std.withstrslice", False), diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -15,6 +15,7 @@ _registered_implementations.add(implcls) option_to_typename = { + "withspecialisedtuple" : ["specialisedtupleobject.W_SpecialisedTupleObject"], "withsmalltuple" : ["smalltupleobject.W_SmallTupleObject"], "withsmallint" : ["smallintobject.W_SmallIntObject"], "withsmalllong" : ["smalllongobject.W_SmallLongObject"], @@ -73,6 +74,7 @@ from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject from pypy.objspace.std import smalltupleobject + from pypy.objspace.std import specialisedtupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject @@ -259,6 +261,10 @@ self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] + if config.objspace.std.withspecialisedtuple: + self.typeorder[specialisedtupleobject.W_SpecialisedTupleObject] += [ + (tupleobject.W_TupleObject, specialisedtupleobject.delegate_SpecialisedTuple2Tuple)] + # put W_Root everywhere self.typeorder[W_Root] = [] for type in self.typeorder: diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py new file mode 100644 --- /dev/null +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -0,0 +1,200 @@ +from pypy.interpreter.error import OperationError +from pypy.objspace.std.model import registerimplementation, W_Object +from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.inttype import wrapint +from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.floatobject import W_FloatObject +from pypy.objspace.std.stringobject import W_StringObject +from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice +from pypy.objspace.std import slicetype +from pypy.rlib.rarithmetic import intmask +from pypy.objspace.std.tupleobject import W_TupleObject + +from types import IntType, FloatType, StringType + +class W_SpecialisedTupleObject(W_Object): + from pypy.objspace.std.tupletype import tuple_typedef as typedef + + def tolist(self): + raise NotImplementedError + + def _tolistunwrapped(self): + raise NotImplementedError + + def length(self): + raise NotImplementedError + + def getitem(self, index): + raise NotImplementedError + + def hash(self, space): + raise NotImplementedError + + def eq(self, space, w_other): + raise NotImplementedError + + def setitem(self, index, w_item): + raise NotImplementedError + + def unwrap(w_tuple, space): + return tuple(self.tolist) + +class W_SpecialisedTupleObject1(W_SpecialisedTupleObject): #one element tuples + def __init__(self, value0): + raise NotImplementedError + + def length(self): + return 1 + + def eq(self, space, w_other): + if w_other.length() != 1: + return space.w_False + if self.value0 == w_other.value0: #is it safe to assume all 1-tuples are specialised ? + return space.w_True + else: + return space.w_False + + def hash(self, space): + mult = 1000003 + x = 0x345678 + z = 1 + w_item = self.getitem(0) + y = space.int_w(space.hash(w_item)) + x = (x ^ y) * mult + mult += 82520 + z + z + x += 97531 + return space.wrap(intmask(x)) + +class W_SpecialisedTupleObjectInt(W_SpecialisedTupleObject1): #one integer element + def __init__(self, intval): + assert type(intval) == IntType#isinstance + self.intval = intval#intval + + def tolist(self): + return [W_IntObject(self.intval)] + + def getitem(self, index): + if index == 0: + return W_IntObject(self.intval) + raise IndexError + + def setitem(self, index, w_item): + assert isinstance(w_item, W_IntObject) + if index == 0: + self.intval = w_item.intval + return + raise IndexError + +class W_SpecialisedTupleObjectFloat(W_SpecialisedTupleObject1): #one integer element + def __init__(self, floatval): + assert type(floatval) == FloatType + self.floatval = floatval + + def tolist(self): + return [W_FloatObject(self.floatval)] + + def getitem(self, index): + if index == 0: + return W_FloatObject(self.floatval) + raise IndexError + + def setitem(self, index, w_item): + assert isinstance(w_item, W_FloatObject) + if index == 0: + self.floatval = w_item.floatval + return + raise IndexError + +class W_SpecialisedTupleObjectString(W_SpecialisedTupleObject1): #one integer element + def __init__(self, stringval): + assert type(stringval) == StringType + self.stringval = stringval + + def tolist(self): + return [W_StringObject(self.stringval)] + + def getitem(self, index): + if index == 0: + return W_StringObject(self.stringval) + raise IndexError + + def setitem(self, index, w_item): + assert isinstance(w_item, W_StringObject) + if index == 0: + self.stringval = w_item._value # does _value need to be private + return + raise IndexError + +''' + W_SpecialisedTupleObjectIntInt, #two element tupes of int, float or string + W_SpecialisedTupleObjectIntFloat, + W_SpecialisedTupleObjectIntString, + W_SpecialisedTupleObjectFloatInt, + W_SpecialisedTupleObjectFloatFloat, + W_SpecialisedTupleObjectFloatString, + W_SpecialisedTupleObjectStringInt, + W_SpecialisedTupleObjectStringFloat, + W_SpecialisedTupleObjectStringString + +''' +registerimplementation(W_SpecialisedTupleObject) + +#--------- +def delegate_SpecialisedTuple2Tuple(space, w_specialised): + return W_TupleObject(w_specialised.tolist()) + +def len__SpecialisedTuple(space, w_tuple): + return space.wrap(w_tuple.length()) + +def getitem__SpecialisedTuple_ANY(space, w_tuple, w_index): + index = space.getindex_w(w_index, space.w_IndexError, "tuple index") + if index < 0: + index += w_tuple.length() + try: + return w_tuple.getitem(index) + except IndexError: + raise OperationError(space.w_IndexError, + space.wrap("tuple index out of range")) + +# getitem__SpecialisedTuple_Slice removed +# mul_specialisedtuple_times removed +def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): + length = w_tuple.length() + start, stop, step, slicelength = w_slice.indices4(space, length) + assert slicelength >= 0 + subitems = [None] * slicelength + for i in range(slicelength): + subitems[i] = w_tuple.getitem(start) + start += step + return space.newtuple(subitems) + +def mul_specialisedtuple_times(space, w_tuple, w_times): + try: + times = space.getindex_w(w_times, space.w_OverflowError) + except OperationError, e: + if e.match(space, space.w_TypeError): + raise FailedToImplement + raise + if times == 1 and space.type(w_tuple) == space.w_tuple: + return w_tuple + items = w_tuple.tolist() + return space.newtuple(items * times) + +def mul__SpecialisedTuple_ANY(space, w_tuple, w_times): + return mul_specialisedtuple_times(space, w_tuple, w_times) + +def mul__ANY_SpecialisedTuple(space, w_times, w_tuple): + return mul_specialisedtuple_times(space, w_tuple, w_times) + + +# mul__SpecialisedTuple_ANY removed +# mul__ANY_SpecialisedTuple removed + +def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.eq(space, w_tuple2) + +def hash__SpecialisedTuple(space, w_tuple): + return w_tuple.hash(space) + +from pypy.objspace.std import tupletype +register_all(vars(), tupletype) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py new file mode 100644 --- /dev/null +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -0,0 +1,118 @@ +from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObject +from pypy.interpreter.error import OperationError +from pypy.conftest import gettestobjspace +from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject + + +class TestW_SpecialisedTupleObject(): + + def setup_class(cls): + cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) + + def test_isspecialisedtupleobject(self): + w_tuple = self.space.newtuple([self.space.wrap(1)]) + assert isinstance(w_tuple, W_SpecialisedTupleObject) + + def test_isnotspecialisedtupleobject(self): + w_tuple = self.space.newtuple([self.space.wrap({})]) + assert not isinstance(w_tuple, W_SpecialisedTupleObject) + + def test_isnotspecialised2tupleobject(self): + w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) + assert not isinstance(w_tuple, W_SpecialisedTupleObject) + + def test_hash_against_normal_tuple(self): + normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) + w_tuple = normalspace.newtuple([self.space.wrap(1)]) + + specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) + w_specialisedtuple = specialisedspace.newtuple([self.space.wrap(1)]) + + assert isinstance(w_specialisedtuple, W_SpecialisedTupleObject) + assert isinstance(w_tuple, W_TupleObject) + assert not normalspace.is_true(normalspace.eq(w_tuple, w_specialisedtuple)) + assert specialisedspace.is_true(specialisedspace.eq(w_tuple, w_specialisedtuple)) + assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) + + def test_setitem(self): + w_specialisedtuple = self.space.newtuple([self.space.wrap(1)]) + w_specialisedtuple.setitem(0, self.space.wrap(5)) + list_w = w_specialisedtuple.tolist() + assert len(list_w) == 1 + assert self.space.eq_w(list_w[0], self.space.wrap(5)) + +class AppTestW_SpecialisedTupleObject(AppTestW_TupleObject): + + def setup_class(cls): + cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) + cls.w_isspecialised = cls.space.appexec([], """(): + import __pypy__ + def isspecialised(obj): + return "SpecialisedTuple" in __pypy__.internal_repr(obj) + return isspecialised + """) + + def test_specialisedtuple(self): + assert self.isspecialised((42,)) + assert self.isspecialised(('42',)) + assert self.isspecialised((42.5,)) + + def test_notspecialisedtuple(self): + assert not self.isspecialised((42,43)) + + def test_slicing_to_specialised(self): + assert self.isspecialised((1, 2, 3)[0:1]) + assert self.isspecialised((1, '2', 1.3)[0:5:5]) + assert self.isspecialised((1, '2', 1.3)[1:5:5]) + assert self.isspecialised((1, '2', 1.3)[2:5:5]) + + def test_adding_to_specialised(self): + assert self.isspecialised(()+(2,)) + + def test_multiply_to_specialised(self): + assert self.isspecialised((1,)*1) + + def test_slicing_from_specialised(self): + assert (1,)[0:1:1] == (1,) + + def test_eq(self): + a = (1,) + b = (1,) + assert a == b + + a = ('1',) + b = ('1',) + assert a == b + + a = (1.1,) + b = (1.1,) + assert a == b + + c = (1,3,2) + assert a != c + + d = (2) + assert a != d + + def test_hash(self): + a = (1,) + b = (1,) + assert hash(a) == hash(b) + + a = ('1',) + b = ('1',) + assert hash(a) == hash(b) + + a = (1.1,) + b = (1.1,) + assert hash(a) == hash(b) + + c = (2,) + assert hash(a) != hash(c) + + + + + + diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -2,6 +2,8 @@ from pypy.interpreter import gateway from pypy.objspace.std.register_all import register_all from pypy.objspace.std.stdtypedef import StdTypeDef, SMM +from types import IntType, FloatType, StringType + def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject @@ -12,6 +14,23 @@ from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 + + from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectInt #one element tuples + from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectFloat + from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectString + from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.floatobject import W_FloatObject + from pypy.objspace.std.stringobject import W_StringObject + + if space.config.objspace.std.withspecialisedtuple: + if len(list_w) == 1: + if isinstance(list_w[0], W_IntObject): + return W_SpecialisedTupleObjectInt(list_w[0].intval) + if isinstance(list_w[0], W_FloatObject): + return W_SpecialisedTupleObjectFloat(list_w[0].floatval) + if isinstance(list_w[0], W_StringObject): + return W_SpecialisedTupleObjectString(list_w[0]._value) + if space.config.objspace.std.withsmalltuple: if len(list_w) == 2: return W_SmallTupleObject2(list_w) From noreply at buildbot.pypy.org Thu Nov 10 10:47:21 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:21 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) not interested in 1-tuples really, kill the code Message-ID: <20111110094721.102868292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49078:584b7dda8f49 Date: 2011-11-04 16:59 +0100 http://bitbucket.org/pypy/pypy/changeset/584b7dda8f49/ Log: (antocuni, mwp) not interested in 1-tuples really, kill the code diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -39,42 +39,23 @@ def unwrap(w_tuple, space): return tuple(self.tolist) -class W_SpecialisedTupleObject1(W_SpecialisedTupleObject): #one element tuples - def __init__(self, value0): - raise NotImplementedError + +class W_SpecialisedTupleObjectIntInt(W_SpecialisedTupleObject): + def __init__(self, intval0, intval1): + assert isinstance(intval0, int) + assert isinstance(intval1, int) + self.intval0 = intval0 + self.intval1 = intval1 def length(self): - return 1 - - def eq(self, space, w_other): - if w_other.length() != 1: - return space.w_False - if self.value0 == w_other.value0: #is it safe to assume all 1-tuples are specialised ? - return space.w_True - else: - return space.w_False - - def hash(self, space): - mult = 1000003 - x = 0x345678 - z = 1 - w_item = self.getitem(0) - y = space.int_w(space.hash(w_item)) - x = (x ^ y) * mult - mult += 82520 + z + z - x += 97531 - return space.wrap(intmask(x)) - -class W_SpecialisedTupleObjectInt(W_SpecialisedTupleObject1): #one integer element - def __init__(self, intval): - assert type(intval) == IntType#isinstance - self.intval = intval#intval - + return 2 +''' def tolist(self): return [W_IntObject(self.intval)] def getitem(self, index): if index == 0: + self.wrap(self.intval) return W_IntObject(self.intval) raise IndexError @@ -85,61 +66,17 @@ return raise IndexError -class W_SpecialisedTupleObjectFloat(W_SpecialisedTupleObject1): #one integer element - def __init__(self, floatval): - assert type(floatval) == FloatType - self.floatval = floatval + def eq(self, space, w_other): + if w_other.length() != 1: + return space.w_False + if self.intval == w_other.intval: #is it safe to assume all 1-tuples are specialised ? + return space.w_True + else: + return space.w_False +''' - def tolist(self): - return [W_FloatObject(self.floatval)] - - def getitem(self, index): - if index == 0: - return W_FloatObject(self.floatval) - raise IndexError - - def setitem(self, index, w_item): - assert isinstance(w_item, W_FloatObject) - if index == 0: - self.floatval = w_item.floatval - return - raise IndexError - -class W_SpecialisedTupleObjectString(W_SpecialisedTupleObject1): #one integer element - def __init__(self, stringval): - assert type(stringval) == StringType - self.stringval = stringval - - def tolist(self): - return [W_StringObject(self.stringval)] - - def getitem(self, index): - if index == 0: - return W_StringObject(self.stringval) - raise IndexError - - def setitem(self, index, w_item): - assert isinstance(w_item, W_StringObject) - if index == 0: - self.stringval = w_item._value # does _value need to be private - return - raise IndexError - -''' - W_SpecialisedTupleObjectIntInt, #two element tupes of int, float or string - W_SpecialisedTupleObjectIntFloat, - W_SpecialisedTupleObjectIntString, - W_SpecialisedTupleObjectFloatInt, - W_SpecialisedTupleObjectFloatFloat, - W_SpecialisedTupleObjectFloatString, - W_SpecialisedTupleObjectStringInt, - W_SpecialisedTupleObjectStringFloat, - W_SpecialisedTupleObjectStringString - -''' registerimplementation(W_SpecialisedTupleObject) -#--------- def delegate_SpecialisedTuple2Tuple(space, w_specialised): return W_TupleObject(w_specialised.tolist()) @@ -156,8 +93,6 @@ raise OperationError(space.w_IndexError, space.wrap("tuple index out of range")) -# getitem__SpecialisedTuple_Slice removed -# mul_specialisedtuple_times removed def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): length = w_tuple.length() start, stop, step, slicelength = w_slice.indices4(space, length) @@ -186,10 +121,6 @@ def mul__ANY_SpecialisedTuple(space, w_times, w_tuple): return mul_specialisedtuple_times(space, w_tuple, w_times) - -# mul__SpecialisedTuple_ANY removed -# mul__ANY_SpecialisedTuple removed - def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -1,28 +1,28 @@ +import py from pypy.objspace.std.tupleobject import W_TupleObject -from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObject +from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObject,W_SpecialisedTupleObjectIntInt from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject + class TestW_SpecialisedTupleObject(): def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - def test_isspecialisedtupleobject(self): - w_tuple = self.space.newtuple([self.space.wrap(1)]) - assert isinstance(w_tuple, W_SpecialisedTupleObject) - + def test_isspecialisedtupleobjectintint(self): + py.test.skip('in progress') + w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) + assert isinstance(w_tuple, W_SpecialisedTupleObjectIntInt) + def test_isnotspecialisedtupleobject(self): w_tuple = self.space.newtuple([self.space.wrap({})]) assert not isinstance(w_tuple, W_SpecialisedTupleObject) - - def test_isnotspecialised2tupleobject(self): - w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) - assert not isinstance(w_tuple, W_SpecialisedTupleObject) def test_hash_against_normal_tuple(self): + py.test.skip('in progress') normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) w_tuple = normalspace.newtuple([self.space.wrap(1)]) @@ -36,6 +36,7 @@ assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) def test_setitem(self): + py.test.skip('in progress') w_specialisedtuple = self.space.newtuple([self.space.wrap(1)]) w_specialisedtuple.setitem(0, self.space.wrap(5)) list_w = w_specialisedtuple.tolist() @@ -54,48 +55,43 @@ """) def test_specialisedtuple(self): - assert self.isspecialised((42,)) - assert self.isspecialised(('42',)) - assert self.isspecialised((42.5,)) - + skip('in progress') + assert self.isspecialised((42,43)) + def test_notspecialisedtuple(self): - assert not self.isspecialised((42,43)) + skip('in progress') + assert not self.isspecialised((42,43,44)) def test_slicing_to_specialised(self): + skip('in progress') assert self.isspecialised((1, 2, 3)[0:1]) assert self.isspecialised((1, '2', 1.3)[0:5:5]) assert self.isspecialised((1, '2', 1.3)[1:5:5]) assert self.isspecialised((1, '2', 1.3)[2:5:5]) def test_adding_to_specialised(self): - assert self.isspecialised(()+(2,)) + skip('in progress') + assert self.isspecialised((1,)+(2,)) def test_multiply_to_specialised(self): - assert self.isspecialised((1,)*1) + skip('in progress') + assert self.isspecialised((1,)*2) def test_slicing_from_specialised(self): - assert (1,)[0:1:1] == (1,) + skip('in progress') + assert (1,2,3)[0:2:1] == (1,) def test_eq(self): - a = (1,) - b = (1,) - assert a == b - - a = ('1',) - b = ('1',) - assert a == b - - a = (1.1,) - b = (1.1,) + skip('in progress') + a = (1,2) + b = (1,2) assert a == b c = (1,3,2) assert a != c - - d = (2) - assert a != d def test_hash(self): + skip('in progress') a = (1,) b = (1,) assert hash(a) == hash(b) diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -15,22 +15,12 @@ from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 - from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectInt #one element tuples - from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectFloat - from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectString from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.floatobject import W_FloatObject from pypy.objspace.std.stringobject import W_StringObject if space.config.objspace.std.withspecialisedtuple: - if len(list_w) == 1: - if isinstance(list_w[0], W_IntObject): - return W_SpecialisedTupleObjectInt(list_w[0].intval) - if isinstance(list_w[0], W_FloatObject): - return W_SpecialisedTupleObjectFloat(list_w[0].floatval) - if isinstance(list_w[0], W_StringObject): - return W_SpecialisedTupleObjectString(list_w[0]._value) - + pass if space.config.objspace.std.withsmalltuple: if len(list_w) == 2: return W_SmallTupleObject2(list_w) From noreply at buildbot.pypy.org Thu Nov 10 10:47:22 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:22 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) starting to implement TupleIntInt Message-ID: <20111110094722.3E0698292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49079:8b3f48703d65 Date: 2011-11-04 17:43 +0100 http://bitbucket.org/pypy/pypy/changeset/8b3f48703d65/ Log: (antocuni, mwp) starting to implement TupleIntInt diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -41,18 +41,31 @@ class W_SpecialisedTupleObjectIntInt(W_SpecialisedTupleObject): - def __init__(self, intval0, intval1): + def __init__(self, space, intval0, intval1): assert isinstance(intval0, int) assert isinstance(intval1, int) + self.space = space self.intval0 = intval0 self.intval1 = intval1 def length(self): return 2 + + def tolist(self): + return [self.space.wrap(self.intval0), self.space.wrap(self.intval1)] + + def hash(self, space): + return space.wrap(0) + + def eq(self, space, w_other): + if w_other.length() != 2: + return space.w_False + if self.intval0 == w_other.intval0 and self.intval1 == w_other.intval1: #xxx + return space.w_True + else: + return space.w_False + ''' - def tolist(self): - return [W_IntObject(self.intval)] - def getitem(self, index): if index == 0: self.wrap(self.intval) @@ -65,21 +78,13 @@ self.intval = w_item.intval return raise IndexError - - def eq(self, space, w_other): - if w_other.length() != 1: - return space.w_False - if self.intval == w_other.intval: #is it safe to assume all 1-tuples are specialised ? - return space.w_True - else: - return space.w_False -''' +''' registerimplementation(W_SpecialisedTupleObject) def delegate_SpecialisedTuple2Tuple(space, w_specialised): return W_TupleObject(w_specialised.tolist()) - +''' def len__SpecialisedTuple(space, w_tuple): return space.wrap(w_tuple.length()) @@ -123,7 +128,7 @@ def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) - +''' def hash__SpecialisedTuple(space, w_tuple): return w_tuple.hash(space) diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -14,13 +14,23 @@ from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 + + + from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectIntInt from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.floatobject import W_FloatObject from pypy.objspace.std.stringobject import W_StringObject if space.config.objspace.std.withspecialisedtuple: - pass + if len(list_w) == 2: + w_item0 = list_w[0] + w_item1 = list_w[1] + if space.type(w_item0) == space.w_int and space.type(w_item1) == space.w_int: + val0 = space.int_w(w_item0) + val1 = space.int_w(w_item1) + return W_SpecialisedTupleObjectIntInt(space, val0, val1) + if space.config.objspace.std.withsmalltuple: if len(list_w) == 2: return W_SmallTupleObject2(list_w) From noreply at buildbot.pypy.org Thu Nov 10 10:47:23 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:23 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) isspecialisedtupleobjectintint passes Message-ID: <20111110094723.6B6A38292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49080:b582ecf6f507 Date: 2011-11-04 17:47 +0100 http://bitbucket.org/pypy/pypy/changeset/b582ecf6f507/ Log: (antocuni, mwp) isspecialisedtupleobjectintint passes diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -13,7 +13,6 @@ cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) def test_isspecialisedtupleobjectintint(self): - py.test.skip('in progress') w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) assert isinstance(w_tuple, W_SpecialisedTupleObjectIntInt) From noreply at buildbot.pypy.org Thu Nov 10 10:47:24 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:24 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) test_hash_against_normal_tuple passes Message-ID: <20111110094724.986348292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49081:79188abd9668 Date: 2011-11-04 18:02 +0100 http://bitbucket.org/pypy/pypy/changeset/79188abd9668/ Log: (antocuni, mwp) test_hash_against_normal_tuple passes diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -55,7 +55,18 @@ return [self.space.wrap(self.intval0), self.space.wrap(self.intval1)] def hash(self, space): - return space.wrap(0) + mult = 1000003 + x = 0x345678 + z = 2 + for intval in [self.intval0, self.intval1]: + # we assume that hash value of an intger is the integer itself + # look at intobject.py hash__Int to check this! + y = intval + x = (x ^ y) * mult + z -= 1 + mult += 82520 + z + z + x += 97531 + return space.wrap(intmask(x)) def eq(self, space, w_other): if w_other.length() != 2: diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -21,12 +21,11 @@ assert not isinstance(w_tuple, W_SpecialisedTupleObject) def test_hash_against_normal_tuple(self): - py.test.skip('in progress') normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) - w_tuple = normalspace.newtuple([self.space.wrap(1)]) + w_tuple = normalspace.newtuple([self.space.wrap(1), self.space.wrap(2)]) specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - w_specialisedtuple = specialisedspace.newtuple([self.space.wrap(1)]) + w_specialisedtuple = specialisedspace.newtuple([self.space.wrap(1), self.space.wrap(2)]) assert isinstance(w_specialisedtuple, W_SpecialisedTupleObject) assert isinstance(w_tuple, W_TupleObject) From noreply at buildbot.pypy.org Thu Nov 10 10:47:25 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:25 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) app-level [not]test_specialisedtuple pass Message-ID: <20111110094725.C6CE38292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49082:e61eba85f7de Date: 2011-11-04 18:08 +0100 http://bitbucket.org/pypy/pypy/changeset/e61eba85f7de/ Log: (antocuni, mwp) app-level [not]test_specialisedtuple pass diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -34,7 +34,7 @@ assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) def test_setitem(self): - py.test.skip('in progress') + py.test.skip('skip for now, only needed for cpyext') w_specialisedtuple = self.space.newtuple([self.space.wrap(1)]) w_specialisedtuple.setitem(0, self.space.wrap(5)) list_w = w_specialisedtuple.tolist() @@ -53,11 +53,9 @@ """) def test_specialisedtuple(self): - skip('in progress') assert self.isspecialised((42,43)) def test_notspecialisedtuple(self): - skip('in progress') assert not self.isspecialised((42,43,44)) def test_slicing_to_specialised(self): From noreply at buildbot.pypy.org Thu Nov 10 10:47:27 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:27 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) app-level slice tests pass Message-ID: <20111110094727.011218292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49083:9aaabdcece5c Date: 2011-11-04 18:21 +0100 http://bitbucket.org/pypy/pypy/changeset/9aaabdcece5c/ Log: (antocuni, mwp) app-level slice tests pass diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -59,23 +59,17 @@ assert not self.isspecialised((42,43,44)) def test_slicing_to_specialised(self): - skip('in progress') - assert self.isspecialised((1, 2, 3)[0:1]) - assert self.isspecialised((1, '2', 1.3)[0:5:5]) - assert self.isspecialised((1, '2', 1.3)[1:5:5]) - assert self.isspecialised((1, '2', 1.3)[2:5:5]) + assert self.isspecialised((1, 2, 3)[0:2]) + assert self.isspecialised((1, '2', 3)[0:5:2]) def test_adding_to_specialised(self): - skip('in progress') assert self.isspecialised((1,)+(2,)) def test_multiply_to_specialised(self): - skip('in progress') assert self.isspecialised((1,)*2) def test_slicing_from_specialised(self): - skip('in progress') - assert (1,2,3)[0:2:1] == (1,) + assert (1,2,3)[0:2:1] == (1,2) def test_eq(self): skip('in progress') From noreply at buildbot.pypy.org Thu Nov 10 10:47:28 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:28 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) app-level eq and hash test pass Message-ID: <20111110094728.2F22C8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49084:8e818988cae7 Date: 2011-11-04 18:25 +0100 http://bitbucket.org/pypy/pypy/changeset/8e818988cae7/ Log: (antocuni, mwp) app-level eq and hash test pass diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -72,7 +72,6 @@ assert (1,2,3)[0:2:1] == (1,2) def test_eq(self): - skip('in progress') a = (1,2) b = (1,2) assert a == b @@ -81,20 +80,11 @@ assert a != c def test_hash(self): - skip('in progress') - a = (1,) - b = (1,) + a = (1,2) + b = (1,2) assert hash(a) == hash(b) - a = ('1',) - b = ('1',) - assert hash(a) == hash(b) - - a = (1.1,) - b = (1.1,) - assert hash(a) == hash(b) - - c = (2,) + c = (2,4) assert hash(a) != hash(c) From noreply at buildbot.pypy.org Thu Nov 10 10:47:29 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:29 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) move instantiaton code into specialisedtupleobject.py Message-ID: <20111110094729.5E47D8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49085:896207170dd2 Date: 2011-11-05 15:38 +0100 http://bitbucket.org/pypy/pypy/changeset/896207170dd2/ Log: (antocuni, mwp) move instantiaton code into specialisedtupleobject.py diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -10,7 +10,18 @@ from pypy.rlib.rarithmetic import intmask from pypy.objspace.std.tupleobject import W_TupleObject -from types import IntType, FloatType, StringType +class NotSpecialised(Exception): + pass + +def makespecilisedtuple(space, list_w): + if len(list_w) == 2: + w_item0 = list_w[0] + w_item1 = list_w[1] + if space.type(w_item0) == space.w_int and space.type(w_item1) == space.w_int: + val0 = space.int_w(w_item0) + val1 = space.int_w(w_item1) + return W_SpecialisedTupleObjectIntInt(space, val0, val1) + raise NotSpecialised class W_SpecialisedTupleObject(W_Object): from pypy.objspace.std.tupletype import tuple_typedef as typedef @@ -59,7 +70,7 @@ x = 0x345678 z = 2 for intval in [self.intval0, self.intval1]: - # we assume that hash value of an intger is the integer itself + # we assume that hash value of an integer is the integer itself # look at intobject.py hash__Int to check this! y = intval x = (x ^ y) * mult @@ -76,13 +87,13 @@ else: return space.w_False -''' def getitem(self, index): if index == 0: - self.wrap(self.intval) - return W_IntObject(self.intval) + return self.space.wrap(self.intval0) + if index == 1: + return self.space.wrap(self.intval1) raise IndexError - +''' def setitem(self, index, w_item): assert isinstance(w_item, W_IntObject) if index == 0: @@ -95,7 +106,7 @@ def delegate_SpecialisedTuple2Tuple(space, w_specialised): return W_TupleObject(w_specialised.tolist()) -''' + def len__SpecialisedTuple(space, w_tuple): return space.wrap(w_tuple.length()) @@ -108,6 +119,7 @@ except IndexError: raise OperationError(space.w_IndexError, space.wrap("tuple index out of range")) +''' def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): length = w_tuple.length() diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -14,23 +14,14 @@ from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 + + if space.config.objspace.std.withspecialisedtuple: + from specialisedtupleobject import makespecilisedtuple, NotSpecialised + try: + return makespecilisedtuple(space, list_w) + except NotSpecialised: + pass - - from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObjectIntInt - - from pypy.objspace.std.intobject import W_IntObject - from pypy.objspace.std.floatobject import W_FloatObject - from pypy.objspace.std.stringobject import W_StringObject - - if space.config.objspace.std.withspecialisedtuple: - if len(list_w) == 2: - w_item0 = list_w[0] - w_item1 = list_w[1] - if space.type(w_item0) == space.w_int and space.type(w_item1) == space.w_int: - val0 = space.int_w(w_item0) - val1 = space.int_w(w_item1) - return W_SpecialisedTupleObjectIntInt(space, val0, val1) - if space.config.objspace.std.withsmalltuple: if len(list_w) == 2: return W_SmallTupleObject2(list_w) From noreply at buildbot.pypy.org Thu Nov 10 10:47:30 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:30 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) test length of specialised tuples Message-ID: <20111110094730.8B08D8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49086:48af40650402 Date: 2011-11-05 16:09 +0100 http://bitbucket.org/pypy/pypy/changeset/48af40650402/ Log: (antocuni, mwp) test length of specialised tuples diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -55,6 +55,9 @@ def test_specialisedtuple(self): assert self.isspecialised((42,43)) + def test_len(self): + assert len((42,43)) == 2 + def test_notspecialisedtuple(self): assert not self.isspecialised((42,43,44)) From noreply at buildbot.pypy.org Thu Nov 10 10:47:31 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:31 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) use new magic for defining helper method Message-ID: <20111110094731.B829E8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49087:85196b813c5d Date: 2011-11-05 16:22 +0100 http://bitbucket.org/pypy/pypy/changeset/85196b813c5d/ Log: (antocuni, mwp) use new magic for defining helper method diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -45,16 +45,15 @@ def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - cls.w_isspecialised = cls.space.appexec([], """(): - import __pypy__ - def isspecialised(obj): - return "SpecialisedTuple" in __pypy__.internal_repr(obj) - return isspecialised - """) + + def w_isspecialised(self, obj): + import __pypy__ + return "SpecialisedTuple" in __pypy__.internal_repr(obj) + def test_specialisedtuple(self): assert self.isspecialised((42,43)) - + def test_len(self): assert len((42,43)) == 2 From noreply at buildbot.pypy.org Thu Nov 10 10:47:32 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:32 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) make sure that tuple in test_len does not delegate Message-ID: <20111110094732.E71E88292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49088:ca02c6a45190 Date: 2011-11-05 16:44 +0100 http://bitbucket.org/pypy/pypy/changeset/ca02c6a45190/ Log: (antocuni, mwp) make sure that tuple in test_len does not delegate diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -93,14 +93,6 @@ if index == 1: return self.space.wrap(self.intval1) raise IndexError -''' - def setitem(self, index, w_item): - assert isinstance(w_item, W_IntObject) - if index == 0: - self.intval = w_item.intval - return - raise IndexError -''' registerimplementation(W_SpecialisedTupleObject) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject - +from pypy.interpreter import gateway class TestW_SpecialisedTupleObject(): @@ -45,17 +45,27 @@ def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) + def forbid_delegation(space, w_tuple): + def delegation_forbidden(): + raise NotImplementedError + w_tuple.tolist = delegation_forbidden + return w_tuple + cls.w_forbid_delegation = cls.space.wrap(gateway.interp2app(forbid_delegation)) + + def w_isspecialised(self, obj): import __pypy__ return "SpecialisedTuple" in __pypy__.internal_repr(obj) + def test_specialisedtuple(self): assert self.isspecialised((42,43)) def test_len(self): - assert len((42,43)) == 2 + t = self.forbid_delegation((42,43)) + assert len(t) == 2 def test_notspecialisedtuple(self): assert not self.isspecialised((42,43,44)) From noreply at buildbot.pypy.org Thu Nov 10 10:47:34 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:34 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) make sure that tuple in test_getitem does not delegate Message-ID: <20111110094734.1F8898292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49089:b5c82fd6acf8 Date: 2011-11-05 16:59 +0100 http://bitbucket.org/pypy/pypy/changeset/b5c82fd6acf8/ Log: (antocuni, mwp) make sure that tuple in test_getitem does not delegate diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -112,7 +112,6 @@ raise OperationError(space.w_IndexError, space.wrap("tuple index out of range")) ''' - def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): length = w_tuple.length() start, stop, step, slicelength = w_slice.indices4(space, length) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObject,W_SpecialisedTupleObjectIntInt from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace -from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject +#from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject from pypy.interpreter import gateway @@ -41,7 +41,7 @@ assert len(list_w) == 1 assert self.space.eq_w(list_w[0], self.space.wrap(5)) -class AppTestW_SpecialisedTupleObject(AppTestW_TupleObject): +class AppTestW_SpecialisedTupleObject(object): def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) @@ -99,6 +99,13 @@ c = (2,4) assert hash(a) != hash(c) + def test_getitem(self): + t = self.forbid_delegation((5,3)) + assert (t)[0] == 5 + assert (t)[1] == 3 + assert (t)[-1] == 3 + assert (t)[-2] == 5 + raises(IndexError, "t[2]") From noreply at buildbot.pypy.org Thu Nov 10 10:47:35 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:35 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) make sure that tuple in test_eq does not delegate Message-ID: <20111110094735.4CAB18292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49090:f77f1a2e16f8 Date: 2011-11-05 17:12 +0100 http://bitbucket.org/pypy/pypy/changeset/f77f1a2e16f8/ Log: (antocuni, mwp) make sure that tuple in test_eq does not delegate diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -139,10 +139,10 @@ def mul__ANY_SpecialisedTuple(space, w_times, w_tuple): return mul_specialisedtuple_times(space, w_tuple, w_times) - +''' def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) -''' + def hash__SpecialisedTuple(space, w_tuple): return w_tuple.hash(space) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -84,12 +84,12 @@ assert (1,2,3)[0:2:1] == (1,2) def test_eq(self): - a = (1,2) + a = self.forbid_delegation((1,2)) b = (1,2) assert a == b - + c = (1,3,2) - assert a != c + assert not a == c def test_hash(self): a = (1,2) @@ -111,3 +111,4 @@ + From noreply at buildbot.pypy.org Thu Nov 10 10:47:36 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:36 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) check eq delegates when necessary Message-ID: <20111110094736.79AF48292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49091:bf6c561e2cdd Date: 2011-11-05 17:35 +0100 http://bitbucket.org/pypy/pypy/changeset/bf6c561e2cdd/ Log: (antocuni, mwp) check eq delegates when necessary diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -88,8 +88,15 @@ b = (1,2) assert a == b - c = (1,3,2) - assert not a == c + def test_eq_can_delegate(self): + a = (1,2) + b = (1,3,2) + assert not a == b + + values = [2, 2L, 2.0, 1, 1L, 1.0] + for x in values: + for y in values: + assert ((1,2) == (x,y)) == (1 == x and 2 == y) def test_hash(self): a = (1,2) From noreply at buildbot.pypy.org Thu Nov 10 10:47:37 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:37 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) improve eq test and kill commented code Message-ID: <20111110094737.A80D28292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49092:22205add64b8 Date: 2011-11-05 17:45 +0100 http://bitbucket.org/pypy/pypy/changeset/22205add64b8/ Log: (antocuni, mwp) improve eq test and kill commented code diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -111,35 +111,7 @@ except IndexError: raise OperationError(space.w_IndexError, space.wrap("tuple index out of range")) -''' -def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): - length = w_tuple.length() - start, stop, step, slicelength = w_slice.indices4(space, length) - assert slicelength >= 0 - subitems = [None] * slicelength - for i in range(slicelength): - subitems[i] = w_tuple.getitem(start) - start += step - return space.newtuple(subitems) -def mul_specialisedtuple_times(space, w_tuple, w_times): - try: - times = space.getindex_w(w_times, space.w_OverflowError) - except OperationError, e: - if e.match(space, space.w_TypeError): - raise FailedToImplement - raise - if times == 1 and space.type(w_tuple) == space.w_tuple: - return w_tuple - items = w_tuple.tolist() - return space.newtuple(items * times) - -def mul__SpecialisedTuple_ANY(space, w_tuple, w_times): - return mul_specialisedtuple_times(space, w_tuple, w_times) - -def mul__ANY_SpecialisedTuple(space, w_times, w_tuple): - return mul_specialisedtuple_times(space, w_tuple, w_times) -''' def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -88,6 +88,9 @@ b = (1,2) assert a == b + c = (2,1) + assert not a == c + def test_eq_can_delegate(self): a = (1,2) b = (1,3,2) From noreply at buildbot.pypy.org Thu Nov 10 10:47:38 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:38 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) spelling error Message-ID: <20111110094738.D46A78292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49093:baa037667a7f Date: 2011-11-05 18:38 +0100 http://bitbucket.org/pypy/pypy/changeset/baa037667a7f/ Log: (antocuni, mwp) spelling error diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -13,7 +13,7 @@ class NotSpecialised(Exception): pass -def makespecilisedtuple(space, list_w): +def makespecialisedtuple(space, list_w): if len(list_w) == 2: w_item0 = list_w[0] w_item1 = list_w[1] diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -16,9 +16,9 @@ from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withspecialisedtuple: - from specialisedtupleobject import makespecilisedtuple, NotSpecialised + from specialisedtupleobject import makespecialisedtuple, NotSpecialised try: - return makespecilisedtuple(space, list_w) + return makespecialisedtuple(space, list_w) except NotSpecialised: pass From noreply at buildbot.pypy.org Thu Nov 10 10:47:40 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:40 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) create specialisedtuple class dynamically Message-ID: <20111110094740.0D5298292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49094:3fa4737cc2a4 Date: 2011-11-05 19:01 +0100 http://bitbucket.org/pypy/pypy/changeset/3fa4737cc2a4/ Log: (antocuni, mwp) create specialisedtuple class dynamically diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -25,6 +25,7 @@ class W_SpecialisedTupleObject(W_Object): from pypy.objspace.std.tupletype import tuple_typedef as typedef + __slots__ = [] def tolist(self): raise NotImplementedError @@ -50,50 +51,55 @@ def unwrap(w_tuple, space): return tuple(self.tolist) - -class W_SpecialisedTupleObjectIntInt(W_SpecialisedTupleObject): - def __init__(self, space, intval0, intval1): - assert isinstance(intval0, int) - assert isinstance(intval1, int) - self.space = space - self.intval0 = intval0 - self.intval1 = intval1 - - def length(self): - return 2 - - def tolist(self): - return [self.space.wrap(self.intval0), self.space.wrap(self.intval1)] - - def hash(self, space): - mult = 1000003 - x = 0x345678 - z = 2 - for intval in [self.intval0, self.intval1]: - # we assume that hash value of an integer is the integer itself - # look at intobject.py hash__Int to check this! - y = intval - x = (x ^ y) * mult - z -= 1 - mult += 82520 + z + z - x += 97531 - return space.wrap(intmask(x)) - - def eq(self, space, w_other): - if w_other.length() != 2: - return space.w_False - if self.intval0 == w_other.intval0 and self.intval1 == w_other.intval1: #xxx - return space.w_True - else: - return space.w_False - - def getitem(self, index): - if index == 0: - return self.space.wrap(self.intval0) - if index == 1: - return self.space.wrap(self.intval1) - raise IndexError - +def make_specialised_class(type0, type1): + class cls(W_SpecialisedTupleObject): + def __init__(self, space, intval0, intval1): + assert isinstance(intval0, int) + assert isinstance(intval1, int) + self.space = space + self.intval0 = intval0 + self.intval1 = intval1 + + def length(self): + return 2 + + def tolist(self): + return [self.space.wrap(self.intval0), self.space.wrap(self.intval1)] + + def hash(self, space): + mult = 1000003 + x = 0x345678 + z = 2 + for intval in [self.intval0, self.intval1]: + # we assume that hash value of an integer is the integer itself + # look at intobject.py hash__Int to check this! + y = intval + x = (x ^ y) * mult + z -= 1 + mult += 82520 + z + z + x += 97531 + return space.wrap(intmask(x)) + + def eq(self, space, w_other): + if w_other.length() != 2: + return space.w_False + if self.intval0 == w_other.intval0 and self.intval1 == w_other.intval1: #xxx + return space.w_True + else: + return space.w_False + + def getitem(self, index): + if index == 0: + return self.space.wrap(self.intval0) + if index == 1: + return self.space.wrap(self.intval1) + raise IndexError + cls.__name__ = 'W_SpecialisedTupleObjectIntInt' + return cls + + +W_SpecialisedTupleObjectIntInt = make_specialised_class(int,int) + registerimplementation(W_SpecialisedTupleObject) def delegate_SpecialisedTuple2Tuple(space, w_specialised): From noreply at buildbot.pypy.org Thu Nov 10 10:47:41 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:41 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) pass new class name as parameter to creator and tidy locals Message-ID: <20111110094741.3BDB68292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49095:a87c53f9950c Date: 2011-11-05 22:21 +0100 http://bitbucket.org/pypy/pypy/changeset/a87c53f9950c/ Log: (mwp) pass new class name as parameter to creator and tidy locals diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -6,9 +6,11 @@ from pypy.objspace.std.floatobject import W_FloatObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice +from pypy.objspace.std.tupleobject import W_TupleObject from pypy.objspace.std import slicetype from pypy.rlib.rarithmetic import intmask -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.rlib.objectmodel import compute_hash + class NotSpecialised(Exception): pass @@ -51,29 +53,27 @@ def unwrap(w_tuple, space): return tuple(self.tolist) -def make_specialised_class(type0, type1): +def make_specialised_class(class_name, type0, type1): class cls(W_SpecialisedTupleObject): - def __init__(self, space, intval0, intval1): - assert isinstance(intval0, int) - assert isinstance(intval1, int) + def __init__(self, space, val0, val1): + assert isinstance(val0, type0) + assert isinstance(val1, type1) self.space = space - self.intval0 = intval0 - self.intval1 = intval1 + self.val0 = val0 + self.val1 = val1 def length(self): return 2 def tolist(self): - return [self.space.wrap(self.intval0), self.space.wrap(self.intval1)] + return [self.space.wrap(self.val0), self.space.wrap(self.val1)] def hash(self, space): mult = 1000003 x = 0x345678 z = 2 - for intval in [self.intval0, self.intval1]: - # we assume that hash value of an integer is the integer itself - # look at intobject.py hash__Int to check this! - y = intval + for val in [self.val0, self.val1]: + y = compute_hash(val) x = (x ^ y) * mult z -= 1 mult += 82520 + z + z @@ -83,22 +83,22 @@ def eq(self, space, w_other): if w_other.length() != 2: return space.w_False - if self.intval0 == w_other.intval0 and self.intval1 == w_other.intval1: #xxx + if self.val0 == w_other.val0 and self.val1 == w_other.val1: #xxx return space.w_True else: return space.w_False def getitem(self, index): if index == 0: - return self.space.wrap(self.intval0) + return self.space.wrap(self.val0) if index == 1: - return self.space.wrap(self.intval1) + return self.space.wrap(self.val1) raise IndexError - cls.__name__ = 'W_SpecialisedTupleObjectIntInt' + cls.__name__ = class_name return cls -W_SpecialisedTupleObjectIntInt = make_specialised_class(int,int) +W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', int,int) registerimplementation(W_SpecialisedTupleObject) From noreply at buildbot.pypy.org Thu Nov 10 10:47:42 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:42 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) add test for creating float-float-tuples Message-ID: <20111110094742.6DBFB8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49096:11fc8ffa21f8 Date: 2011-11-05 22:24 +0100 http://bitbucket.org/pypy/pypy/changeset/11fc8ffa21f8/ Log: (mwp) add test for creating float-float-tuples diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -52,16 +52,15 @@ return w_tuple cls.w_forbid_delegation = cls.space.wrap(gateway.interp2app(forbid_delegation)) - - def w_isspecialised(self, obj): import __pypy__ return "SpecialisedTuple" in __pypy__.internal_repr(obj) - + - - def test_specialisedtuple(self): + def test_createspecialisedtuple(self): assert self.isspecialised((42,43)) + assert self.isspecialised((4.2,4.3)) + assert self.isspecialised((1.0,2.0)) def test_len(self): t = self.forbid_delegation((42,43)) @@ -69,7 +68,9 @@ def test_notspecialisedtuple(self): assert not self.isspecialised((42,43,44)) - + assert not self.isspecialised((1,1.5)) + assert not self.isspecialised((1,1.0)) + def test_slicing_to_specialised(self): assert self.isspecialised((1, 2, 3)[0:2]) assert self.isspecialised((1, '2', 3)[0:5:2]) From noreply at buildbot.pypy.org Thu Nov 10 10:47:43 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:43 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) refactor test for correct hashes and extend create and eq tests Message-ID: <20111110094743.9B1988292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49097:68b32cfbccbd Date: 2011-11-06 11:22 +0100 http://bitbucket.org/pypy/pypy/changeset/68b32cfbccbd/ Log: (mwp) refactor test for correct hashes and extend create and eq tests diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -22,17 +22,23 @@ def test_hash_against_normal_tuple(self): normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) - w_tuple = normalspace.newtuple([self.space.wrap(1), self.space.wrap(2)]) + specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - w_specialisedtuple = specialisedspace.newtuple([self.space.wrap(1), self.space.wrap(2)]) + def hash_test(values): + values_w = [self.space.wrap(value) for value in values] + w_tuple = normalspace.newtuple(values_w) + w_specialisedtuple = specialisedspace.newtuple(values_w) + + assert isinstance(w_specialisedtuple, W_SpecialisedTupleObject) + assert isinstance(w_tuple, W_TupleObject) + assert not normalspace.is_true(normalspace.eq(w_tuple, w_specialisedtuple)) + assert specialisedspace.is_true(specialisedspace.eq(w_tuple, w_specialisedtuple)) + assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) - assert isinstance(w_specialisedtuple, W_SpecialisedTupleObject) - assert isinstance(w_tuple, W_TupleObject) - assert not normalspace.is_true(normalspace.eq(w_tuple, w_specialisedtuple)) - assert specialisedspace.is_true(specialisedspace.eq(w_tuple, w_specialisedtuple)) - assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) - + hash_test([1,2]) + hash_test([1.5,2.8]) + hash_test(['arbitrary','strings']) + def test_setitem(self): py.test.skip('skip for now, only needed for cpyext') w_specialisedtuple = self.space.newtuple([self.space.wrap(1)]) @@ -61,6 +67,7 @@ assert self.isspecialised((42,43)) assert self.isspecialised((4.2,4.3)) assert self.isspecialised((1.0,2.0)) + assert self.isspecialised(('a','b')) def test_len(self): t = self.forbid_delegation((42,43)) @@ -92,6 +99,12 @@ c = (2,1) assert not a == c + d = (1.0,2.0) + assert a == d + + e = ('r','s') + assert not a == e + def test_eq_can_delegate(self): a = (1,2) b = (1,3,2) From noreply at buildbot.pypy.org Thu Nov 10 10:47:44 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:44 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) create Classes for float-float and str-str specialisations Message-ID: <20111110094744.C8D0A8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49098:d693552c9046 Date: 2011-11-06 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/d693552c9046/ Log: (mwp) create Classes for float-float and str-str specialisations diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -23,6 +23,14 @@ val0 = space.int_w(w_item0) val1 = space.int_w(w_item1) return W_SpecialisedTupleObjectIntInt(space, val0, val1) + if space.type(w_item0) == space.w_float and space.type(w_item1) == space.w_float: + val0 = space.float_w(w_item0) + val1 = space.float_w(w_item1) + return W_SpecialisedTupleObjectFloatFloat(space, val0, val1) + if space.type(w_item0) == space.w_str and space.type(w_item1) == space.w_str: + val0 = space.str_w(w_item0) + val1 = space.str_w(w_item1) + return W_SpecialisedTupleObjectStrStr(space, val0, val1) raise NotSpecialised class W_SpecialisedTupleObject(W_Object): @@ -98,8 +106,10 @@ return cls -W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', int,int) - +W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', int,int) +W_SpecialisedTupleObjectFloatFloat = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', float,float) +W_SpecialisedTupleObjectStrStr = make_specialised_class('W_SpecialisedTupleObjectStrStr', str, str) + registerimplementation(W_SpecialisedTupleObject) def delegate_SpecialisedTuple2Tuple(space, w_specialised): From noreply at buildbot.pypy.org Thu Nov 10 10:47:45 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:45 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) add tests for non-delegated neq and ordering Message-ID: <20111110094745.F3F998292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49099:ea39171f067d Date: 2011-11-06 13:43 +0100 http://bitbucket.org/pypy/pypy/changeset/ea39171f067d/ Log: (mwp) add tests for non-delegated neq and ordering diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -115,6 +115,32 @@ for y in values: assert ((1,2) == (x,y)) == (1 == x and 2 == y) + def test_neq(self): + a = self.forbid_delegation((1,2)) + b = (1,2) + assert not a != b + + c = (2,1) + assert a != c + + d = (1.0,2.0) + assert a != d + + e = ('r','s') + assert a != e + + def test_ordering (self): + a = self.forbid_delegation((1,2)) + assert a < (2,2) + assert a <= (1,2) + assert a >= (1,2) + assert a > (0,2) + + assert a < (1,3) + assert a <= (1,2) + assert a >= (1,2) + assert a > (1,1) + def test_hash(self): a = (1,2) b = (1,2) From noreply at buildbot.pypy.org Thu Nov 10 10:47:47 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:47 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (antocuni, mwp) fix repr in tool/pytest/appsupport.py in case an exception is raised Message-ID: <20111110094747.2CDA88292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49100:9c629249bacd Date: 2011-11-06 15:33 +0100 http://bitbucket.org/pypy/pypy/changeset/9c629249bacd/ Log: (antocuni, mwp) fix repr in tool/pytest/appsupport.py in case an exception is raised diff --git a/pypy/tool/pytest/appsupport.py b/pypy/tool/pytest/appsupport.py --- a/pypy/tool/pytest/appsupport.py +++ b/pypy/tool/pytest/appsupport.py @@ -63,7 +63,10 @@ exec_ = eval def repr(self, w_value): - return self.space.unwrap(self.space.repr(w_value)) + try: + return self.space.unwrap(self.space.repr(w_value)) + except Exception, e: + return ""%e def is_true(self, w_value): return self.space.is_true(w_value) From noreply at buildbot.pypy.org Thu Nov 10 10:47:48 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:48 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) add code for ordering of specialised 2-tuples Message-ID: <20111110094748.57C058292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49101:59211f8aac41 Date: 2011-11-06 17:48 +0100 http://bitbucket.org/pypy/pypy/changeset/59211f8aac41/ Log: (mwp) add code for ordering of specialised 2-tuples diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -11,7 +11,6 @@ from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import compute_hash - class NotSpecialised(Exception): pass @@ -96,6 +95,59 @@ else: return space.w_False + def ne(self, space, w_other): + if w_other.length() != 2: + return space.w_True + if self.val0 != w_other.val0: + return space.w_True + if self.val1 != w_other.val1: + return space.w_True + return space.w_False + + def lt(self, space, w_other): + assert self.length() <= 2 + ncmp = min(self.length(), w_other.length()) + if ncmp >= 1: + if not self.val0 == w_other.val0: + return space.newbool(self.val0 < w_other.val0) + if ncmp >= 2: + if not self.val1 == w_other.val1: + return space.newbool(self.val1 < w_other.val1) + return space.newbool(self.length() < w_other.length()) + + def le(self, space, w_other): + assert self.length() <= 2 + ncmp = min(self.length(), w_other.length()) + if ncmp >= 1: + if not self.val0 == w_other.val0: + return space.newbool(self.val0 <= w_other.val0) + if ncmp >= 2: + if not self.val1 == w_other.val1: + return space.newbool(self.val1 <= w_other.val1) + return space.newbool(self.length() <= w_other.length()) + + def ge(self, space, w_other): + assert self.length() <= 2 + ncmp = min(self.length(), w_other.length()) + if ncmp >= 1: + if not self.val0 == w_other.val0: + return space.newbool(self.val0 >= w_other.val0) + if ncmp >= 2: + if not self.val1 == w_other.val1: + return space.newbool(self.val1 >= w_other.val1) + return space.newbool(self.length() >= w_other.length()) + + def gt(self, space, w_other): + assert self.length() <= 2 + ncmp = min(self.length(), w_other.length()) + if ncmp >= 1: + if not self.val0 == w_other.val0: + return space.newbool(self.val0 > w_other.val0) + if ncmp >= 2: + if not self.val1 == w_other.val1: + return space.newbool(self.val1 > w_other.val1) + return space.newbool(self.length() > w_other.length()) + def getitem(self, index): if index == 0: return self.space.wrap(self.val0) @@ -131,6 +183,21 @@ def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) +def ne__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.ne(space, w_tuple2) + +def lt__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.lt(space, w_tuple2) + +def le__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.le(space, w_tuple2) + +def ge__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.ge(space, w_tuple2) + +def gt__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): + return w_tuple1.gt(space, w_tuple2) + def hash__SpecialisedTuple(space, w_tuple): return w_tuple.hash(space) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -117,29 +117,30 @@ def test_neq(self): a = self.forbid_delegation((1,2)) - b = (1,2) + b = (1,) + b = b+(2,) assert not a != b - c = (2,1) + c = (1,3) assert a != c - d = (1.0,2.0) - assert a != d - - e = ('r','s') - assert a != e - - def test_ordering (self): + def test_ordering(self): a = self.forbid_delegation((1,2)) assert a < (2,2) - assert a <= (1,2) + assert a < (1,3) + assert not a < (1,2) + + assert a <= (2,2) + assert a <= (1,2) + assert not a <= (1,1) + + assert a >= (0,2) assert a >= (1,2) - assert a > (0,2) + assert not a >= (1,3) - assert a < (1,3) - assert a <= (1,2) - assert a >= (1,2) - assert a > (1,1) + assert a > (0,2) + assert a > (1,1) + assert not a > (1,3) def test_hash(self): a = (1,2) From noreply at buildbot.pypy.org Thu Nov 10 10:47:49 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:49 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) extend hash test to check floats which happen to be integers Message-ID: <20111110094749.81BC68292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49102:571038b4cd14 Date: 2011-11-07 10:55 +0100 http://bitbucket.org/pypy/pypy/changeset/571038b4cd14/ Log: (mwp) extend hash test to check floats which happen to be integers diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -37,6 +37,7 @@ hash_test([1,2]) hash_test([1.5,2.8]) + hash_test([1.0,2.0]) hash_test(['arbitrary','strings']) def test_setitem(self): From noreply at buildbot.pypy.org Thu Nov 10 10:47:50 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:50 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) fix hash so it deals with flaots that are ints properly Message-ID: <20111110094750.AC0018292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49103:12b18053910f Date: 2011-11-07 11:31 +0100 http://bitbucket.org/pypy/pypy/changeset/12b18053910f/ Log: (mwp) fix hash so it deals with flaots that are ints properly diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -80,7 +80,8 @@ x = 0x345678 z = 2 for val in [self.val0, self.val1]: - y = compute_hash(val) +# y = compute_hash(val) + y = space.int_w(space.hash(space.wrap(val))) x = (x ^ y) * mult z -= 1 mult += 82520 + z + z From noreply at buildbot.pypy.org Thu Nov 10 10:47:51 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:51 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) replace specific code to create SpecialisedTupleObjects with generic Message-ID: <20111110094751.D9B048292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49104:41208dc819b4 Date: 2011-11-07 11:54 +0100 http://bitbucket.org/pypy/pypy/changeset/41208dc819b4/ Log: (mwp) replace specific code to create SpecialisedTupleObjects with generic diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -14,23 +14,27 @@ class NotSpecialised(Exception): pass -def makespecialisedtuple(space, list_w): - if len(list_w) == 2: - w_item0 = list_w[0] - w_item1 = list_w[1] - if space.type(w_item0) == space.w_int and space.type(w_item1) == space.w_int: - val0 = space.int_w(w_item0) - val1 = space.int_w(w_item1) - return W_SpecialisedTupleObjectIntInt(space, val0, val1) - if space.type(w_item0) == space.w_float and space.type(w_item1) == space.w_float: - val0 = space.float_w(w_item0) - val1 = space.float_w(w_item1) - return W_SpecialisedTupleObjectFloatFloat(space, val0, val1) - if space.type(w_item0) == space.w_str and space.type(w_item1) == space.w_str: - val0 = space.str_w(w_item0) - val1 = space.str_w(w_item1) - return W_SpecialisedTupleObjectStrStr(space, val0, val1) - raise NotSpecialised +_specialisations = [] + +def makespecialisedtuple(space, list_w): + w_type_of = {int:space.w_int, float:space.w_float, str:space.w_str} + unwrap_as = {int:space.int_w, float:space.float_w, str:space.str_w} + + def try_specialisation((specialisedClass, paramtypes)): + if len(list_w) != len(paramtypes): + raise NotSpecialised + for param,paramtype in zip(list_w,paramtypes): + if space.type(param) != w_type_of[paramtype]: + raise NotSpecialised + unwrappedparams = [unwrap_as[paramtype](param) for param,paramtype in zip(list_w,paramtypes)] + return specialisedClass(space, *unwrappedparams) + + for spec in _specialisations: + try: + return try_specialisation(spec) + except NotSpecialised: + pass + raise NotSpecialised class W_SpecialisedTupleObject(W_Object): from pypy.objspace.std.tupletype import tuple_typedef as typedef @@ -156,6 +160,7 @@ return self.space.wrap(self.val1) raise IndexError cls.__name__ = class_name + _specialisations.append((cls,(type0,type1))) return cls From noreply at buildbot.pypy.org Thu Nov 10 10:47:53 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:53 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) use a tuple of types as parameter to make_specialised_class Message-ID: <20111110094753.13A828292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49105:538c174c8197 Date: 2011-11-07 13:05 +0100 http://bitbucket.org/pypy/pypy/changeset/538c174c8197/ Log: (mwp) use a tuple of types as parameter to make_specialised_class diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -64,11 +64,13 @@ def unwrap(w_tuple, space): return tuple(self.tolist) -def make_specialised_class(class_name, type0, type1): + +def make_specialised_class(class_name, typelist): class cls(W_SpecialisedTupleObject): def __init__(self, space, val0, val1): - assert isinstance(val0, type0) - assert isinstance(val1, type1) + assert len(typelist) == 2 + assert isinstance(val0, typelist[0]) + assert isinstance(val1, typelist[1]) self.space = space self.val0 = val0 self.val1 = val1 @@ -160,13 +162,13 @@ return self.space.wrap(self.val1) raise IndexError cls.__name__ = class_name - _specialisations.append((cls,(type0,type1))) + _specialisations.append((cls,typelist)) return cls -W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', int,int) -W_SpecialisedTupleObjectFloatFloat = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', float,float) -W_SpecialisedTupleObjectStrStr = make_specialised_class('W_SpecialisedTupleObjectStrStr', str, str) +W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', (int,int)) +W_SpecialisedTupleObjectFloatFloat = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', (float,float)) +W_SpecialisedTupleObjectStrStr = make_specialised_class('W_SpecialisedTupleObjectStrStr', (str, str)) registerimplementation(W_SpecialisedTupleObject) From noreply at buildbot.pypy.org Thu Nov 10 10:47:54 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:54 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) use unrolling_iterable to generate access to tuple elements Message-ID: <20111110094754.3E25C8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49106:7e8c19d6251f Date: 2011-11-07 17:01 +0100 http://bitbucket.org/pypy/pypy/changeset/7e8c19d6251f/ Log: (mwp) use unrolling_iterable to generate access to tuple elements diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -1,15 +1,10 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.inttype import wrapint -from pypy.objspace.std.intobject import W_IntObject -from pypy.objspace.std.floatobject import W_FloatObject -from pypy.objspace.std.stringobject import W_StringObject -from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std.tupleobject import W_TupleObject -from pypy.objspace.std import slicetype from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.unroll import unrolling_iterable class NotSpecialised(Exception): pass @@ -66,101 +61,66 @@ def make_specialised_class(class_name, typelist): + iter_n = unrolling_iterable(range(len(typelist))) class cls(W_SpecialisedTupleObject): - def __init__(self, space, val0, val1): - assert len(typelist) == 2 - assert isinstance(val0, typelist[0]) - assert isinstance(val1, typelist[1]) + def __init__(self, space, *values): + assert len(typelist) == len(values) + for i in iter_n: + assert isinstance(values[i], typelist[i]) self.space = space - self.val0 = val0 - self.val1 = val1 + for i in iter_n: + setattr(self, 'value%s' % i, values[i]) def length(self): - return 2 + return len(typelist) def tolist(self): - return [self.space.wrap(self.val0), self.space.wrap(self.val1)] + return [self.space.wrap(getattr(self, 'value%s' % i)) for i in iter_n] def hash(self, space): mult = 1000003 x = 0x345678 z = 2 - for val in [self.val0, self.val1]: + for i in iter_n: # y = compute_hash(val) - y = space.int_w(space.hash(space.wrap(val))) + y = space.int_w(space.hash(space.wrap(getattr(self, 'value%s' % i)))) x = (x ^ y) * mult z -= 1 mult += 82520 + z + z x += 97531 return space.wrap(intmask(x)) + def _eq(self, w_other): + if w_other.length() != len(typelist): + return False + for i in iter_n: + if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): + return False + else: + return True + def eq(self, space, w_other): - if w_other.length() != 2: - return space.w_False - if self.val0 == w_other.val0 and self.val1 == w_other.val1: #xxx - return space.w_True - else: - return space.w_False + return space.newbool(self._eq(w_other)) def ne(self, space, w_other): - if w_other.length() != 2: - return space.w_True - if self.val0 != w_other.val0: - return space.w_True - if self.val1 != w_other.val1: - return space.w_True - return space.w_False + return space.newbool(not self._eq(w_other)) - def lt(self, space, w_other): - assert self.length() <= 2 + def _compare(self, compare_op, w_other): ncmp = min(self.length(), w_other.length()) - if ncmp >= 1: - if not self.val0 == w_other.val0: - return space.newbool(self.val0 < w_other.val0) - if ncmp >= 2: - if not self.val1 == w_other.val1: - return space.newbool(self.val1 < w_other.val1) - return space.newbool(self.length() < w_other.length()) - - def le(self, space, w_other): - assert self.length() <= 2 - ncmp = min(self.length(), w_other.length()) - if ncmp >= 1: - if not self.val0 == w_other.val0: - return space.newbool(self.val0 <= w_other.val0) - if ncmp >= 2: - if not self.val1 == w_other.val1: - return space.newbool(self.val1 <= w_other.val1) - return space.newbool(self.length() <= w_other.length()) - - def ge(self, space, w_other): - assert self.length() <= 2 - ncmp = min(self.length(), w_other.length()) - if ncmp >= 1: - if not self.val0 == w_other.val0: - return space.newbool(self.val0 >= w_other.val0) - if ncmp >= 2: - if not self.val1 == w_other.val1: - return space.newbool(self.val1 >= w_other.val1) - return space.newbool(self.length() >= w_other.length()) - - def gt(self, space, w_other): - assert self.length() <= 2 - ncmp = min(self.length(), w_other.length()) - if ncmp >= 1: - if not self.val0 == w_other.val0: - return space.newbool(self.val0 > w_other.val0) - if ncmp >= 2: - if not self.val1 == w_other.val1: - return space.newbool(self.val1 > w_other.val1) - return space.newbool(self.length() > w_other.length()) - + for i in iter_n: + if ncmp > i: + l_val = getattr(self, 'value%s' % i) + r_val = getattr(w_other, 'value%s' % i) + if l_val != r_val: + return compare_op(l_val, r_val) + return compare_op(self.length(), w_other.length()) + def getitem(self, index): - if index == 0: - return self.space.wrap(self.val0) - if index == 1: - return self.space.wrap(self.val1) + for i in iter_n: + if index == i: + return self.space.wrap(getattr(self, 'value%s' % i)) raise IndexError + cls.__name__ = class_name _specialisations.append((cls,typelist)) return cls @@ -194,17 +154,19 @@ def ne__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.ne(space, w_tuple2) +from operator import lt, le, ge, gt + def lt__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): - return w_tuple1.lt(space, w_tuple2) + return space.newbool(w_tuple1._compare(lt, w_tuple2)) def le__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): - return w_tuple1.le(space, w_tuple2) + return space.newbool(w_tuple1._compare(le, w_tuple2)) def ge__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): - return w_tuple1.ge(space, w_tuple2) + return space.newbool(w_tuple1._compare(ge, w_tuple2)) def gt__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): - return w_tuple1.gt(space, w_tuple2) + return space.newbool(w_tuple1._compare(gt, w_tuple2)) def hash__SpecialisedTuple(space, w_tuple): return w_tuple.hash(space) From noreply at buildbot.pypy.org Thu Nov 10 10:47:55 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:55 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) add tests and code for some specialised 3-tuples + add slice multimethod Message-ID: <20111110094755.68CE68292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49107:8cbac70700fc Date: 2011-11-07 19:30 +0100 http://bitbucket.org/pypy/pypy/changeset/8cbac70700fc/ Log: (mwp) add tests and code for some specialised 3-tuples + add slice multimethod diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -2,6 +2,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import compute_hash from pypy.rlib.unroll import unrolling_iterable @@ -127,8 +128,10 @@ W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', (int,int)) +W_SpecialisedTupleObjectIntIntInt = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', (int,int,int)) W_SpecialisedTupleObjectFloatFloat = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', (float,float)) W_SpecialisedTupleObjectStrStr = make_specialised_class('W_SpecialisedTupleObjectStrStr', (str, str)) +W_SpecialisedTupleObjectIntFloatStr= make_specialised_class('W_SpecialisedTupleObjectStrStr', (int, float, str)) registerimplementation(W_SpecialisedTupleObject) @@ -148,6 +151,16 @@ raise OperationError(space.w_IndexError, space.wrap("tuple index out of range")) +def getitem__SpecialisedTuple_Slice(space, w_tuple, w_slice): + length = w_tuple.length() + start, stop, step, slicelength = w_slice.indices4(space, length) + assert slicelength >= 0 + subitems = [None] * slicelength + for i in range(slicelength): + subitems[i] = w_tuple.getitem(start) + start += step + return space.newtuple(subitems) + def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -75,7 +75,7 @@ assert len(t) == 2 def test_notspecialisedtuple(self): - assert not self.isspecialised((42,43,44)) + assert not self.isspecialised((42,43,44,45)) assert not self.isspecialised((1,1.5)) assert not self.isspecialised((1,1.0)) @@ -92,7 +92,7 @@ def test_slicing_from_specialised(self): assert (1,2,3)[0:2:1] == (1,2) - def test_eq(self): + def test_eq_no_delegation(self): a = self.forbid_delegation((1,2)) b = (1,2) assert a == b @@ -110,7 +110,7 @@ a = (1,2) b = (1,3,2) assert not a == b - + values = [2, 2L, 2.0, 1, 1L, 1.0] for x in values: for y in values: @@ -145,7 +145,7 @@ def test_hash(self): a = (1,2) - b = (1,2) + b = (1,) + (2,) # else a and b refer to same constant assert hash(a) == hash(b) c = (2,4) @@ -159,8 +159,26 @@ assert (t)[-2] == 5 raises(IndexError, "t[2]") + def test_three_tuples(self): + if not self.isspecialised((1,2,3)): + skip('3-tuples of ints are not specialised, so skip specific tests on them') + a = self.forbid_delegation((1,2)) + b = self.forbid_delegation((1,2,3)) + c = (1,) + d = c + (2,3) + assert not a == b + assert not b == a + assert a < b + assert b > a + assert self.isspecialised(d) + assert b == d + assert b <= d - - - - + def test_mongrel(self): + a = self.forbid_delegation((1, 2.2, '333')) + if not self.isspecialised(a): + skip('my chosen kind of mixed type tuple is not specialised, so skip specific tests on them') + assert len(a) == 3 + assert a[0] == 1 and a[1] == 2.2 and a[2] == '333' + assert a == (1,) + (2.2,) + ('333',) + assert a < (1, 2.2, '334') From noreply at buildbot.pypy.org Thu Nov 10 10:47:56 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:56 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) add tests and code to generate name of each specialised class from its element types Message-ID: <20111110094756.966178292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49108:7a225189e654 Date: 2011-11-07 20:03 +0100 http://bitbucket.org/pypy/pypy/changeset/7a225189e654/ Log: (mwp) add tests and code to generate name of each specialised class from its element types diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -61,7 +61,7 @@ return tuple(self.tolist) -def make_specialised_class(class_name, typelist): +def make_specialised_class(typelist): iter_n = unrolling_iterable(range(len(typelist))) class cls(W_SpecialisedTupleObject): def __init__(self, space, *values): @@ -122,16 +122,16 @@ return self.space.wrap(getattr(self, 'value%s' % i)) raise IndexError - cls.__name__ = class_name + cls.__name__ = 'W_SpecialisedTupleObject' + ''.join([t.__name__.capitalize() for t in typelist]) _specialisations.append((cls,typelist)) return cls -W_SpecialisedTupleObjectIntInt = make_specialised_class('W_SpecialisedTupleObjectIntInt', (int,int)) -W_SpecialisedTupleObjectIntIntInt = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', (int,int,int)) -W_SpecialisedTupleObjectFloatFloat = make_specialised_class('W_SpecialisedTupleObjectFloatFloat', (float,float)) -W_SpecialisedTupleObjectStrStr = make_specialised_class('W_SpecialisedTupleObjectStrStr', (str, str)) -W_SpecialisedTupleObjectIntFloatStr= make_specialised_class('W_SpecialisedTupleObjectStrStr', (int, float, str)) +W_SpecialisedTupleObjectIntInt = make_specialised_class((int,int)) +W_SpecialisedTupleObjectIntIntInt = make_specialised_class((int,int,int)) +W_SpecialisedTupleObjectFloatFloat = make_specialised_class((float,float)) +W_SpecialisedTupleObjectStrStr = make_specialised_class((str, str)) +W_SpecialisedTupleObjectIntFloatStr= make_specialised_class((int, float, str)) registerimplementation(W_SpecialisedTupleObject) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -20,6 +20,10 @@ w_tuple = self.space.newtuple([self.space.wrap({})]) assert not isinstance(w_tuple, W_SpecialisedTupleObject) + def test_specialisedtupleclassname(self): + w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) + assert w_tuple.__class__.__name__ == 'W_SpecialisedTupleObjectIntInt' + def test_hash_against_normal_tuple(self): normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) From noreply at buildbot.pypy.org Thu Nov 10 10:47:57 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:57 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) reinstate inherited tuple tests, and add mul__SpecialisedTuple_ANY to fix identity test failure Message-ID: <20111110094757.C440A8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49109:2d6ad2a8c19c Date: 2011-11-08 11:44 +0100 http://bitbucket.org/pypy/pypy/changeset/2d6ad2a8c19c/ Log: (mwp) reinstate inherited tuple tests, and add mul__SpecialisedTuple_ANY to fix identity test failure diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -161,6 +161,24 @@ start += step return space.newtuple(subitems) +def mul_specialisedtuple_times(space, w_tuple, w_times): + try: + times = space.getindex_w(w_times, space.w_OverflowError) + except OperationError, e: + if e.match(space, space.w_TypeError): + raise FailedToImplement + raise + if times == 1 and space.type(w_tuple) == space.w_tuple: + return w_tuple + items = w_tuple.tolist() + return space.newtuple(items * times) + +def mul__SpecialisedTuple_ANY(space, w_tuple, w_times): + return mul_specialisedtuple_times(space, w_tuple, w_times) + +def mul__ANY_SpecialisedTuple(space, w_times, w_tuple): + return mul_specialisedtuple_times(space, w_tuple, w_times) + def eq__SpecialisedTuple_SpecialisedTuple(space, w_tuple1, w_tuple2): return w_tuple1.eq(space, w_tuple2) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.specialisedtupleobject import W_SpecialisedTupleObject,W_SpecialisedTupleObjectIntInt from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace -#from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject +from pypy.objspace.std.test.test_tupleobject import AppTestW_TupleObject from pypy.interpreter import gateway @@ -52,7 +52,7 @@ assert len(list_w) == 1 assert self.space.eq_w(list_w[0], self.space.wrap(5)) -class AppTestW_SpecialisedTupleObject(object): +class AppTestW_SpecialisedTupleObject(AppTestW_TupleObject): def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) From noreply at buildbot.pypy.org Thu Nov 10 10:47:58 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:47:58 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) make_specialised_class take a tuple, not a list - rename and assert Message-ID: <20111110094758.F19F18292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49110:fe28627958e5 Date: 2011-11-08 13:20 +0100 http://bitbucket.org/pypy/pypy/changeset/fe28627958e5/ Log: (mwp) make_specialised_class take a tuple, not a list - rename and assert diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -61,19 +61,20 @@ return tuple(self.tolist) -def make_specialised_class(typelist): - iter_n = unrolling_iterable(range(len(typelist))) +def make_specialised_class(typetuple): + assert type(typetuple) == tuple + iter_n = unrolling_iterable(range(len(typetuple))) class cls(W_SpecialisedTupleObject): def __init__(self, space, *values): - assert len(typelist) == len(values) + assert len(typetuple) == len(values) for i in iter_n: - assert isinstance(values[i], typelist[i]) + assert isinstance(values[i], typetuple[i]) self.space = space for i in iter_n: setattr(self, 'value%s' % i, values[i]) def length(self): - return len(typelist) + return len(typetuple) def tolist(self): return [self.space.wrap(getattr(self, 'value%s' % i)) for i in iter_n] @@ -92,7 +93,7 @@ return space.wrap(intmask(x)) def _eq(self, w_other): - if w_other.length() != len(typelist): + if w_other.length() != len(typetuple): return False for i in iter_n: if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): @@ -122,8 +123,8 @@ return self.space.wrap(getattr(self, 'value%s' % i)) raise IndexError - cls.__name__ = 'W_SpecialisedTupleObject' + ''.join([t.__name__.capitalize() for t in typelist]) - _specialisations.append((cls,typelist)) + cls.__name__ = 'W_SpecialisedTupleObject' + ''.join([t.__name__.capitalize() for t in typetuple]) + _specialisations.append((cls,typetuple)) return cls From noreply at buildbot.pypy.org Thu Nov 10 10:48:00 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:48:00 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) equality and order tests now check w_other is same specialisation to avoid mixed type comparisons Message-ID: <20111110094800.296558292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49111:dffb1034d10b Date: 2011-11-08 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/dffb1034d10b/ Log: (mwp) equality and order tests now check w_other is same specialisation to avoid mixed type comparisons diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.multimethod import FailedToImplement from pypy.objspace.std.tupleobject import W_TupleObject from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.rlib.rarithmetic import intmask @@ -93,8 +94,8 @@ return space.wrap(intmask(x)) def _eq(self, w_other): - if w_other.length() != len(typetuple): - return False + if not isinstance(w_other, cls): #so we will be sure we are comparing same types + raise FailedToImplement for i in iter_n: if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): return False @@ -108,6 +109,8 @@ return space.newbool(not self._eq(w_other)) def _compare(self, compare_op, w_other): + if not isinstance(w_other, cls): + raise FailedToImplement ncmp = min(self.length(), w_other.length()) for i in iter_n: if ncmp > i: diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -103,13 +103,7 @@ c = (2,1) assert not a == c - - d = (1.0,2.0) - assert a == d - - e = ('r','s') - assert not a == e - + def test_eq_can_delegate(self): a = (1,2) b = (1,3,2) @@ -166,14 +160,9 @@ def test_three_tuples(self): if not self.isspecialised((1,2,3)): skip('3-tuples of ints are not specialised, so skip specific tests on them') - a = self.forbid_delegation((1,2)) b = self.forbid_delegation((1,2,3)) c = (1,) d = c + (2,3) - assert not a == b - assert not b == a - assert a < b - assert b > a assert self.isspecialised(d) assert b == d assert b <= d From noreply at buildbot.pypy.org Thu Nov 10 10:48:01 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:48:01 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) move try_specialisation to be a class method of specialised class, and unroll specialisation loop Message-ID: <20111110094801.5AE048292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49112:06891784efa2 Date: 2011-11-08 15:52 +0100 http://bitbucket.org/pypy/pypy/changeset/06891784efa2/ Log: (mwp) move try_specialisation to be a class method of specialised class, and unroll specialisation loop diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -14,21 +14,9 @@ _specialisations = [] def makespecialisedtuple(space, list_w): - w_type_of = {int:space.w_int, float:space.w_float, str:space.w_str} - unwrap_as = {int:space.int_w, float:space.float_w, str:space.str_w} - - def try_specialisation((specialisedClass, paramtypes)): - if len(list_w) != len(paramtypes): - raise NotSpecialised - for param,paramtype in zip(list_w,paramtypes): - if space.type(param) != w_type_of[paramtype]: - raise NotSpecialised - unwrappedparams = [unwrap_as[paramtype](param) for param,paramtype in zip(list_w,paramtypes)] - return specialisedClass(space, *unwrappedparams) - - for spec in _specialisations: + for specialisedClass,paramtypes in unrolling_iterable(_specialisations): try: - return try_specialisation(spec) + return specialisedClass.try_specialisation(space, paramtypes, list_w) except NotSpecialised: pass raise NotSpecialised @@ -73,6 +61,22 @@ self.space = space for i in iter_n: setattr(self, 'value%s' % i, values[i]) + + @classmethod + def try_specialisation(specialisedClass, space, paramtypes, paramlist): + + + _w_type_of = {int:space.w_int, float:space.w_float, str:space.w_str} + _unwrap_as = {int:space.int_w, float:space.float_w, str:space.str_w} + + + if len(paramlist) != len(paramtypes): + raise NotSpecialised + for param,paramtype in zip(paramlist, paramtypes): + if space.type(param) != _w_type_of[paramtype]: + raise NotSpecialised + unwrappedparams = [_unwrap_as[paramtype](param) for param,paramtype in zip(paramlist, paramtypes)] + return specialisedClass(space, *unwrappedparams) def length(self): return len(typetuple) From noreply at buildbot.pypy.org Thu Nov 10 10:48:02 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:48:02 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) also support specialised tuples with 'any' type Message-ID: <20111110094802.8B1408292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49113:62c0151aba6b Date: 2011-11-09 13:04 +0100 http://bitbucket.org/pypy/pypy/changeset/62c0151aba6b/ Log: (mwp) also support specialised tuples with 'any' type diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -7,16 +7,20 @@ from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import compute_hash from pypy.rlib.unroll import unrolling_iterable +#from types import NoneType as ANY #deliberately misread this as 'None _specified_' + +class ANY(type): + pass class NotSpecialised(Exception): pass - + _specialisations = [] def makespecialisedtuple(space, list_w): - for specialisedClass,paramtypes in unrolling_iterable(_specialisations): + for specialisedClass in unrolling_iterable(_specialisations): try: - return specialisedClass.try_specialisation(space, paramtypes, list_w) + return specialisedClass.try_specialisation(space, list_w) except NotSpecialised: pass raise NotSpecialised @@ -46,40 +50,62 @@ def setitem(self, index, w_item): raise NotImplementedError - def unwrap(w_tuple, space): + def unwrap(self, space): return tuple(self.tolist) def make_specialised_class(typetuple): assert type(typetuple) == tuple - iter_n = unrolling_iterable(range(len(typetuple))) + + nValues = len(typetuple) + iter_n = unrolling_iterable(range(nValues)) + class cls(W_SpecialisedTupleObject): - def __init__(self, space, *values): - assert len(typetuple) == len(values) + def __init__(self, space, values): + assert len(values) == nValues for i in iter_n: - assert isinstance(values[i], typetuple[i]) + if typetuple[i] != ANY: + assert isinstance(values[i], typetuple[i]) self.space = space for i in iter_n: setattr(self, 'value%s' % i, values[i]) @classmethod - def try_specialisation(specialisedClass, space, paramtypes, paramlist): - - - _w_type_of = {int:space.w_int, float:space.w_float, str:space.w_str} - _unwrap_as = {int:space.int_w, float:space.float_w, str:space.str_w} - - - if len(paramlist) != len(paramtypes): + def try_specialisation(cls, space, paramlist): + if len(paramlist) != nValues: raise NotSpecialised - for param,paramtype in zip(paramlist, paramtypes): - if space.type(param) != _w_type_of[paramtype]: - raise NotSpecialised - unwrappedparams = [_unwrap_as[paramtype](param) for param,paramtype in zip(paramlist, paramtypes)] - return specialisedClass(space, *unwrappedparams) + for param,val_type in unrolling_iterable(zip(paramlist, typetuple)): + if val_type == int: + if space.type(param) != space.w_int: + raise NotSpecialised + elif val_type == float: + if space.type(param) != space.w_float: + raise NotSpecialised + elif val_type == str: + if space.type(param) != space.w_str: + raise NotSpecialised + elif val_type == ANY: + if space.type(param) == space.w_type:# else specialise (-1,int) somewhere and unwrap fails + raise NotSpecialised + pass + else: + raise NotSpecialised + unwrappedparams = [None] * nValues + for i in iter_n: + if typetuple[i] == int: + unwrappedparams[i] = space.int_w(paramlist[i]) + elif typetuple[i] == float: + unwrappedparams[i] = space.float_w(paramlist[i]) + elif typetuple[i] == str: + unwrappedparams[i] = space.str_w(paramlist[i]) + elif typetuple[i] == ANY: + unwrappedparams[i] = space.unwrap(paramlist[i])#xxx + else: + raise NotSpecialised + return cls(space, unwrappedparams) def length(self): - return len(typetuple) + return nValues def tolist(self): return [self.space.wrap(getattr(self, 'value%s' % i)) for i in iter_n] @@ -101,6 +127,8 @@ if not isinstance(w_other, cls): #so we will be sure we are comparing same types raise FailedToImplement for i in iter_n: + if typetuple[i] == ANY: + raise FailedToImplement if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): return False else: @@ -117,6 +145,8 @@ raise FailedToImplement ncmp = min(self.length(), w_other.length()) for i in iter_n: + if typetuple[i] == ANY: + raise FailedToImplement if ncmp > i: l_val = getattr(self, 'value%s' % i) r_val = getattr(w_other, 'value%s' % i) @@ -131,15 +161,17 @@ raise IndexError cls.__name__ = 'W_SpecialisedTupleObject' + ''.join([t.__name__.capitalize() for t in typetuple]) - _specialisations.append((cls,typetuple)) + _specialisations.append(cls) return cls W_SpecialisedTupleObjectIntInt = make_specialised_class((int,int)) +W_SpecialisedTupleObjectIntAny = make_specialised_class((int, ANY)) W_SpecialisedTupleObjectIntIntInt = make_specialised_class((int,int,int)) W_SpecialisedTupleObjectFloatFloat = make_specialised_class((float,float)) W_SpecialisedTupleObjectStrStr = make_specialised_class((str, str)) W_SpecialisedTupleObjectIntFloatStr= make_specialised_class((int, float, str)) +W_SpecialisedTupleObjectIntStrFloatAny= make_specialised_class((int, float, str, ANY)) registerimplementation(W_SpecialisedTupleObject) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -80,8 +80,8 @@ def test_notspecialisedtuple(self): assert not self.isspecialised((42,43,44,45)) - assert not self.isspecialised((1,1.5)) - assert not self.isspecialised((1,1.0)) + assert not self.isspecialised((1.5,2)) + assert not self.isspecialised((1.0,2)) def test_slicing_to_specialised(self): assert self.isspecialised((1, 2, 3)[0:2]) @@ -175,3 +175,15 @@ assert a[0] == 1 and a[1] == 2.2 and a[2] == '333' assert a == (1,) + (2.2,) + ('333',) assert a < (1, 2.2, '334') + + def test_mongrel_with_any(self): + a = self.forbid_delegation((1, 2.2, '333',[])) + b = (1, 2.2) + ('333', []) + if not self.isspecialised(a): + skip('my chosen kind of mixed type tuple is not specialised, so skip specific tests on them') + assert len(a) == 4 + assert a[0] == 1 and a[1] == 2.2 and a[2] == '333' and a[3] == [] + assert a != (1, 2.2, '334', []) +# assert b == a +# assert a == (1,) + (2.2,) + ('333',) + ([],) +# assert a < (1, 2.2, '334', {}) From noreply at buildbot.pypy.org Thu Nov 10 10:48:03 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:48:03 +0100 (CET) Subject: [pypy-commit] pypy SpecialisedTuples: (mwp) store ANY elements wrapped, and fix bug in hash test Message-ID: <20111110094803.B6CB482A87@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: SpecialisedTuples Changeset: r49114:947ee850430b Date: 2011-11-10 10:26 +0100 http://bitbucket.org/pypy/pypy/changeset/947ee850430b/ Log: (mwp) store ANY elements wrapped, and fix bug in hash test diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py --- a/pypy/objspace/std/specialisedtupleobject.py +++ b/pypy/objspace/std/specialisedtupleobject.py @@ -4,10 +4,10 @@ from pypy.objspace.std.multimethod import FailedToImplement from pypy.objspace.std.tupleobject import W_TupleObject from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice +from pypy.objspace.std.floatobject import _hash_float from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import compute_hash from pypy.rlib.unroll import unrolling_iterable -#from types import NoneType as ANY #deliberately misread this as 'None _specified_' class ANY(type): pass @@ -19,10 +19,10 @@ def makespecialisedtuple(space, list_w): for specialisedClass in unrolling_iterable(_specialisations): - try: - return specialisedClass.try_specialisation(space, list_w) - except NotSpecialised: - pass + try: + return specialisedClass.try_specialisation(space, list_w) + except NotSpecialised: + pass raise NotSpecialised class W_SpecialisedTupleObject(W_Object): @@ -51,7 +51,7 @@ raise NotImplementedError def unwrap(self, space): - return tuple(self.tolist) + return tuple(self._to_unwrapped_list()) def make_specialised_class(typetuple): @@ -62,6 +62,7 @@ class cls(W_SpecialisedTupleObject): def __init__(self, space, values): + print cls,cls.__class__, values assert len(values) == nValues for i in iter_n: if typetuple[i] != ANY: @@ -69,6 +70,7 @@ self.space = space for i in iter_n: setattr(self, 'value%s' % i, values[i]) + @classmethod def try_specialisation(cls, space, paramlist): @@ -85,8 +87,6 @@ if space.type(param) != space.w_str: raise NotSpecialised elif val_type == ANY: - if space.type(param) == space.w_type:# else specialise (-1,int) somewhere and unwrap fails - raise NotSpecialised pass else: raise NotSpecialised @@ -99,7 +99,7 @@ elif typetuple[i] == str: unwrappedparams[i] = space.str_w(paramlist[i]) elif typetuple[i] == ANY: - unwrappedparams[i] = space.unwrap(paramlist[i])#xxx + unwrappedparams[i] = paramlist[i] else: raise NotSpecialised return cls(space, unwrappedparams) @@ -108,15 +108,35 @@ return nValues def tolist(self): - return [self.space.wrap(getattr(self, 'value%s' % i)) for i in iter_n] + list_w = [None] * nValues + for i in iter_n: + if typetuple[i] == ANY: + list_w[i] = getattr(self, 'value%s' % i) + else: + list_w[i] = self.space.wrap(getattr(self, 'value%s' % i)) + return list_w + def _to_unwrapped_list(self): + list_w = [None] * nValues + for i in iter_n: + if typetuple[i] == ANY: + list_w[i] = space.unwrap(getattr(self, 'value%s' % i))#xxx + else: + list_w[i] = getattr(self, 'value%s' % i) + return list_w + def hash(self, space): mult = 1000003 x = 0x345678 z = 2 for i in iter_n: -# y = compute_hash(val) - y = space.int_w(space.hash(space.wrap(getattr(self, 'value%s' % i)))) + value = getattr(self, 'value%s' % i) + if typetuple[i] == ANY: + y = space.int_w(space.hash(value)) + elif typetuple[i] == float: # get correct hash for float which is an integer & other less frequent cases + y = _hash_float(space, value) + else: + y = compute_hash(value) x = (x ^ y) * mult z -= 1 mult += 82520 + z + z @@ -128,9 +148,11 @@ raise FailedToImplement for i in iter_n: if typetuple[i] == ANY: - raise FailedToImplement - if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): - return False + if not self.space.is_true(self.space.eq(getattr(self, 'value%s' % i), getattr(w_other, 'value%s' % i))): + return False + else: + if getattr(self, 'value%s' % i) != getattr(w_other, 'value%s' % i): + return False else: return True @@ -145,7 +167,7 @@ raise FailedToImplement ncmp = min(self.length(), w_other.length()) for i in iter_n: - if typetuple[i] == ANY: + if typetuple[i] == ANY:#like space.eq on wrapped or two params? raise FailedToImplement if ncmp > i: l_val = getattr(self, 'value%s' % i) @@ -157,7 +179,10 @@ def getitem(self, index): for i in iter_n: if index == i: - return self.space.wrap(getattr(self, 'value%s' % i)) + if typetuple[i] == ANY: + return getattr(self, 'value%s' % i) + else: + return self.space.wrap(getattr(self, 'value%s' % i)) raise IndexError cls.__name__ = 'W_SpecialisedTupleObject' + ''.join([t.__name__.capitalize() for t in typetuple]) @@ -170,6 +195,7 @@ W_SpecialisedTupleObjectIntIntInt = make_specialised_class((int,int,int)) W_SpecialisedTupleObjectFloatFloat = make_specialised_class((float,float)) W_SpecialisedTupleObjectStrStr = make_specialised_class((str, str)) +W_SpecialisedTupleObjectStrAny = make_specialised_class((str, ANY)) W_SpecialisedTupleObjectIntFloatStr= make_specialised_class((int, float, str)) W_SpecialisedTupleObjectIntStrFloatAny= make_specialised_class((int, float, str, ANY)) diff --git a/pypy/objspace/std/test/test_specialisedtupleobject.py b/pypy/objspace/std/test/test_specialisedtupleobject.py --- a/pypy/objspace/std/test/test_specialisedtupleobject.py +++ b/pypy/objspace/std/test/test_specialisedtupleobject.py @@ -23,26 +23,31 @@ def test_specialisedtupleclassname(self): w_tuple = self.space.newtuple([self.space.wrap(1), self.space.wrap(2)]) assert w_tuple.__class__.__name__ == 'W_SpecialisedTupleObjectIntInt' + + def test_hash_against_normal_tuple(self): + N_space = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) + S_space = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - def test_hash_against_normal_tuple(self): - normalspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": False}) - specialisedspace = gettestobjspace(**{"objspace.std.withspecialisedtuple": True}) - def hash_test(values): - values_w = [self.space.wrap(value) for value in values] - w_tuple = normalspace.newtuple(values_w) - w_specialisedtuple = specialisedspace.newtuple(values_w) + N_values_w = [N_space.wrap(value) for value in values] + S_values_w = [S_space.wrap(value) for value in values] + N_w_tuple = N_space.newtuple(N_values_w) + S_w_tuple = S_space.newtuple(S_values_w) - assert isinstance(w_specialisedtuple, W_SpecialisedTupleObject) - assert isinstance(w_tuple, W_TupleObject) - assert not normalspace.is_true(normalspace.eq(w_tuple, w_specialisedtuple)) - assert specialisedspace.is_true(specialisedspace.eq(w_tuple, w_specialisedtuple)) - assert specialisedspace.is_true(specialisedspace.eq(normalspace.hash(w_tuple), specialisedspace.hash(w_specialisedtuple))) + assert isinstance(S_w_tuple, W_SpecialisedTupleObject) + assert isinstance(N_w_tuple, W_TupleObject) + assert not N_space.is_true(N_space.eq(N_w_tuple, S_w_tuple)) + assert S_space.is_true(S_space.eq(N_w_tuple, S_w_tuple)) + assert S_space.is_true(S_space.eq(N_space.hash(N_w_tuple), S_space.hash(S_w_tuple))) hash_test([1,2]) hash_test([1.5,2.8]) hash_test([1.0,2.0]) hash_test(['arbitrary','strings']) + hash_test([1,(1,2,3,4)]) + hash_test([1,(1,2)]) + hash_test([1,('a',2)]) + hash_test([1,()]) def test_setitem(self): py.test.skip('skip for now, only needed for cpyext') From noreply at buildbot.pypy.org Thu Nov 10 10:48:05 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 10:48:05 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim: (antocuni, mwp)merge heads, wanted to checkin on default, did it on branch by mistake Message-ID: <20111110094805.1FCE48292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: numpy-multidim Changeset: r49115:f95cf09f56dd Date: 2011-11-10 10:46 +0100 http://bitbucket.org/pypy/pypy/changeset/f95cf09f56dd/ Log: (antocuni, mwp)merge heads, wanted to checkin on default, did it on branch by mistake diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -9,7 +9,7 @@ descr_new_array, scalar_w, NDimArray) from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize - +import re class BogusBytecode(Exception): pass @@ -23,6 +23,12 @@ class WrongFunctionName(Exception): pass +class TokenizerError(Exception): + pass + +class BadToken(Exception): + pass + SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] class FakeSpace(object): @@ -192,7 +198,7 @@ interp.variables[self.name] = self.expr.execute(interp) def __repr__(self): - return "%% = %r" % (self.name, self.expr) + return "%r = %r" % (self.name, self.expr) class ArrayAssignment(Node): def __init__(self, name, index, expr): @@ -214,7 +220,7 @@ class Variable(Node): def __init__(self, name): - self.name = name + self.name = name.strip(" ") def execute(self, interp): return interp.variables[self.name] @@ -332,7 +338,7 @@ class FunctionCall(Node): def __init__(self, name, args): - self.name = name + self.name = name.strip(" ") self.args = args def __repr__(self): @@ -375,118 +381,172 @@ else: raise WrongFunctionName +_REGEXES = [ + ('-?[\d\.]+', 'number'), + ('\[', 'array_left'), + (':', 'colon'), + ('\w+', 'identifier'), + ('\]', 'array_right'), + ('(->)|[\+\-\*\/]', 'operator'), + ('=', 'assign'), + (',', 'coma'), + ('\|', 'pipe'), + ('\(', 'paren_left'), + ('\)', 'paren_right'), +] +REGEXES = [] + +for r, name in _REGEXES: + REGEXES.append((re.compile(r' *(' + r + ')'), name)) +del _REGEXES + +class Token(object): + def __init__(self, name, v): + self.name = name + self.v = v + + def __repr__(self): + return '(%s, %s)' % (self.name, self.v) + +empty = Token('', '') + +class TokenStack(object): + def __init__(self, tokens): + self.tokens = tokens + self.c = 0 + + def pop(self): + token = self.tokens[self.c] + self.c += 1 + return token + + def get(self, i): + if self.c + i >= len(self.tokens): + return empty + return self.tokens[self.c + i] + + def remaining(self): + return len(self.tokens) - self.c + + def push(self): + self.c -= 1 + + def __repr__(self): + return repr(self.tokens[self.c:]) + class Parser(object): - def parse_identifier(self, id): - id = id.strip(" ") - #assert id.isalpha() - return Variable(id) + def tokenize(self, line): + tokens = [] + while True: + for r, name in REGEXES: + m = r.match(line) + if m is not None: + g = m.group(0) + tokens.append(Token(name, g)) + line = line[len(g):] + if not line: + return TokenStack(tokens) + break + else: + raise TokenizerError(line) - def parse_expression(self, expr): - tokens = [i for i in expr.split(" ") if i] - if len(tokens) == 1: - return self.parse_constant_or_identifier(tokens[0]) + def parse_number_or_slice(self, tokens): + start_tok = tokens.pop() + if start_tok.name == 'colon': + start = 0 + else: + if tokens.get(0).name != 'colon': + return FloatConstant(start_tok.v) + start = int(start_tok.v) + tokens.pop() + if not tokens.get(0).name in ['colon', 'number']: + stop = -1 + step = 1 + else: + next = tokens.pop() + if next.name == 'colon': + stop = -1 + step = int(tokens.pop().v) + else: + stop = int(next.v) + if tokens.get(0).name == 'colon': + tokens.pop() + step = int(tokens.pop().v) + else: + step = 1 + return SliceConstant(start, stop, step) + + + def parse_expression(self, tokens): stack = [] - tokens.reverse() - while tokens: + while tokens.remaining(): token = tokens.pop() - if token == ')': - raise NotImplementedError - elif self.is_identifier_or_const(token): - if stack: - name = stack.pop().name - lhs = stack.pop() - rhs = self.parse_constant_or_identifier(token) - stack.append(Operator(lhs, name, rhs)) + if token.name == 'identifier': + if tokens.remaining() and tokens.get(0).name == 'paren_left': + stack.append(self.parse_function_call(token.v, tokens)) else: - stack.append(self.parse_constant_or_identifier(token)) + stack.append(Variable(token.v)) + elif token.name == 'array_left': + stack.append(ArrayConstant(self.parse_array_const(tokens))) + elif token.name == 'operator': + stack.append(Variable(token.v)) + elif token.name == 'number' or token.name == 'colon': + tokens.push() + stack.append(self.parse_number_or_slice(tokens)) + elif token.name == 'pipe': + stack.append(RangeConstant(tokens.pop().v)) + end = tokens.pop() + assert end.name == 'pipe' else: - stack.append(Variable(token)) - assert len(stack) == 1 - return stack[-1] + tokens.push() + break + stack.reverse() + lhs = stack.pop() + while stack: + op = stack.pop() + assert isinstance(op, Variable) + rhs = stack.pop() + lhs = Operator(lhs, op.name, rhs) + return lhs - def parse_constant(self, v): - lgt = len(v)-1 - assert lgt >= 0 - if ':' in v: - # a slice - if v == ':': - return SliceConstant(0, 0, 0) - else: - l = v.split(':') - if len(l) == 2: - one = l[0] - two = l[1] - if not one: - one = 0 - else: - one = int(one) - return SliceConstant(int(l[0]), int(l[1]), 1) - else: - three = int(l[2]) - # all can be empty - if l[0]: - one = int(l[0]) - else: - one = 0 - if l[1]: - two = int(l[1]) - else: - two = -1 - return SliceConstant(one, two, three) - - if v[0] == '[': - return ArrayConstant([self.parse_constant(elem) - for elem in v[1:lgt].split(",")]) - if v[0] == '|': - return RangeConstant(v[1:lgt]) - return FloatConstant(v) - - def is_identifier_or_const(self, v): - c = v[0] - if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or - (c >= '0' and c <= '9') or c in '-.[|:'): - if v == '-' or v == "->": - return False - return True - return False - - def parse_function_call(self, v): - l = v.split('(') - assert len(l) == 2 - name = l[0] - cut = len(l[1]) - 1 - assert cut >= 0 - args = [self.parse_constant_or_identifier(id) - for id in l[1][:cut].split(",")] + def parse_function_call(self, name, tokens): + args = [] + tokens.pop() # lparen + while tokens.get(0).name != 'paren_right': + args.append(self.parse_expression(tokens)) return FunctionCall(name, args) - def parse_constant_or_identifier(self, v): - c = v[0] - if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): - if '(' in v: - return self.parse_function_call(v) - return self.parse_identifier(v) - return self.parse_constant(v) - - def parse_array_subscript(self, v): - v = v.strip(" ") - l = v.split("[") - lgt = len(l[1]) - 1 - assert lgt >= 0 - rhs = self.parse_constant_or_identifier(l[1][:lgt]) - return l[0], rhs + def parse_array_const(self, tokens): + elems = [] + while True: + token = tokens.pop() + if token.name == 'number': + elems.append(FloatConstant(token.v)) + elif token.name == 'array_left': + elems.append(ArrayConstant(self.parse_array_const(tokens))) + else: + raise BadToken() + token = tokens.pop() + if token.name == 'array_right': + return elems + assert token.name == 'coma' - def parse_statement(self, line): - if '=' in line: - lhs, rhs = line.split("=") - lhs = lhs.strip(" ") - if '[' in lhs: - name, index = self.parse_array_subscript(lhs) - return ArrayAssignment(name, index, self.parse_expression(rhs)) - else: - return Assignment(lhs, self.parse_expression(rhs)) - else: - return Execute(self.parse_expression(line)) + def parse_statement(self, tokens): + if (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'assign'): + lhs = tokens.pop().v + tokens.pop() + rhs = self.parse_expression(tokens) + return Assignment(lhs, rhs) + elif (tokens.get(0).name == 'identifier' and + tokens.get(1).name == 'array_left'): + name = tokens.pop().v + tokens.pop() + index = self.parse_expression(tokens) + tokens.pop() + tokens.pop() + return ArrayAssignment(name, index, self.parse_expression(tokens)) + return Execute(self.parse_expression(tokens)) def parse(self, code): statements = [] @@ -495,7 +555,8 @@ line = line.split('#', 1)[0] line = line.strip(" ") if line: - statements.append(self.parse_statement(line)) + tokens = self.tokenize(line) + statements.append(self.parse_statement(tokens)) return Code(statements) def numpy_compile(code): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -6,7 +6,7 @@ from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype from pypy.tool.sourcetools import func_with_new_name - +from pypy.rlib.rstring import StringBuilder numpy_driver = jit.JitDriver(greens = ['signature'], reds = ['result_size', 'i', 'self', 'result']) @@ -68,6 +68,14 @@ dtype.setitem_w(space, arr.storage, i, w_elem) return arr +class ArrayIndex(object): + """ An index into an array or view. Offset is a data offset, indexes + are respective indexes in dimensions + """ + def __init__(self, indexes, offset): + self.indexes = indexes + self.offset = offset + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature", "shape"] @@ -209,25 +217,6 @@ assert isinstance(w_res, BaseArray) return w_res.descr_sum(space) - def _getnums(self, comma): - dtype = self.find_dtype() - if self.find_size() > 1000: - nums = [ - dtype.str_format(self.eval(index)) - for index in range(3) - ] - nums.append("..." + "," * comma) - nums.extend([ - dtype.str_format(self.eval(index)) - for index in range(self.find_size() - 3, self.find_size()) - ]) - else: - nums = [ - dtype.str_format(self.eval(index)) - for index in range(self.find_size()) - ] - return nums - def get_concrete(self): raise NotImplementedError @@ -246,26 +235,35 @@ def descr_repr(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, - # use recursive calls to tostr() to do the work. + # use recursive calls to to_str() to do the work. concrete = self.get_concrete() - res = "array(" - res0 = NDimSlice(concrete, self.signature, [], self.shape).tostr(True, indent=' ') - if res0=="[]" and isinstance(self,NDimSlice): - res0 += ", shape=%s"%(tuple(self.shape),) - res += res0 + res = StringBuilder() + res.append("array(") + myview = NDimSlice(concrete, self.signature, [], self.shape) + res0 = myview.to_str(True, indent=' ') + #This is for numpy compliance: an empty slice reports its shape + if res0 == "[]" and isinstance(self, NDimSlice): + res.append("[], shape=(") + self_shape = str(self.shape) + res.append_slice(str(self_shape), 1, len(self_shape)-1) + res.append(')') + else: + res.append(res0) dtype = concrete.find_dtype() if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and - dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or not self.find_size(): - res += ", dtype=" + dtype.name - res += ")" - return space.wrap(res) + dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ + not self.find_size(): + res.append(", dtype=" + dtype.name) + res.append(")") + return space.wrap(res.build()) def descr_str(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. concrete = self.get_concrete() - return space.wrap(NDimSlice(concrete, self.signature, [], self.shape).tostr(False)) + r = NDimSlice(concrete, self.signature, [], self.shape).to_str(False) + return space.wrap(r) def _index_of_single_item(self, space, w_idx): # we assume C ordering for now @@ -297,9 +295,6 @@ item += v return item - def len_of_shape(self): - return len(self.shape) - def get_root_shape(self): return self.shape @@ -307,7 +302,7 @@ """ The result of getitem/setitem is a single item if w_idx is a list of scalars that match the size of shape """ - shape_len = self.len_of_shape() + shape_len = len(self.shape) if shape_len == 0: if not space.isinstance_w(w_idx, space.w_int): raise OperationError(space.w_IndexError, space.wrap( @@ -409,6 +404,7 @@ return scalar_w(space, dtype, w_obj) def scalar_w(space, dtype, w_obj): + assert isinstance(dtype, interp_dtype.W_Dtype) return Scalar(dtype, dtype.unwrap(space, w_obj)) class Scalar(BaseArray): @@ -586,16 +582,12 @@ class NDimSlice(ViewArray): signature = signature.BaseSignature() - + _immutable_fields_ = ['shape[*]', 'chunks[*]'] def __init__(self, parent, signature, chunks, shape): ViewArray.__init__(self, parent, signature, shape) self.chunks = chunks - self.shape_reduction = 0 - for chunk in chunks: - if chunk[-2] == 0: - self.shape_reduction += 1 def get_root_storage(self): return self.parent.get_concrete().get_root_storage() @@ -624,9 +616,6 @@ def setitem(self, item, value): self.parent.setitem(self.calc_index(item), value) - def len_of_shape(self): - return self.parent.len_of_shape() - self.shape_reduction - def get_root_shape(self): return self.parent.get_root_shape() @@ -636,7 +625,6 @@ @jit.unroll_safe def calc_index(self, item): index = [] - __item = item _item = item for i in range(len(self.shape) -1, 0, -1): s = self.shape[i] @@ -666,46 +654,57 @@ item += index[i] i += 1 return item - def tostr(self, comma,indent=' '): - ret = '' + + def to_str(self, comma, indent=' '): + ret = StringBuilder() dtype = self.find_dtype() - ndims = len(self.shape)#-self.shape_reduction - if any([s==0 for s in self.shape]): - ret += '[]' - return ret - if ndims>2: - ret += '[' + ndims = len(self.shape) + for s in self.shape: + if s == 0: + ret.append('[]') + return ret.build() + if ndims > 2: + ret.append('[') for i in range(self.shape[0]): - ret += NDimSlice(self.parent, self.signature, [(i,0,0,1)], self.shape[1:]).tostr(comma,indent=indent+' ') - if i+11000: - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(3)]) - ret += ','*comma + ' ..., ' - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0]-3,self.shape[0])]) + ret.append('[') + spacer = ',' * comma + ' ' + ret.append(spacer.join(\ + [dtype.str_format(self.eval(i * self.shape[1] + j)) \ + for j in range(self.shape[1])])) + ret.append(']') + if i + 1 < self.shape[0]: + ret.append(',\n' + indent) + ret.append(']') + elif ndims == 1: + ret.append('[') + spacer = ',' * comma + ' ' + if self.shape[0] > 1000: + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(3)])) + ret.append(',' * comma + ' ..., ') + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0] - 3, self.shape[0])])) else: - ret += (','*comma + ' ').join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0])]) - ret += ']' + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0])])) + ret.append(']') else: - ret += dtype.str_format(self.eval(0)) - return ret + ret.append(dtype.str_format(self.eval(0))) + return ret.build() + class NDimArray(BaseArray): + """ A class representing contiguous array. We know that each iteration + by say ufunc will increase the data index by one + """ def __init__(self, size, shape, dtype): BaseArray.__init__(self, shape) self.size = size diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -102,10 +102,11 @@ code = """ a = [1,2,3,4] b = [4,5,6,5] - a + b + c = a + b + c -> 3 """ interp = self.run(code) - assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + assert interp.results[-1].value.val == 9 def test_array_getitem(self): code = """ @@ -176,3 +177,17 @@ """) assert interp.results[0].value.val == 6 + def test_multidim_getitem(self): + interp = self.run(""" + a = [[1,2]] + a -> 0 -> 1 + """) + assert interp.results[0].value.val == 2 + + def test_multidim_getitem_2(self): + interp = self.run(""" + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = a + a + b -> 1 -> 1 + """) + assert interp.results[0].value.val == 8 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -737,6 +737,19 @@ a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == array([[1+1, 2+2], [3+3, 4+4], [5+5, 6+6]])).all() + def test_getitem_add(self): + from numpy import array + a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) + assert (a + a)[1, 1] == 8 + + def test_broadcast(self): + skip("not working") + import numpy + a = numpy.zeros((100, 100)) + b = numpy.ones(100) + a[:,:] = b + assert a[13,15] == 1 + class AppTestSupport(object): def setup_class(cls): import struct diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject, BoolObject) + FloatObject, IntObject, BoolObject, Parser, InterpreterState) from pypy.module.micronumpy.interp_numarray import NDimArray, NDimSlice from pypy.rlib.nonconst import NonConstant from pypy.rpython.annlowlevel import llstr, hlstr @@ -18,12 +18,33 @@ class TestNumpyJIt(LLJitMixin): graph = None interp = None + + def setup_class(cls): + default = """ + a = [1,2,3,4] + c = a + b + sum(c) -> 1::1 + a -> 3:1:2 + """ + + d = {} + p = Parser() + allcodes = [p.parse(default)] + for name, meth in cls.__dict__.iteritems(): + if name.startswith("define_"): + code = meth() + d[name[len("define_"):]] = len(allcodes) + allcodes.append(p.parse(code)) + cls.code_mapping = d + cls.codes = allcodes - def run(self, code): + def run(self, name): space = FakeSpace() + i = self.code_mapping[name] + codes = self.codes - def f(code): - interp = numpy_compile(hlstr(code)) + def f(i): + interp = InterpreterState(codes[i]) interp.run(space) res = interp.results[-1] w_res = res.eval(0).wrap(interp.space) @@ -37,55 +58,66 @@ return -42. if self.graph is None: - interp, graph = self.meta_interp(f, [llstr(code)], + interp, graph = self.meta_interp(f, [i], listops=True, backendopt=True, graph_and_interp_only=True) self.__class__.interp = interp self.__class__.graph = graph - reset_stats() pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() - return self.interp.eval_graph(self.graph, [llstr(code)]) + return self.interp.eval_graph(self.graph, [i]) - def test_add(self): - result = self.run(""" + def define_add(): + return """ a = |30| b = a + a b -> 3 - """) + """ + + def test_add(self): + result = self.run("add") self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == 3 + 3 - def test_floatadd(self): - result = self.run(""" + def define_float_add(): + return """ a = |30| + 3 a -> 3 - """) + """ + + def test_floatadd(self): + result = self.run("float_add") assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_sum(self): - result = self.run(""" + def define_sum(): + return """ a = |30| b = a + a sum(b) - """) + """ + + def test_sum(self): + result = self.run("sum") assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_prod(self): - result = self.run(""" + def define_prod(): + return """ a = |30| b = a + a prod(b) - """) + """ + + def test_prod(self): + result = self.run("prod") expected = 1 for i in range(30): expected *= i * 2 @@ -120,27 +152,33 @@ "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - def test_any(self): - result = self.run(""" + def define_any(): + return """ a = [0,0,0,0,0,0,0,0,0,0,0] a[8] = -12 b = a + a any(b) - """) + """ + + def test_any(self): + result = self.run("any") assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_ne": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, "guard_false": 1}) - def test_already_forced(self): - result = self.run(""" + def define_already_forced(): + return """ a = |30| b = a + 4.5 b -> 5 # forces c = b * 8 c -> 5 - """) + """ + + def test_already_forced(self): + result = self.run("already_forced") assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be @@ -149,21 +187,24 @@ "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - def test_ufunc(self): - result = self.run(""" + def define_ufunc(): + return """ a = |30| b = a + a c = unegative(b) c -> 3 - """) + """ + + def test_ufunc(self): + result = self.run("ufunc") assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - def test_specialization(self): - self.run(""" + def define_specialization(): + return """ a = |30| b = a + a c = unegative(b) @@ -180,22 +221,57 @@ d = a * a unegative(d) d -> 3 - """) + """ + + def test_specialization(self): + self.run("specialization") # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) - def test_slice(self): - result = self.run(""" + def define_slice(): + return """ a = |30| b = a -> ::3 c = b + b c -> 3 - """) + """ + + def test_slice(self): + result = self.run("slice") assert result == 18 self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 3, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + def define_multidim(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = a + a + b -> 1 -> 1 + """ + + def test_multidim(self): + result = self.run('multidim') + assert result == 8 + self.check_loops({'float_add': 1, 'getarrayitem_raw': 2, + 'guard_true': 1, 'int_add': 1, 'int_lt': 1, + 'jump': 1, 'setarrayitem_raw': 1}) + + def define_multidim_slice(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]] + b = a -> ::2 + c = b + b + c -> 1 -> 1 + """ + + def test_multidim_slice(self): + result = self.run('multidim_slice') + assert result == 12 + py.test.skip("improve") + self.check_loops({}) + + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") diff --git a/pypy/rlib/rsre/rpy.py b/pypy/rlib/rsre/rpy.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/rsre/rpy.py @@ -0,0 +1,49 @@ + +from pypy.rlib.rsre import rsre_char +from pypy.rlib.rsre.rsre_core import match + +def get_hacked_sre_compile(my_compile): + """Return a copy of the sre_compile module for which the _sre + module is a custom module that has _sre.compile == my_compile + and CODESIZE == rsre_char.CODESIZE. + """ + import sre_compile, __builtin__, new + sre_hacked = new.module("_sre_hacked") + sre_hacked.compile = my_compile + sre_hacked.MAGIC = sre_compile.MAGIC + sre_hacked.CODESIZE = rsre_char.CODESIZE + sre_hacked.getlower = rsre_char.getlower + def my_import(name, *args): + if name == '_sre': + return sre_hacked + else: + return default_import(name, *args) + src = sre_compile.__file__ + if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): + src = src[:-1] + mod = new.module("sre_compile_hacked") + default_import = __import__ + try: + __builtin__.__import__ = my_import + execfile(src, mod.__dict__) + finally: + __builtin__.__import__ = default_import + return mod + +class GotIt(Exception): + pass +def my_compile(pattern, flags, code, *args): + raise GotIt(code, flags, args) +sre_compile_hacked = get_hacked_sre_compile(my_compile) + +def get_code(regexp, flags=0, allargs=False): + try: + sre_compile_hacked.compile(regexp, flags) + except GotIt, e: + pass + else: + raise ValueError("did not reach _sre.compile()!") + if allargs: + return e.args + else: + return e.args[0] diff --git a/pypy/rlib/rsre/rsre_core.py b/pypy/rlib/rsre/rsre_core.py --- a/pypy/rlib/rsre/rsre_core.py +++ b/pypy/rlib/rsre/rsre_core.py @@ -154,7 +154,6 @@ return (fmarks[groupnum], fmarks[groupnum+1]) def group(self, groupnum=0): - "NOT_RPYTHON" # compatibility frm, to = self.span(groupnum) if 0 <= frm <= to: return self._string[frm:to] diff --git a/pypy/rlib/rsre/test/test_match.py b/pypy/rlib/rsre/test/test_match.py --- a/pypy/rlib/rsre/test/test_match.py +++ b/pypy/rlib/rsre/test/test_match.py @@ -1,54 +1,8 @@ import re -from pypy.rlib.rsre import rsre_core, rsre_char +from pypy.rlib.rsre import rsre_core +from pypy.rlib.rsre.rpy import get_code -def get_hacked_sre_compile(my_compile): - """Return a copy of the sre_compile module for which the _sre - module is a custom module that has _sre.compile == my_compile - and CODESIZE == rsre_char.CODESIZE. - """ - import sre_compile, __builtin__, new - sre_hacked = new.module("_sre_hacked") - sre_hacked.compile = my_compile - sre_hacked.MAGIC = sre_compile.MAGIC - sre_hacked.CODESIZE = rsre_char.CODESIZE - sre_hacked.getlower = rsre_char.getlower - def my_import(name, *args): - if name == '_sre': - return sre_hacked - else: - return default_import(name, *args) - src = sre_compile.__file__ - if src.lower().endswith('.pyc') or src.lower().endswith('.pyo'): - src = src[:-1] - mod = new.module("sre_compile_hacked") - default_import = __import__ - try: - __builtin__.__import__ = my_import - execfile(src, mod.__dict__) - finally: - __builtin__.__import__ = default_import - return mod - -class GotIt(Exception): - pass -def my_compile(pattern, flags, code, *args): - print code - raise GotIt(code, flags, args) -sre_compile_hacked = get_hacked_sre_compile(my_compile) - -def get_code(regexp, flags=0, allargs=False): - try: - sre_compile_hacked.compile(regexp, flags) - except GotIt, e: - pass - else: - raise ValueError("did not reach _sre.compile()!") - if allargs: - return e.args - else: - return e.args[0] - def get_code_and_re(regexp): return get_code(regexp), re.compile(regexp) From noreply at buildbot.pypy.org Thu Nov 10 11:07:20 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 Nov 2011 11:07:20 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: low level support for pointer fields Message-ID: <20111110100720.4DAE08292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49116:fe97346f8494 Date: 2011-11-09 22:44 +0100 http://bitbucket.org/pypy/pypy/changeset/fe97346f8494/ Log: low level support for pointer fields diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -233,6 +233,10 @@ (rffi.LONGDOUBLE, ffi_type_longdouble), ] +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + __type_map = __int_type_map + __float_type_map + [ (lltype.Void, ffi_type_void) ] @@ -242,10 +246,11 @@ TYPE_MAP = dict(__type_map) ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) ffitype_map_float = unrolling_iterable(__float_type_map) ffitype_map = unrolling_iterable(__type_map) -del __int_type_map, __float_type_map, __type_map +del __int_type_map, __float_type_map, __ptr_type_map, __type_map def external(name, args, result, **kwds): diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -420,7 +420,7 @@ Return the field of type ``ffitype`` at ``addr+offset``, widened to lltype.Signed. """ - for TYPE, ffitype2 in clibffi.ffitype_map_int: + for TYPE, ffitype2 in clibffi.ffitype_map_int_or_ptr: if ffitype is ffitype2: value = _struct_getfield(TYPE, addr, offset) return rffi.cast(lltype.Signed, value) @@ -433,7 +433,7 @@ Set the field of type ``ffitype`` at ``addr+offset``. ``value`` is of type lltype.Signed, and it's automatically converted to the right type. """ - for TYPE, ffitype2 in clibffi.ffitype_map_int: + for TYPE, ffitype2 in clibffi.ffitype_map_int_or_ptr: if ffitype is ffitype2: value = rffi.cast(TYPE, value) _struct_setfield(TYPE, addr, offset, value) diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -59,20 +59,26 @@ longsize = 4 if IS_32_BIT else 8 POINT = lltype.Struct('POINT', ('x', rffi.LONG), - ('y', rffi.SHORT) + ('y', rffi.SHORT), + ('z', rffi.VOIDP), ) y_ofs = longsize + z_ofs = longsize*2 p = lltype.malloc(POINT, flavor='raw') p.x = 42 p.y = rffi.cast(rffi.SHORT, -1) + p.z = rffi.cast(rffi.VOIDP, 0x1234) addr = rffi.cast(rffi.VOIDP, p) assert struct_getfield_int(types.slong, addr, 0) == 42 assert struct_getfield_int(types.sshort, addr, y_ofs) == -1 + assert struct_getfield_int(types.pointer, addr, z_ofs) == 0x1234 # struct_setfield_int(types.slong, addr, 0, 43) struct_setfield_int(types.sshort, addr, y_ofs, 0x1234FFFE) # 0x1234 is masked out + struct_setfield_int(types.pointer, addr, z_ofs, 0x4321) assert p.x == 43 assert p.y == -2 + assert rffi.cast(rffi.LONG, p.z) == 0x4321 # lltype.free(p, flavor='raw') From noreply at buildbot.pypy.org Thu Nov 10 11:07:21 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 Nov 2011 11:07:21 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: app level support for pointer fields Message-ID: <20111110100721.7D7BA8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49117:4fee7e624e7e Date: 2011-11-09 22:45 +0100 http://bitbucket.org/pypy/pypy/changeset/4fee7e624e7e/ Log: app level support for pointer fields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -139,11 +139,11 @@ return space.wrap(r_ulonglong(value)) return space.wrap(value) # - if w_ffitype.is_signed() or w_ffitype.is_unsigned(): + if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) - if w_ffitype.is_unsigned(): - return space.wrap(r_uint(value)) - return space.wrap(value) + if w_ffitype.is_signed(): + return space.wrap(value) + return space.wrap(r_uint(value)) # if w_ffitype.is_char(): value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) @@ -171,7 +171,7 @@ libffi.struct_setfield_longlong(w_ffitype.ffitype, self.rawmem, offset, value) return # - if w_ffitype.is_signed() or w_ffitype.is_unsigned(): + if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): value = space.truncatedint_w(w_value) libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) return diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -139,6 +139,7 @@ Field('ulong', types.ulong), Field('char', types.char), Field('unichar', types.unichar), + Field('ptr', types.void_p), ] descr = _StructDescr('foo', fields) struct = descr.allocate() @@ -156,8 +157,9 @@ assert struct.getfield('char') == 'a' struct.setfield('unichar', u'\u1234') assert struct.getfield('unichar') == u'\u1234' - - + struct.setfield('ptr', -1) + assert struct.getfield('ptr') == sys.maxint*2 + 1 + def test_getfield_setfield_longlong(self): import sys from _ffi import _StructDescr, Field, types From noreply at buildbot.pypy.org Thu Nov 10 11:07:22 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 Nov 2011 11:07:22 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: crash with a nicer exception if we don't know how to deal with this type Message-ID: <20111110100722.AC0818292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49118:23eba74d609c Date: 2011-11-09 23:03 +0100 http://bitbucket.org/pypy/pypy/changeset/23eba74d609c/ Log: crash with a nicer exception if we don't know how to deal with this type diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -161,7 +161,7 @@ value = libffi.struct_getfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset) return space.wrap(float(value)) # - assert False, 'unknown type' + raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) @unwrap_spec(name=str) def setfield(self, space, name, w_value): @@ -191,7 +191,7 @@ libffi.struct_setfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset, value) return # - assert False, 'unknown type' + raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) W__StructInstance.typedef = TypeDef( '_StructInstance', diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -2,7 +2,7 @@ from pypy.conftest import gettestobjspace from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI from pypy.module._ffi.interp_struct import compute_size_and_alignement, W_Field -from pypy.module._ffi.interp_ffitype import app_types +from pypy.module._ffi.interp_ffitype import app_types, W_FFIType class TestStruct(object): @@ -53,6 +53,14 @@ lst = [array[i] for i in range(length)] return lst cls.w_read_raw_mem = cls.space.wrap(read_raw_mem) + # + from pypy.rlib import clibffi + from pypy.rlib.rarithmetic import r_uint + from pypy.rpython.lltypesystem import lltype, rffi + dummy_type = lltype.malloc(clibffi.FFI_TYPE_P.TO, flavor='raw') + dummy_type.c_size = r_uint(123) + dummy_type.c_alignment = rffi.cast(rffi.USHORT, 0) + cls.w_dummy_type = W_FFIType('dummy', dummy_type) def test__StructDescr(self): from _ffi import _StructDescr, Field, types @@ -89,6 +97,16 @@ raises(AttributeError, "struct.getfield('missing')") raises(AttributeError, "struct.setfield('missing', 42)") + def test_unknown_type(self): + from _ffi import _StructDescr, Field + fields = [ + Field('x', self.dummy_type), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + raises(TypeError, "struct.getfield('x')") + raises(TypeError, "struct.setfield('x', 42)") + def test_getfield_setfield(self): from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() From noreply at buildbot.pypy.org Thu Nov 10 11:07:23 2011 From: noreply at buildbot.pypy.org (mwp) Date: Thu, 10 Nov 2011 11:07:23 +0100 (CET) Subject: [pypy-commit] pypy default: (mwp antocuni) make platform work on PowerPC Message-ID: <20111110100723.DC2FA8292E@wyvern.cs.uni-duesseldorf.de> Author: Mark Pearse Branch: Changeset: r49119:065a3c82eebf Date: 2011-11-04 13:40 +0100 http://bitbucket.org/pypy/pypy/changeset/065a3c82eebf/ Log: (mwp antocuni) make platform work on PowerPC diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -240,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() From noreply at buildbot.pypy.org Thu Nov 10 11:39:03 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 11:39:03 +0100 (CET) Subject: [pypy-commit] pypy default: A sanity check that most probably breaks right now on Windows Message-ID: <20111110103903.5B4568292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49120:c440cc7e4110 Date: 2011-11-10 11:38 +0100 http://bitbucket.org/pypy/pypy/changeset/c440cc7e4110/ Log: A sanity check that most probably breaks right now on Windows diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -355,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong From noreply at buildbot.pypy.org Thu Nov 10 11:45:47 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 11:45:47 +0100 (CET) Subject: [pypy-commit] pypy default: jit.dont_look_inside the rffi functions called with the "win" calling conv. Message-ID: <20111110104547.EF6428292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49121:f6f7f134190a Date: 2011-11-10 11:45 +0100 http://bitbucket.org/pypy/pypy/changeset/f6f7f134190a/ Log: jit.dont_look_inside the rffi functions called with the "win" calling conv. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -245,8 +245,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): From noreply at buildbot.pypy.org Thu Nov 10 11:54:16 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 11:54:16 +0100 (CET) Subject: [pypy-commit] pypy default: Attempt to fix this test to check with a valid fd. Message-ID: <20111110105416.15BA18292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49122:0f4680451831 Date: 2011-11-10 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/0f4680451831/ Log: Attempt to fix this test to check with a valid fd. diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" From noreply at buildbot.pypy.org Thu Nov 10 11:58:23 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 11:58:23 +0100 (CET) Subject: [pypy-commit] pypy default: setitimer is Unix-only. Message-ID: <20111110105823.EC6608292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49123:fe8481d944cf Date: 2011-11-10 11:57 +0100 http://bitbucket.org/pypy/pypy/changeset/fe8481d944cf/ Log: setitimer is Unix-only. diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal From noreply at buildbot.pypy.org Thu Nov 10 11:58:25 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 11:58:25 +0100 (CET) Subject: [pypy-commit] pypy default: Export 'setitimer' even if running Python 2.5, which does not have Message-ID: <20111110105825.259808292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49124:5c495d43377a Date: 2011-11-10 11:58 +0100 http://bitbucket.org/pypy/pypy/changeset/5c495d43377a/ Log: Export 'setitimer' even if running Python 2.5, which does not have itself 'setitimer'. diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: From noreply at buildbot.pypy.org Thu Nov 10 12:29:53 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 10 Nov 2011 12:29:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove syntax error Message-ID: <20111110112953.7A2948292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49125:36fe6da839a1 Date: 2011-11-10 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/36fe6da839a1/ Log: Remove syntax error diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -533,8 +533,7 @@ if op.has_no_side_effect() and op.result not in regalloc.longevity: regalloc.possibly_free_vars_for_op(op) elif self.can_merge_with_next_guard(op, pos, operations)\ - # XXX fix this later on - and opnum == rop.CALL_RELEASE_GIL: + and opnum == rop.CALL_RELEASE_GIL: # XXX fix regalloc.next_instruction() arglocs = regalloc.operations_with_guard[opnum](regalloc, op, operations[pos+1]) From noreply at buildbot.pypy.org Thu Nov 10 12:29:54 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 10 Nov 2011 12:29:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Replace cmpi with cmpwi and cmpdi Message-ID: <20111110112954.A8FE98292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49126:330c9da8ffaa Date: 2011-11-10 12:29 +0100 http://bitbucket.org/pypy/pypy/changeset/330c9da8ffaa/ Log: Replace cmpi with cmpwi and cmpdi diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -200,7 +200,10 @@ self.mc.mfspr(r.r0.value, 1) # shift and mask to get comparison result self.mc.rlwinm(r.r0.value, r.r0.value, 1, 0, 0) - self.mc.cmpi(r.r0.value, 0) + if IS_PPC_32: + self.mc.cmpwi(r.r0.value, 0) + else: + self.mc.cmpdi(r.r0.value, 0) self._emit_guard(op, arglocs, cond) def emit_guard_no_overflow(self, op, arglocs, regalloc): From noreply at buildbot.pypy.org Thu Nov 10 12:59:06 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 10 Nov 2011 12:59:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: First set SP when saving managed registers Message-ID: <20111110115906.DF6318292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49127:8c0775a2ea17 Date: 2011-11-10 12:58 +0100 http://bitbucket.org/pypy/pypy/changeset/8c0775a2ea17/ Log: First set SP when saving managed registers diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -292,16 +292,18 @@ def _gen_exit_path(self): mc = PPCBuilder() # - self._save_managed_regs(mc) - # adjust SP (r1) + # compute offset to new SP size = WORD * (len(r.MANAGED_REGS)) + BACKCHAIN_SIZE - # XXX do quadword alignment - #while size % (4 * WORD) != 0: - # size += WORD + # set SP if IS_PPC_32: mc.stwu(r.SP.value, r.SP.value, -size) else: mc.stdu(r.SP.value, r.SP.value, -size) + self._save_managed_regs(mc) + # adjust SP (r1) + # XXX do quadword alignment + #while size % (4 * WORD) != 0: + # size += WORD # decode_func_addr = llhelper(self.recovery_func_sign, self.failure_recovery_func) @@ -352,12 +354,12 @@ # Save all registers which are managed by the register # allocator on top of the stack before decoding. def _save_managed_regs(self, mc): - for i in range(len(r.MANAGED_REGS) - 1, -1, -1): + for i in range(len(r.MANAGED_REGS)): reg = r.MANAGED_REGS[i] if IS_PPC_32: - mc.stw(reg.value, r.SP.value, -(len(r.MANAGED_REGS) - i) * WORD) + mc.stw(reg.value, r.SP.value, i * WORD + BACKCHAIN_SIZE) else: - mc.std(reg.value, r.SP.value, -(len(r.MANAGED_REGS) - i) * WORD) + mc.std(reg.value, r.SP.value, i * WORD + BACKCHAIN_SIZE) def gen_bootstrap_code(self, nonfloatlocs, inputargs): for i in range(len(nonfloatlocs)): From noreply at buildbot.pypy.org Thu Nov 10 13:06:57 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 13:06:57 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: typo correction Message-ID: <20111110120657.985628292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49128:aeccba4a7567 Date: 2011-11-10 01:15 +0100 http://bitbucket.org/pypy/pypy/changeset/aeccba4a7567/ Log: typo correction diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -144,7 +144,7 @@ assert not isinstance(r, r_longlong), "ovfcheck not supported on r_longlong" assert not isinstance(r, r_ulonglong), "ovfcheck not supported on r_ulonglong" if type(r) is long and not is_valid_int(r): - # the type check is needed to make this chek skip symbolics. + # the type check is needed to make ovfcheck skip symbolics. # this happens in the garbage collector. raise OverflowError, "signed integer expression did overflow" return r From noreply at buildbot.pypy.org Thu Nov 10 13:06:59 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 13:06:59 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: Merge with default Message-ID: <20111110120659.260DB8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49129:27520540161f Date: 2011-11-10 01:37 +0100 http://bitbucket.org/pypy/pypy/changeset/27520540161f/ Log: Merge with default diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -216,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,32 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +152,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +186,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +203,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode From noreply at buildbot.pypy.org Thu Nov 10 13:07:00 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 13:07:00 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: fixed a lot of GC and JIT/assembler word sizes (argh, testing now ; -) Message-ID: <20111110120700.65FA88292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49130:46768438b789 Date: 2011-11-10 13:06 +0100 http://bitbucket.org/pypy/pypy/changeset/46768438b789/ Log: fixed a lot of GC and JIT/assembler word sizes (argh, testing now ;-) diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -101,7 +101,7 @@ gcrootmap.put(retaddr, shapeaddr) assert gcrootmap._gcmap[0] == retaddr assert gcrootmap._gcmap[1] == shapeaddr - p = rffi.cast(rffi.LONGP, gcrootmap.gcmapstart()) + p = rffi.cast(rffi.SIGNEDP, gcrootmap.gcmapstart()) assert p[0] == retaddr assert (gcrootmap.gcmapend() == gcrootmap.gcmapstart() + rffi.sizeof(lltype.Signed) * 2) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -403,7 +403,7 @@ after() _NOARG_FUNC = lltype.Ptr(lltype.FuncType([], lltype.Void)) - _CLOSESTACK_FUNC = lltype.Ptr(lltype.FuncType([rffi.LONGP], + _CLOSESTACK_FUNC = lltype.Ptr(lltype.FuncType([rffi.SIGNEDP], lltype.Void)) def _build_release_gil(self, gcrootmap): @@ -1974,10 +1974,10 @@ kind = code & 3 code = (code - self.CODE_FROMSTACK) >> 2 stackloc = frame_addr + get_ebp_ofs(code) - value = rffi.cast(rffi.LONGP, stackloc)[0] + value = rffi.cast(rffi.SIGNEDP, stackloc)[0] if kind == self.DESCR_FLOAT and WORD == 4: value_hi = value - value = rffi.cast(rffi.LONGP, stackloc - 4)[0] + value = rffi.cast(rffi.SIGNEDP, stackloc - 4)[0] else: # 'code' identifies a register: load its value kind = code & 3 @@ -2005,10 +2005,10 @@ elif kind == self.DESCR_FLOAT: tgt = self.fail_boxes_float.get_addr_for_num(num) if WORD == 4: - rffi.cast(rffi.LONGP, tgt)[1] = value_hi + rffi.cast(rffi.SIGNEDP, tgt)[1] = value_hi else: assert 0, "bogus kind" - rffi.cast(rffi.LONGP, tgt)[0] = value + rffi.cast(rffi.SIGNEDP, tgt)[0] = value num += 1 # if not we_are_translated(): @@ -2034,7 +2034,7 @@ self.failure_recovery_func = failure_recovery_func self.failure_recovery_code = [0, 0, 0, 0] - _FAILURE_RECOVERY_FUNC = lltype.Ptr(lltype.FuncType([rffi.LONGP], + _FAILURE_RECOVERY_FUNC = lltype.Ptr(lltype.FuncType([rffi.SIGNEDP], lltype.Signed)) def _build_failure_recovery(self, exc, withfloats=False): diff --git a/pypy/jit/backend/x86/codebuf.py b/pypy/jit/backend/x86/codebuf.py --- a/pypy/jit/backend/x86/codebuf.py +++ b/pypy/jit/backend/x86/codebuf.py @@ -48,7 +48,7 @@ if self.relocations is not None: for reloc in self.relocations: p = addr + reloc - adr = rffi.cast(rffi.LONGP, p - WORD) + adr = rffi.cast(rffi.SIGNEDP, p - WORD) adr[0] = intmask(adr[0] - p) valgrind.discard_translations(addr, self.get_relative_pos()) self._dump(addr, "jit-backend-dump", backend_name) diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -150,7 +150,7 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) - all_null_registers = lltype.malloc(rffi.LONGP.TO, 24, + all_null_registers = lltype.malloc(rffi.SIGNEDP.TO, 24, flavor='raw', zero=True, immortal=True) diff --git a/pypy/jit/backend/x86/test/test_assembler.py b/pypy/jit/backend/x86/test/test_assembler.py --- a/pypy/jit/backend/x86/test/test_assembler.py +++ b/pypy/jit/backend/x86/test/test_assembler.py @@ -101,7 +101,7 @@ assert withfloats value = random.random() - 0.5 # make sure it fits into 64 bits - tmp = lltype.malloc(rffi.LONGP.TO, 2, flavor='raw', + tmp = lltype.malloc(rffi.SIGNEDP.TO, 2, flavor='raw', track_allocation=False) rffi.cast(rffi.DOUBLEP, tmp)[0] = value return rffi.cast(rffi.DOUBLEP, tmp)[0], tmp[0], tmp[1] @@ -139,11 +139,11 @@ # prepare the expected target arrays, the descr_bytecode, # the 'registers' and the 'stack' arrays according to 'content' - xmmregisters = lltype.malloc(rffi.LONGP.TO, 16+ACTUAL_CPU.NUM_REGS+1, + xmmregisters = lltype.malloc(rffi.SIGNEDP.TO, 16+ACTUAL_CPU.NUM_REGS+1, flavor='raw', immortal=True) registers = rffi.ptradd(xmmregisters, 16) stacklen = baseloc + 30 - stack = lltype.malloc(rffi.LONGP.TO, stacklen, flavor='raw', + stack = lltype.malloc(rffi.SIGNEDP.TO, stacklen, flavor='raw', immortal=True) expected_ints = [0] * len(content) expected_ptrs = [lltype.nullptr(llmemory.GCREF.TO)] * len(content) diff --git a/pypy/rpython/memory/gctransform/asmgcroot.py b/pypy/rpython/memory/gctransform/asmgcroot.py --- a/pypy/rpython/memory/gctransform/asmgcroot.py +++ b/pypy/rpython/memory/gctransform/asmgcroot.py @@ -536,7 +536,7 @@ while start < end: code = rffi.cast(rffi.CCHARP, start.address[0])[0] if code == '\xe9': # jmp - rel32 = rffi.cast(rffi.LONGP, start.address[0]+1)[0] + rel32 = rffi.cast(rffi.SIGNEDP, start.address[0]+1)[0] target = start.address[0] + (rel32 + 5) start.address[0] = target start += arrayitemsize diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -737,7 +737,7 @@ def f(): from pypy.rpython.lltypesystem import rffi alist = [A() for i in range(50)] - idarray = lltype.malloc(rffi.LONGP.TO, len(alist), flavor='raw') + idarray = lltype.malloc(rffi.SIGNEDP.TO, len(alist), flavor='raw') # Compute the id of all the elements of the list. The goal is # to not allocate memory, so that if the GC needs memory to # remember the ids, it will trigger some collections itself diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -675,8 +675,8 @@ gc.collect() p_a1 = rffi.cast(rffi.VOIDPP, ll_args[0])[0] p_a2 = rffi.cast(rffi.VOIDPP, ll_args[1])[0] - a1 = rffi.cast(rffi.LONGP, p_a1)[0] - a2 = rffi.cast(rffi.LONGP, p_a2)[0] + a1 = rffi.cast(rffi.SIGNEDP, p_a1)[0] + a2 = rffi.cast(rffi.SIGNEDP, p_a2)[0] res = rffi.cast(rffi.INTP, ll_res) if a1 > a2: res[0] = rffi.cast(rffi.INT, 1) @@ -1202,7 +1202,7 @@ def f(): from pypy.rpython.lltypesystem import lltype, rffi alist = [A() for i in range(50000)] - idarray = lltype.malloc(rffi.LONGP.TO, len(alist), flavor='raw') + idarray = lltype.malloc(rffi.SIGNEDP.TO, len(alist), flavor='raw') # Compute the id of all elements of the list. The goal is # to not allocate memory, so that if the GC needs memory to # remember the ids, it will trigger some collections itself From noreply at buildbot.pypy.org Thu Nov 10 13:11:26 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 13:11:26 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: merge Message-ID: <20111110121126.8533B8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49131:60550d8ee39f Date: 2011-11-10 13:10 +0100 http://bitbucket.org/pypy/pypy/changeset/60550d8ee39f/ Log: merge diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -355,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -246,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -240,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() From noreply at buildbot.pypy.org Thu Nov 10 13:42:44 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 13:42:44 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: This fix looks wrong. The JMP target is still only 4 bytes even Message-ID: <20111110124244.D10E48292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: win64_gborg Changeset: r49132:697e191ea0e0 Date: 2011-11-10 13:42 +0100 http://bitbucket.org/pypy/pypy/changeset/697e191ea0e0/ Log: This fix looks wrong. The JMP target is still only 4 bytes even in AMD64 assembler. diff --git a/pypy/rpython/memory/gctransform/asmgcroot.py b/pypy/rpython/memory/gctransform/asmgcroot.py --- a/pypy/rpython/memory/gctransform/asmgcroot.py +++ b/pypy/rpython/memory/gctransform/asmgcroot.py @@ -533,6 +533,7 @@ # The initial gcmap table contains addresses to a JMP # instruction that jumps indirectly to the real code. # Replace them with the target addresses. + assert rffi.SIGNEDP is rffi.LONGP, "win64 support missing" while start < end: code = rffi.cast(rffi.CCHARP, start.address[0])[0] if code == '\xe9': # jmp From noreply at buildbot.pypy.org Thu Nov 10 13:44:38 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 10 Nov 2011 13:44:38 +0100 (CET) Subject: [pypy-commit] pypy default: use consistent name Message-ID: <20111110124438.DCE008292E@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r49133:52844ea3aa84 Date: 2011-11-04 16:40 +0100 http://bitbucket.org/pypy/pypy/changeset/52844ea3aa84/ Log: use consistent name diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py From noreply at buildbot.pypy.org Thu Nov 10 13:44:40 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 10 Nov 2011 13:44:40 +0100 (CET) Subject: [pypy-commit] pypy default: remove unused helper function Message-ID: <20111110124440.17EAF8292E@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r49134:af7955a87f1b Date: 2011-11-10 13:43 +0100 http://bitbucket.org/pypy/pypy/changeset/af7955a87f1b/ Log: remove unused helper function diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] From noreply at buildbot.pypy.org Thu Nov 10 13:44:42 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 10 Nov 2011 13:44:42 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111110124442.8DE5D8292E@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r49135:3b7fdd2b26ba Date: 2011-11-10 13:44 +0100 http://bitbucket.org/pypy/pypy/changeset/3b7fdd2b26ba/ Log: merge diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -351,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -445,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -247,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -261,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -328,13 +330,13 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -342,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -364,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -498,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -681,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -4123,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4883,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4894,10 +4982,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) @@ -4914,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -6281,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6296,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,46 +107,33 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): """A string built with newstr(const).""" _lengthbox = None # cache only - # Warning: an issue with VStringPlainValue is that sometimes it is - # initialized unpredictably by some copystrcontent. When this occurs - # we set self._chars to None. Be careful to check for is_valid(). - - def is_valid(self): - return self._chars is not None - - def _invalidate(self): - assert self.is_valid() - if self._lengthbox is None: - self._lengthbox = ConstInt(len(self._chars)) - self._chars = None - - def _really_force(self, optforce): - VAbstractStringValue._really_force(self, optforce) - assert self.box is not None - if self.is_valid(): - for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO: - # the string has uninitialized null bytes in it, so - # assume that it is forced for being further mutated - # (e.g. by copystrcontent). So it becomes invalid - # as a VStringPlainValue: the _chars must not be used - # any longer. - self._invalidate() - break - def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -153,44 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): - if not self.is_valid(): - return None for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_valid(): + if not self.is_virtual() and not self.is_completely_initialized(): return VAbstractStringValue.string_copy_parts( self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - assert self.is_valid() - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -198,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -405,8 +416,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) assert not value.is_constant() # strsetitem(ConstPtr) never makes sense - if (value.is_virtual() and isinstance(value, VStringPlainValue) - and value.is_valid()): + if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: value.setitem(indexbox.getint(), self.getvalue(op.getarg(2))) @@ -437,10 +447,22 @@ value = value.vstr vindex = self.getvalue(fullindexbox) # - if (isinstance(value, VStringPlainValue) # even if no longer virtual - and value.is_valid()): # but make sure it is valid + if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - return value.getitem(vindex.box.getint()) + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -508,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -522,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -538,12 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstr.is_valid() - and vstart.is_constant() and vstop.is_constant()): - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -312,11 +312,10 @@ class W_XRange(Wrappable): - def __init__(self, space, start, stop, step): + def __init__(self, space, start, len, step): self.space = space self.start = start - self.stop = stop - self.len = get_len_of_range(space, start, stop, step) + self.len = len self.step = step def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): @@ -326,8 +325,9 @@ start, stop = 0, start else: stop = _toint(space, w_stop) + howmany = get_len_of_range(space, start, stop, step) obj = space.allocate_instance(W_XRange, w_subtype) - W_XRange.__init__(obj, space, start, stop, step) + W_XRange.__init__(obj, space, start, howmany, step) return space.wrap(obj) def descr_repr(self): @@ -357,12 +357,12 @@ def descr_iter(self): return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.stop, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step return self.space.wrap(W_XRangeIterator(self.space, lastitem, - self.start, -self.step, True)) + self.len, -self.step)) def descr_reduce(self): space = self.space @@ -389,29 +389,25 @@ ) class W_XRangeIterator(Wrappable): - def __init__(self, space, start, stop, step, inclusive=False): + def __init__(self, space, current, remaining, step): self.space = space - self.current = start - self.stop = stop + self.current = current + self.remaining = remaining self.step = step - self.inclusive = inclusive def descr_iter(self): return self.space.wrap(self) def descr_next(self): - if self.inclusive: - if not ((self.step > 0 and self.current <= self.stop) or (self.step < 0 and self.current >= self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - else: - if not ((self.step > 0 and self.current < self.stop) or (self.step < 0 and self.current > self.stop)): - raise OperationError(self.space.w_StopIteration, self.space.w_None) - item = self.current - self.current = item + self.step - return self.space.wrap(item) + if self.remaining > 0: + item = self.current + self.current = item + self.step + self.remaining -= 1 + return self.space.wrap(item) + raise OperationError(self.space.w_StopIteration, self.space.w_None) - #def descr_len(self): - # return self.space.wrap(self.remaining) + def descr_len(self): + return self.space.wrap(self.remaining) def descr_reduce(self): from pypy.interpreter.mixedmodule import MixedModule @@ -422,7 +418,7 @@ w = space.wrap nt = space.newtuple - tup = [w(self.current), w(self.stop), w(self.step)] + tup = [w(self.current), w(self.remaining), w(self.step)] return nt([new_inst, nt(tup)]) W_XRangeIterator.typedef = TypeDef("rangeiterator", diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -157,8 +157,7 @@ raises(OverflowError, xrange, a) raises(OverflowError, xrange, 0, a) raises(OverflowError, xrange, 0, 1, a) - assert list(reversed(xrange(-sys.maxint-1, -sys.maxint-1, -2))) == [] - + def test_xrange_reduce(self): x = xrange(2, 9, 3) callable, args = x.__reduce__() diff --git a/pypy/module/_pickle_support/maker.py b/pypy/module/_pickle_support/maker.py --- a/pypy/module/_pickle_support/maker.py +++ b/pypy/module/_pickle_support/maker.py @@ -66,10 +66,10 @@ new_generator.running = running return space.wrap(new_generator) - at unwrap_spec(current=int, stop=int, step=int) -def xrangeiter_new(space, current, stop, step): + at unwrap_spec(current=int, remaining=int, step=int) +def xrangeiter_new(space, current, remaining, step): from pypy.module.__builtin__.functional import W_XRangeIterator - new_iter = W_XRangeIterator(space, current, stop, step) + new_iter = W_XRangeIterator(space, current, remaining, step) return space.wrap(new_iter) @unwrap_spec(identifier=str) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -245,8 +245,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough @@ -238,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Thu Nov 10 13:49:16 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:16 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: First basic implementation of strategies for SetObjects Message-ID: <20111110124916.7491B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49136:46455b9b0a9d Date: 2011-04-30 11:35 +0200 http://bitbucket.org/pypy/pypy/changeset/46455b9b0a9d/ Log: First basic implementation of strategies for SetObjects diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -8,6 +8,19 @@ from pypy.interpreter.function import Defaults from pypy.objspace.std.settype import set_typedef as settypedef from pypy.objspace.std.frozensettype import frozenset_typedef as frozensettypedef +from pypy.rlib import rerased +from pypy.rlib.objectmodel import instantiate + +def get_strategy_from_setdata(space, setdata): + from pypy.objspace.std.intobject import W_IntObject + + keys_w = setdata.keys() + for item_w in setdata.keys(): + if type(item_w) is not W_IntObject: + break; + if item_w is keys_w[-1]: + return space.fromcache(IntegerSetStrategy) + return space.fromcache(ObjectSetStrategy) class W_BaseSetObject(W_Object): typedef = None @@ -21,18 +34,19 @@ return True return False - def __init__(w_self, space, setdata): """Initialize the set by taking ownership of 'setdata'.""" assert setdata is not None - w_self.setdata = setdata + w_self.strategy = get_strategy_from_setdata(space, setdata) + w_self.strategy.init_from_setdata(w_self, setdata) def __repr__(w_self): """representation for debugging purposes""" - reprlist = [repr(w_item) for w_item in w_self.setdata.keys()] + reprlist = [repr(w_item) for w_item in w_self.getkeys()] return "<%s(%s)>" % (w_self.__class__.__name__, ', '.join(reprlist)) def _newobj(w_self, space, rdict_w=None): + print "_newobj" """Make a new set or frozenset by taking ownership of 'rdict_w'.""" #return space.call(space.type(w_self),W_SetIterObject(rdict_w)) objtype = type(w_self) @@ -51,6 +65,38 @@ def setweakref(self, space, weakreflifeline): self._lifeline_ = weakreflifeline + # _____________ strategy methods ________________ + + def clear(self): + self.strategy.clear(self) + + def copy(self): + return self.strategy.copy(self) + + def length(self): + return self.strategy.length(self) + + def add(self, w_key): + self.strategy.add(self, w_key) + + def getkeys(self): + return self.strategy.getkeys(self) + + def intersect(self, w_other): + return self.strategy.intersect(self, w_other) + + def intersect_multiple(self, others_w): + return self.strategy.intersect_multiple(self, others_w) + + def update(self, w_other): + self.strategy.update(self, w_other) + + def has_key(self, w_key): + return self.strategy.has_key(self, w_key) + + def equals(self, w_other): + return self.strategy.equals(self, w_other) + class W_SetObject(W_BaseSetObject): from pypy.objspace.std.settype import set_typedef as typedef @@ -62,6 +108,151 @@ registerimplementation(W_SetObject) registerimplementation(W_FrozensetObject) +class SetStrategy(object): + def __init__(self, space): + self.space = space + + def init_from_setdata(self, w_set, setdata): + raise NotImplementedError + + def init_from_w_iterable(self, w_set, setdata): + raise NotImplementedError + + def length(self, w_set): + raise NotImplementedError + +class AbstractUnwrappedSetStrategy(object): + __mixin__ = True + + def init_from_setdata(self, w_set, setdata): + #XXX this copies again (see: make_setdata_from_w_iterable) + #XXX cannot store int into r_dict + d = newset(self.space) + for item_w in setdata.keys(): + d[self.unwrap(item_w)] = None + w_set.sstorage = self.cast_to_void_star(d) + + def ____init_from_w_iterable(self, w_set, w_iterable=None): + keys = self.make_setdata_from_w_iterable(w_iterable) + w_set.sstorage = self.cast_to_void_star(keys) + + def make_setdata_from_w_iterable(self, w_iterable): + """Return a new r_dict with the content of w_iterable.""" + if isinstance(w_iterable, W_BaseSetObject): + return self.cast_from_void_star(w_set.sstorage).copy() + data = newset(self.space) + if w_iterable is not None: + for w_item in self.space.listview(w_iterable): + data[self.unwrap(w_item)] = None + return data + + def length(self, w_set): + return len(self.cast_from_void_star(w_set.sstorage)) + + def clear(self, w_set): + self.cast_from_void_star(w_set.sstorage).clear() + + def copy(self, w_set): + print w_set + d = self.cast_from_void_star(w_set.sstorage).copy() + print d + #XXX make it faster by using from_storage_and_strategy + clone = instantiate(type(w_set)) + print clone + clone.strategy = w_set.strategy + return clone + + def add(self, w_set, w_key): + print "hehe" + print w_set + print w_key + d = self.cast_from_void_star(w_set.sstorage) + d[self.unwrap(w_key)] = None + + def getkeys(self, w_set): + keys = self.cast_from_void_star(w_set.sstorage).keys() + keys_w = [self.wrap(key) for key in keys] + return keys_w + + def has_key(self, w_set, w_key): + items_w = self.cast_from_void_star(w_set.sstorage) + return w_key in items_w + + def equals(self, w_set, w_other): + if w_set.length() != w_other.length(): + return False + items = self.cast_from_void_star(w_set.sstorage).keys() + for key in items: + if not w_other.has_key(self.wrap(key)): + return False + return True + + def intersect(self, w_set, w_other): + if w_set.length() > w_other.length(): + return w_other.intersect(w_set) + + result = w_set._newobj(self.space, newset(self.space)) + items = self.cast_from_void_star(w_set.sstorage).keys() + #XXX do it without wrapping when strategies are equal + for key in items: + w_key = self.wrap(key) + if w_other.has_key(w_key): + result.add(w_key) + return result + + def intersect_multiple(self, w_set, others_w): + result = w_set + for w_other in others_w: + if isinstance(w_other, W_BaseSetObject): + # optimization only + result = w_set.intersect(w_other) + else: + result2 = w_set._newobj(self.space, newset(self.space)) + for w_key in self.space.listview(w_other): + if result.has_key(w_key): + result2.add(w_key) + result = result2 + return result + + def update(self, w_set, w_other): + d = self.cast_from_void_star(w_set.sstorage) + if w_set.strategy is self.space.fromcache(ObjectSetStrategy): + other_w = w_other.getkeys() + #XXX better solution!? + for w_key in other_w: + d[w_key] = None + return + + elif w_set.strategy is w_other.strategy: + other = self.cast_to_void_star(w_other.sstorage) + d.update(other) + return + + w_set.switch_to_object_strategy() + w_set.update(w_other) + +class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): + cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("integer") + cast_to_void_star = staticmethod(cast_to_void_star) + cast_from_void_star = staticmethod(cast_from_void_star) + + def unwrap(self, w_item): + return self.space.unwrap(w_item) + + def wrap(self, item): + return self.space.wrap(item) + +class ObjectSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): + cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("object") + cast_to_void_star = staticmethod(cast_to_void_star) + cast_from_void_star = staticmethod(cast_from_void_star) + + def unwrap(self, w_item): + return w_item + + def wrap(self, item): + return item + class W_SetIterObject(W_Object): from pypy.objspace.std.settype import setiter_typedef as typedef @@ -121,9 +312,12 @@ return data def _initialize_set(space, w_obj, w_iterable=None): - w_obj.setdata.clear() + w_obj.clear() if w_iterable is not None: - w_obj.setdata = make_setdata_from_w_iterable(space, w_iterable) + setdata = make_setdata_from_w_iterable(space, w_iterable) + #XXX maybe this is not neccessary + w_obj.strategy = get_strategy_from_setdata(space, setdata) + w_obj.strategy.init_from_setdata(w_obj, setdata) def _convert_set_to_frozenset(space, w_obj): if space.is_true(space.isinstance(w_obj, space.w_set)): @@ -134,14 +328,6 @@ # helper functions for set operation on dicts -def _is_eq(ld, rd): - if len(ld) != len(rd): - return False - for w_key in ld: - if w_key not in rd: - return False - return True - def _difference_dict(space, ld, rd): result = newset(space) for w_key in ld: @@ -159,15 +345,6 @@ except KeyError: pass -def _intersection_dict(space, ld, rd): - result = newset(space) - if len(ld) > len(rd): - ld, rd = rd, ld # loop over the smaller dict - for w_key in ld: - if w_key in rd: - result[w_key] = None - return result - def _isdisjoint_dict(ld, rd): if len(ld) > len(rd): ld, rd = rd, ld # loop over the smaller dict @@ -220,7 +397,7 @@ This has no effect if the element is already present. """ - w_left.setdata[w_other] = None + w_left.add(w_other) def set_copy__Set(space, w_set): return w_set._newobj(space, w_set.setdata.copy()) @@ -280,13 +457,14 @@ def eq__Set_Set(space, w_left, w_other): # optimization only (the general case is eq__Set_settypedef) - return space.wrap(_is_eq(w_left.setdata, w_other.setdata)) + return space.wrap(w_left.equals(w_other)) eq__Set_Frozenset = eq__Set_Set eq__Frozenset_Frozenset = eq__Set_Set eq__Frozenset_Set = eq__Set_Set def eq__Set_settypedef(space, w_left, w_other): + #XXX what is faster: wrapping w_left or creating set from w_other rd = make_setdata_from_w_iterable(space, w_other) return space.wrap(_is_eq(w_left.setdata, rd)) @@ -471,8 +649,12 @@ return w_key def and__Set_Set(space, w_left, w_other): + new_set = w_left.intersect(w_other) + return new_set ld, rd = w_left.setdata, w_other.setdata new_ld = _intersection_dict(space, ld, rd) + #XXX when both have same strategy, ini new set from storage + # therefore this must be moved to strategies return w_left._newobj(space, new_ld) and__Set_Frozenset = and__Set_Set @@ -480,6 +662,8 @@ and__Frozenset_Frozenset = and__Set_Set def _intersection_multiple(space, w_left, others_w): + return w_left.intersect_multiple(others_w) + result = w_left.setdata for w_other in others_w: if isinstance(w_other, W_BaseSetObject): @@ -495,10 +679,9 @@ def set_intersection__Set(space, w_left, others_w): if len(others_w) == 0: - result = w_left.setdata.copy() + return w_left.setdata.copy() else: - result = _intersection_multiple(space, w_left, others_w) - return w_left._newobj(space, result) + return _intersection_multiple(space, w_left, others_w) frozenset_intersection__Frozenset = set_intersection__Set @@ -579,24 +762,26 @@ inplace_xor__Set_Frozenset = inplace_xor__Set_Set def or__Set_Set(space, w_left, w_other): - ld, rd = w_left.setdata, w_other.setdata - result = ld.copy() - result.update(rd) - return w_left._newobj(space, result) + w_copy = w_left.copy() + w_copy.update(w_other) + return w_copy or__Set_Frozenset = or__Set_Set or__Frozenset_Set = or__Set_Set or__Frozenset_Frozenset = or__Set_Set def set_union__Set(space, w_left, others_w): - result = w_left.setdata.copy() + print "hallo", w_left + result = w_left.copy() + print result for w_other in others_w: if isinstance(w_other, W_BaseSetObject): - result.update(w_other.setdata) # optimization only + result.update(w_other) # optimization only else: for w_key in space.listview(w_other): - result[w_key] = None - return w_left._newobj(space, result) + print result + result.add(w_key) + return result frozenset_union__Frozenset = set_union__Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -51,6 +51,13 @@ assert self.space.eq_w(s,u) class AppTestAppSetTest: + def test_simple(self): + a = set([1,2,3]) + b = set() + b.add(4) + a.union(b) + assert a == set([1,2,3,4]) + def test_subtype(self): class subset(set):pass a = subset() From noreply at buildbot.pypy.org Thu Nov 10 13:49:17 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:17 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: All tests for setobject are working (but there is still untested code) Message-ID: <20111110124917.A68268292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49137:34fd0e9fa474 Date: 2011-05-01 16:19 +0200 http://bitbucket.org/pypy/pypy/changeset/34fd0e9fa474/ Log: All tests for setobject are working (but there is still untested code) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -11,15 +11,25 @@ from pypy.rlib import rerased from pypy.rlib.objectmodel import instantiate -def get_strategy_from_setdata(space, setdata): +def get_strategy_from_w_iterable(space, w_iterable=None): from pypy.objspace.std.intobject import W_IntObject + #XXX what types for w_iterable are possible - keys_w = setdata.keys() - for item_w in setdata.keys(): + if isinstance(w_iterable, W_BaseSetObject): + return w_iterable.strategy + + if w_iterable is None: + #XXX becomes EmptySetStrategy later + return space.fromcache(ObjectSetStrategy) + + if not isinstance(w_iterable, list): + w_iterable = space.listview(w_iterable) + for item_w in w_iterable: if type(item_w) is not W_IntObject: break; - if item_w is keys_w[-1]: + if item_w is w_iterable[-1]: return space.fromcache(IntegerSetStrategy) + return space.fromcache(ObjectSetStrategy) class W_BaseSetObject(W_Object): @@ -37,8 +47,9 @@ def __init__(w_self, space, setdata): """Initialize the set by taking ownership of 'setdata'.""" assert setdata is not None - w_self.strategy = get_strategy_from_setdata(space, setdata) - w_self.strategy.init_from_setdata(w_self, setdata) + w_self.space = space #XXX less memory without this indirection? + w_self.strategy = get_strategy_from_w_iterable(space, setdata.keys()) + w_self.strategy.init_from_setdata_w(w_self, setdata) def __repr__(w_self): """representation for debugging purposes""" @@ -46,7 +57,6 @@ return "<%s(%s)>" % (w_self.__class__.__name__, ', '.join(reprlist)) def _newobj(w_self, space, rdict_w=None): - print "_newobj" """Make a new set or frozenset by taking ownership of 'rdict_w'.""" #return space.call(space.type(w_self),W_SetIterObject(rdict_w)) objtype = type(w_self) @@ -62,9 +72,15 @@ _lifeline_ = None def getweakref(self): return self._lifeline_ + def setweakref(self, space, weakreflifeline): self._lifeline_ = weakreflifeline + def switch_to_object_strategy(self, space): + d = self.strategy.getdict_w(self) + self.strategy = space.fromcache(ObjectSetStrategy) + self.sstorage = self.strategy.cast_to_void_star(d) + # _____________ strategy methods ________________ def clear(self): @@ -79,15 +95,39 @@ def add(self, w_key): self.strategy.add(self, w_key) + def discard(self, w_item): + return self.strategy.discard(self, w_item) + + def delitem(self, w_item): + return self.strategy.delitem(self, w_item) + + def getdict_w(self): + return self.strategy.getdict_w(self) + def getkeys(self): return self.strategy.getkeys(self) + def difference(self, w_other): + return self.strategy.difference(self, w_other) + + def difference_update(self, w_other): + return self.strategy.difference_update(self, w_other) + def intersect(self, w_other): return self.strategy.intersect(self, w_other) def intersect_multiple(self, others_w): return self.strategy.intersect_multiple(self, others_w) + def intersect_multiple_update(self, others_w): + self.strategy.intersect_multiple_update(self, others_w) + + def issubset(self, w_other): + return self.strategy.issubset(self, w_other) + + def isdisjoint(self, w_other): + return self.strategy.isdisjoint(self, w_other) + def update(self, w_other): self.strategy.update(self, w_other) @@ -112,9 +152,6 @@ def __init__(self, space): self.space = space - def init_from_setdata(self, w_set, setdata): - raise NotImplementedError - def init_from_w_iterable(self, w_set, setdata): raise NotImplementedError @@ -124,23 +161,24 @@ class AbstractUnwrappedSetStrategy(object): __mixin__ = True - def init_from_setdata(self, w_set, setdata): - #XXX this copies again (see: make_setdata_from_w_iterable) - #XXX cannot store int into r_dict - d = newset(self.space) - for item_w in setdata.keys(): + def get_empty_storage(self): + raise NotImplementedError + + def init_from_w_iterable(self, w_set, w_iterable): + setdata = self.make_setdata_from_w_iterable(w_iterable) + w_set.sstorage = self.cast_to_void_star(setdata) + + def init_from_setdata_w(self, w_set, setdata_w): + d = self.get_empty_dict() + for item_w in setdata_w.keys(): d[self.unwrap(item_w)] = None w_set.sstorage = self.cast_to_void_star(d) - def ____init_from_w_iterable(self, w_set, w_iterable=None): - keys = self.make_setdata_from_w_iterable(w_iterable) - w_set.sstorage = self.cast_to_void_star(keys) - def make_setdata_from_w_iterable(self, w_iterable): """Return a new r_dict with the content of w_iterable.""" if isinstance(w_iterable, W_BaseSetObject): return self.cast_from_void_star(w_set.sstorage).copy() - data = newset(self.space) + data = self.get_empty_dict() if w_iterable is not None: for w_item in self.space.listview(w_iterable): data[self.unwrap(w_item)] = None @@ -153,21 +191,59 @@ self.cast_from_void_star(w_set.sstorage).clear() def copy(self, w_set): - print w_set - d = self.cast_from_void_star(w_set.sstorage).copy() - print d + #XXX do not copy FrozenDict + d = self.cast_from_void_star(w_set.sstorage) #XXX make it faster by using from_storage_and_strategy - clone = instantiate(type(w_set)) - print clone + clone = w_set._newobj(self.space, newset(self.space)) clone.strategy = w_set.strategy + clone.sstorage = self.cast_to_void_star(d.copy()) return clone def add(self, w_set, w_key): - print "hehe" - print w_set - print w_key + if self.is_correct_type(w_key): + d = self.cast_from_void_star(w_set.sstorage) + d[self.unwrap(w_key)] = None + else: + w_set.switch_to_object_strategy(self.space) + w_set.add(w_key) + + def delitem(self, w_set, w_item): d = self.cast_from_void_star(w_set.sstorage) - d[self.unwrap(w_key)] = None + try: + del d[self.unwrap(w_item)] + except KeyError: + raise + + def discard(self, w_set, w_item): + d = self.cast_from_void_star(w_set.sstorage) + try: + del d[self.unwrap(w_item)] + return True + except KeyError: + return False + except OperationError, e: + if not e.match(self.space, self.space.w_TypeError): + raise + w_f = _convert_set_to_frozenset(self.space, w_item) + if w_f is None: + raise + try: + del d[w_f] + return True + except KeyError: + return False + except OperationError, e: + #XXX is this ever tested? + if not e.match(space, space.w_TypeError): + raise + return False + + def getdict_w(self, w_set): + result = newset(self.space) + keys = self.cast_from_void_star(w_set.sstorage).keys() + for key in keys: + result[self.wrap(key)] = None + return result def getkeys(self, w_set): keys = self.cast_from_void_star(w_set.sstorage).keys() @@ -175,8 +251,8 @@ return keys_w def has_key(self, w_set, w_key): - items_w = self.cast_from_void_star(w_set.sstorage) - return w_key in items_w + dict_w = self.cast_from_void_star(w_set.sstorage) + return self.unwrap(w_key) in dict_w def equals(self, w_set, w_other): if w_set.length() != w_other.length(): @@ -187,6 +263,27 @@ return False return True + def difference(self, w_set, w_other): + result = w_set._newobj(self.space, newset(self.space)) + if not isinstance(w_other, W_BaseSetObject): + #XXX this is bad + setdata = make_setdata_from_w_iterable(self.space, w_other) + w_other = w_set._newobj(self.space, setdata) + for w_key in w_set.getkeys(): + if not w_other.has_key(w_key): + result.add(w_key) + return result + + def difference_update(self, w_set, w_other): + if w_set is w_other: + w_set.clear() # for the case 'a.difference_update(a)' + else: + for w_key in w_other.getkeys(): + try: + self.delitem(w_set, w_key) + except KeyError: + pass + def intersect(self, w_set, w_other): if w_set.length() > w_other.length(): return w_other.intersect(w_set) @@ -205,8 +302,10 @@ for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only - result = w_set.intersect(w_other) + #XXX this creates setobject again + result = result.intersect(w_other) else: + #XXX directly give w_other as argument to result2 result2 = w_set._newobj(self.space, newset(self.space)) for w_key in self.space.listview(w_other): if result.has_key(w_key): @@ -214,6 +313,33 @@ result = result2 return result + def intersect_multiple_update(self, w_set, others_w): + #XXX faster withouth creating the setobject in intersect_multiple + result = self.intersect_multiple(w_set, others_w) + w_set.strategy = result.strategy + w_set.sstorage = result.sstorage + + def issubset(self, w_set, w_other): + if w_set.length() > w_other.length(): + return False + + #XXX add ways without unwrapping if strategies are equal + for w_key in w_set.getkeys(): + if not w_other.has_key(w_key): + return False + return True + + def isdisjoint(self, w_set, w_other): + if w_set.length() > w_other.length(): + return w_other.isdisjoint(w_set) + + d = self.cast_from_void_star(w_set.sstorage) + for key in d: + #XXX no need to wrap, if strategies are equal + if w_other.has_key(self.wrap(key)): + return False + return True + def update(self, w_set, w_other): d = self.cast_from_void_star(w_set.sstorage) if w_set.strategy is self.space.fromcache(ObjectSetStrategy): @@ -224,11 +350,10 @@ return elif w_set.strategy is w_other.strategy: - other = self.cast_to_void_star(w_other.sstorage) + other = self.cast_from_void_star(w_other.sstorage) d.update(other) return - - w_set.switch_to_object_strategy() + w_set.switch_to_object_strategy(self.space) w_set.update(w_other) class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): @@ -236,6 +361,13 @@ cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) + def get_empty_dict(self): + return {} + + def is_correct_type(self, w_key): + from pypy.objspace.std.intobject import W_IntObject + return type(w_key) is W_IntObject + def unwrap(self, w_item): return self.space.unwrap(w_item) @@ -247,6 +379,12 @@ cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) + def get_empty_dict(self): + return newset(self.space) + + def is_correct_type(self, w_key): + return True + def unwrap(self, w_item): return w_item @@ -260,7 +398,7 @@ w_self.content = content = setdata w_self.len = len(content) w_self.pos = 0 - w_self.iterator = w_self.content.iterkeys() + w_self.iterator = iter(w_self.content) def next_entry(w_self): for w_key in w_self.iterator: @@ -302,9 +440,11 @@ return r_dict(space.eq_w, space.hash_w) def make_setdata_from_w_iterable(space, w_iterable=None): + #XXX remove this later """Return a new r_dict with the content of w_iterable.""" if isinstance(w_iterable, W_BaseSetObject): - return w_iterable.setdata.copy() + #XXX is this bad or not? + return w_iterable.getdict_w() data = newset(space) if w_iterable is not None: for w_item in space.listview(w_iterable): @@ -314,13 +454,16 @@ def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() if w_iterable is not None: - setdata = make_setdata_from_w_iterable(space, w_iterable) - #XXX maybe this is not neccessary - w_obj.strategy = get_strategy_from_setdata(space, setdata) - w_obj.strategy.init_from_setdata(w_obj, setdata) + w_obj.strategy = get_strategy_from_w_iterable(space, w_iterable) + w_obj.strategy.init_from_w_iterable(w_obj, w_iterable) def _convert_set_to_frozenset(space, w_obj): + #XXX can be optimized if space.is_true(space.isinstance(w_obj, space.w_set)): + w_frozen = instantiate(W_FrozensetObject) + w_frozen.strategy = w_obj.strategy + w_frozen.sstorage = w_obj.sstorage + return w_frozen return W_FrozensetObject(space, make_setdata_from_w_iterable(space, w_obj)) else: @@ -377,13 +520,12 @@ def set_update__Set(space, w_left, others_w): """Update a set with the union of itself and another.""" - ld = w_left.setdata for w_other in others_w: if isinstance(w_other, W_BaseSetObject): - ld.update(w_other.setdata) # optimization only + w_left.update(w_other) # optimization only else: for w_key in space.listview(w_other): - ld[w_key] = None + w_left.add(w_key) def inplace_or__Set_Set(space, w_left, w_other): ld, rd = w_left.setdata, w_other.setdata @@ -400,7 +542,7 @@ w_left.add(w_other) def set_copy__Set(space, w_set): - return w_set._newobj(space, w_set.setdata.copy()) + return w_set.copy() def frozenset_copy__Frozenset(space, w_left): if type(w_left) is W_FrozensetObject: @@ -421,30 +563,31 @@ sub__Frozenset_Frozenset = sub__Set_Set def set_difference__Set(space, w_left, others_w): - result = w_left.setdata if len(others_w) == 0: - result = result.copy() + return w_left.copy() + result = w_left for w_other in others_w: - if isinstance(w_other, W_BaseSetObject): - rd = w_other.setdata # optimization only - else: - rd = make_setdata_from_w_iterable(space, w_other) - result = _difference_dict(space, result, rd) - return w_left._newobj(space, result) + result = result.difference(w_other) + #if isinstance(w_other, W_BaseSetObject): + # rd = w_other.setdata # optimization only + #else: + # rd = make_setdata_from_w_iterable(space, w_other) + #result = _difference_dict(space, result, rd) + return result frozenset_difference__Frozenset = set_difference__Set def set_difference_update__Set(space, w_left, others_w): - ld = w_left.setdata for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only - _difference_dict_update(space, ld, w_other.setdata) + w_left.difference_update(w_other) + #_difference_dict_update(space, ld, w_other.setdata) else: for w_key in space.listview(w_other): try: - del ld[w_key] + w_left.delitem(w_key) except KeyError: pass @@ -480,6 +623,7 @@ eq__Frozenset_ANY = eq__Set_ANY def ne__Set_Set(space, w_left, w_other): + return space.wrap(not w_left.equals(w_other)) return space.wrap(not _is_eq(w_left.setdata, w_other.setdata)) ne__Set_Frozenset = ne__Set_Set @@ -503,12 +647,12 @@ def contains__Set_ANY(space, w_left, w_other): try: - return space.newbool(w_other in w_left.setdata) + return space.newbool(w_left.has_key(w_other)) except OperationError, e: if e.match(space, space.w_TypeError): w_f = _convert_set_to_frozenset(space, w_other) if w_f is not None: - return space.newbool(w_f in w_left.setdata) + return space.newbool(w_left.has_key(w_f)) raise contains__Frozenset_ANY = contains__Set_ANY @@ -517,6 +661,8 @@ # optimization only (the general case works too) if space.is_w(w_left, w_other): return space.w_True + return space.wrap(w_left.issubset(w_other)) + ld, rd = w_left.setdata, w_other.setdata return space.wrap(_issubset_dict(ld, rd)) @@ -540,9 +686,11 @@ def set_issuperset__Set_Set(space, w_left, w_other): # optimization only (the general case works too) + #XXX this is the same code as in set_issubset__Set_Set (sets reversed) if space.is_w(w_left, w_other): return space.w_True + return space.wrap(w_other.issubset(w_left)) ld, rd = w_left.setdata, w_other.setdata return space.wrap(_issubset_dict(rd, ld)) @@ -567,7 +715,7 @@ # automatic registration of "lt(x, y)" as "not ge(y, x)" would not give the # correct answer here! def lt__Set_Set(space, w_left, w_other): - if len(w_left.setdata) >= len(w_other.setdata): + if w_left.length() >= w_other.length(): return space.w_False else: return le__Set_Set(space, w_left, w_other) @@ -577,7 +725,7 @@ lt__Frozenset_Frozenset = lt__Set_Set def gt__Set_Set(space, w_left, w_other): - if len(w_left.setdata) <= len(w_other.setdata): + if w_left.length() <= w_other.length(): return space.w_False else: return ge__Set_Set(space, w_left, w_other) @@ -592,6 +740,9 @@ frozenset if the argument is a set. Returns True if successfully removed. """ + x = w_left.discard(w_item) + return x + try: del w_left.setdata[w_item] return True @@ -626,8 +777,8 @@ if w_set.hash != 0: return space.wrap(w_set.hash) hash = 1927868237 - hash *= (len(w_set.setdata) + 1) - for w_item in w_set.setdata: + hash *= (w_set.length() + 1) + for w_item in w_set.getkeys(): h = space.hash_w(w_item) value = ((h ^ (h << 16) ^ 89869747) * multi) hash = intmask(hash ^ value) @@ -668,6 +819,8 @@ for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only + #XXX test this + assert False result = _intersection_dict(space, result, w_other.setdata) else: result2 = newset(space) @@ -679,13 +832,15 @@ def set_intersection__Set(space, w_left, others_w): if len(others_w) == 0: - return w_left.setdata.copy() + return w_left.copy() else: return _intersection_multiple(space, w_left, others_w) frozenset_intersection__Frozenset = set_intersection__Set def set_intersection_update__Set(space, w_left, others_w): + w_left.intersect_multiple_update(others_w) + return result = _intersection_multiple(space, w_left, others_w) w_left.setdata = result @@ -699,6 +854,7 @@ def set_isdisjoint__Set_Set(space, w_left, w_other): # optimization only (the general case works too) + return space.newbool(w_left.isdisjoint(w_other)) ld, rd = w_left.setdata, w_other.setdata disjoint = _isdisjoint_dict(ld, rd) return space.newbool(disjoint) @@ -708,9 +864,10 @@ set_isdisjoint__Frozenset_Set = set_isdisjoint__Set_Set def set_isdisjoint__Set_ANY(space, w_left, w_other): - ld = w_left.setdata + #XXX maybe checking if type fits strategy first (before comparing) speeds this up a bit + # since this will be used in many other functions -> general function for that for w_key in space.listview(w_other): - if w_key in ld: + if w_left.has_key(w_key): return space.w_False return space.w_True @@ -771,27 +928,24 @@ or__Frozenset_Frozenset = or__Set_Set def set_union__Set(space, w_left, others_w): - print "hallo", w_left result = w_left.copy() - print result for w_other in others_w: if isinstance(w_other, W_BaseSetObject): result.update(w_other) # optimization only else: for w_key in space.listview(w_other): - print result result.add(w_key) return result frozenset_union__Frozenset = set_union__Set def len__Set(space, w_left): - return space.newint(len(w_left.setdata)) + return space.newint(w_left.length()) len__Frozenset = len__Set def iter__Set(space, w_left): - return W_SetIterObject(w_left.setdata) + return W_SetIterObject(w_left.getkeys()) iter__Frozenset = iter__Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -55,13 +55,16 @@ a = set([1,2,3]) b = set() b.add(4) - a.union(b) - assert a == set([1,2,3,4]) + c = a.union(b) + assert c == set([1,2,3,4]) def test_subtype(self): class subset(set):pass a = subset() + print "a: ", type(a) b = a | set('abc') + print b + print "b: ", type(b) assert type(b) is subset def test_union(self): From noreply at buildbot.pypy.org Thu Nov 10 13:49:18 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:18 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: Cleaned up setobject.py Message-ID: <20111110124918.D447B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49138:4f1baf0b12d1 Date: 2011-05-01 16:53 +0200 http://bitbucket.org/pypy/pypy/changeset/4f1baf0b12d1/ Log: Cleaned up setobject.py diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -234,6 +234,7 @@ return False except OperationError, e: #XXX is this ever tested? + assert False if not e.match(space, space.w_TypeError): raise return False @@ -464,38 +465,11 @@ w_frozen.strategy = w_obj.strategy w_frozen.sstorage = w_obj.sstorage return w_frozen - return W_FrozensetObject(space, - make_setdata_from_w_iterable(space, w_obj)) else: return None # helper functions for set operation on dicts -def _difference_dict(space, ld, rd): - result = newset(space) - for w_key in ld: - if w_key not in rd: - result[w_key] = None - return result - -def _difference_dict_update(space, ld, rd): - if ld is rd: - ld.clear() # for the case 'a.difference_update(a)' - else: - for w_key in rd: - try: - del ld[w_key] - except KeyError: - pass - -def _isdisjoint_dict(ld, rd): - if len(ld) > len(rd): - ld, rd = rd, ld # loop over the smaller dict - for w_key in ld: - if w_key in rd: - return False - return True - def _symmetric_difference_dict(space, ld, rd): result = newset(space) for w_key in ld: @@ -568,11 +542,6 @@ result = w_left for w_other in others_w: result = result.difference(w_other) - #if isinstance(w_other, W_BaseSetObject): - # rd = w_other.setdata # optimization only - #else: - # rd = make_setdata_from_w_iterable(space, w_other) - #result = _difference_dict(space, result, rd) return result frozenset_difference__Frozenset = set_difference__Set @@ -583,7 +552,6 @@ if isinstance(w_other, W_BaseSetObject): # optimization only w_left.difference_update(w_other) - #_difference_dict_update(space, ld, w_other.setdata) else: for w_key in space.listview(w_other): try: @@ -624,7 +592,6 @@ def ne__Set_Set(space, w_left, w_other): return space.wrap(not w_left.equals(w_other)) - return space.wrap(not _is_eq(w_left.setdata, w_other.setdata)) ne__Set_Frozenset = ne__Set_Set ne__Frozenset_Frozenset = ne__Set_Set @@ -662,9 +629,6 @@ if space.is_w(w_left, w_other): return space.w_True return space.wrap(w_left.issubset(w_other)) - - ld, rd = w_left.setdata, w_other.setdata - return space.wrap(_issubset_dict(ld, rd)) set_issubset__Set_Frozenset = set_issubset__Set_Set frozenset_issubset__Frozenset_Set = set_issubset__Set_Set @@ -691,8 +655,6 @@ return space.w_True return space.wrap(w_other.issubset(w_left)) - ld, rd = w_left.setdata, w_other.setdata - return space.wrap(_issubset_dict(rd, ld)) set_issuperset__Set_Frozenset = set_issuperset__Set_Set set_issuperset__Frozenset_Set = set_issuperset__Set_Set @@ -815,21 +777,6 @@ def _intersection_multiple(space, w_left, others_w): return w_left.intersect_multiple(others_w) - result = w_left.setdata - for w_other in others_w: - if isinstance(w_other, W_BaseSetObject): - # optimization only - #XXX test this - assert False - result = _intersection_dict(space, result, w_other.setdata) - else: - result2 = newset(space) - for w_key in space.listview(w_other): - if w_key in result: - result2[w_key] = None - result = result2 - return result - def set_intersection__Set(space, w_left, others_w): if len(others_w) == 0: return w_left.copy() @@ -841,8 +788,6 @@ def set_intersection_update__Set(space, w_left, others_w): w_left.intersect_multiple_update(others_w) return - result = _intersection_multiple(space, w_left, others_w) - w_left.setdata = result def inplace_and__Set_Set(space, w_left, w_other): ld, rd = w_left.setdata, w_other.setdata @@ -855,9 +800,6 @@ def set_isdisjoint__Set_Set(space, w_left, w_other): # optimization only (the general case works too) return space.newbool(w_left.isdisjoint(w_other)) - ld, rd = w_left.setdata, w_other.setdata - disjoint = _isdisjoint_dict(ld, rd) - return space.newbool(disjoint) set_isdisjoint__Set_Frozenset = set_isdisjoint__Set_Set set_isdisjoint__Frozenset_Frozenset = set_isdisjoint__Set_Set From noreply at buildbot.pypy.org Thu Nov 10 13:49:20 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:20 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for inplace_or Message-ID: <20111110124920.0CC4A8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49139:24946aadee81 Date: 2011-05-01 17:03 +0200 http://bitbucket.org/pypy/pypy/changeset/24946aadee81/ Log: added test and fix for inplace_or diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -502,8 +502,7 @@ w_left.add(w_key) def inplace_or__Set_Set(space, w_left, w_other): - ld, rd = w_left.setdata, w_other.setdata - ld.update(rd) + w_left.update(w_other) return w_left inplace_or__Set_Frozenset = inplace_or__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -51,6 +51,7 @@ assert self.space.eq_w(s,u) class AppTestAppSetTest: + def test_simple(self): a = set([1,2,3]) b = set() @@ -58,13 +59,19 @@ c = a.union(b) assert c == set([1,2,3,4]) + def test_or(self): + a = set([0,1,2]) + b = a | set([1,2,3]) + assert b == set([0,1,2,3]) + + # test inplace or + a |= set([1,2,3]) + assert a == b + def test_subtype(self): class subset(set):pass a = subset() - print "a: ", type(a) b = a | set('abc') - print b - print "b: ", type(b) assert type(b) is subset def test_union(self): @@ -354,3 +361,4 @@ assert s == set([2,3]) s.difference_update(s) assert s == set([]) + From noreply at buildbot.pypy.org Thu Nov 10 13:49:21 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:21 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fix and tests for clear and __sub__ Message-ID: <20111110124921.3C63D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49140:d6824feeab55 Date: 2011-05-01 17:15 +0200 http://bitbucket.org/pypy/pypy/changeset/d6824feeab55/ Log: added fix and tests for clear and __sub__ diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -524,12 +524,10 @@ return set_copy__Set(space, w_left) def set_clear__Set(space, w_left): - w_left.setdata.clear() + w_left.clear() def sub__Set_Set(space, w_left, w_other): - ld, rd = w_left.setdata, w_other.setdata - new_ld = _difference_dict(space, ld, rd) - return w_left._newobj(space, new_ld) + return w_left.difference(w_other) sub__Set_Frozenset = sub__Set_Set sub__Frozenset_Set = sub__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -68,6 +68,17 @@ a |= set([1,2,3]) assert a == b + def test_clear(self): + a = set([1,2,3]) + a.clear() + assert a == set() + + def test_sub(self): + a = set([1,2,3,4,5]) + b = set([2,3,4]) + a - b == [1,5] + a.__sub__(b) == [1,5] + def test_subtype(self): class subset(set):pass a = subset() From noreply at buildbot.pypy.org Thu Nov 10 13:49:22 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:22 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: another test for discard; cleaned up discard code Message-ID: <20111110124922.6CB3C8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49141:1e7b0dec4883 Date: 2011-05-01 17:25 +0200 http://bitbucket.org/pypy/pypy/changeset/1e7b0dec4883/ Log: another test for discard; cleaned up discard code diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -208,6 +208,7 @@ w_set.add(w_key) def delitem(self, w_set, w_item): + # only used internally d = self.cast_from_void_star(w_set.sstorage) try: del d[self.unwrap(w_item)] @@ -702,28 +703,6 @@ x = w_left.discard(w_item) return x - try: - del w_left.setdata[w_item] - return True - except KeyError: - return False - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - w_f = _convert_set_to_frozenset(space, w_item) - if w_f is None: - raise - - try: - del w_left.setdata[w_f] - return True - except KeyError: - return False - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - return False - def set_discard__Set_ANY(space, w_left, w_item): _discard_from_set(space, w_left, w_item) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -79,6 +79,15 @@ a - b == [1,5] a.__sub__(b) == [1,5] + def test_discard_remove(self): + a = set([1,2,3,4,5]) + a.remove(1) + assert a == set([2,3,4,5]) + a.discard(2) + assert a == set([3,4,5]) + + raises(KeyError, "a.remove(6)") + def test_subtype(self): class subset(set):pass a = subset() From noreply at buildbot.pypy.org Thu Nov 10 13:49:23 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:23 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: test and fix for W_SetObject.pop() Message-ID: <20111110124923.9BEFB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49142:8ade98db780b Date: 2011-05-01 17:35 +0200 http://bitbucket.org/pypy/pypy/changeset/8ade98db780b/ Log: test and fix for W_SetObject.pop() diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -208,7 +208,7 @@ w_set.add(w_key) def delitem(self, w_set, w_item): - # only used internally + # not a normal set operation; only used internally d = self.cast_from_void_star(w_set.sstorage) try: del d[self.unwrap(w_item)] @@ -729,12 +729,14 @@ return space.wrap(hash) def set_pop__Set(space, w_left): - for w_key in w_left.setdata: + #XXX move this to strategy so we don't have to + # wrap all items only to get the first one + for w_key in w_left.getkeys(): break else: raise OperationError(space.w_KeyError, space.wrap('pop from an empty set')) - del w_left.setdata[w_key] + w_left.delitem(w_key) return w_key def and__Set_Set(space, w_left, w_other): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -88,6 +88,13 @@ raises(KeyError, "a.remove(6)") + def test_pop(self): + a = set([1,2,3,4,5]) + for i in xrange(5): + a.pop() + assert a == set() + raises(KeyError, "a.pop()") + def test_subtype(self): class subset(set):pass a = subset() From noreply at buildbot.pypy.org Thu Nov 10 13:49:24 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:24 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for inplace sub Message-ID: <20111110124924.CB9EE8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49143:6b7510a9e193 Date: 2011-05-02 13:05 +0200 http://bitbucket.org/pypy/pypy/changeset/6b7510a9e193/ Log: added test and fix for inplace sub diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -558,8 +558,7 @@ pass def inplace_sub__Set_Set(space, w_left, w_other): - ld, rd = w_left.setdata, w_other.setdata - _difference_dict_update(space, ld, rd) + w_left.difference_update(w_other) return w_left inplace_sub__Set_Frozenset = inplace_sub__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -79,6 +79,12 @@ a - b == [1,5] a.__sub__(b) == [1,5] + #inplace sub + a = set([1,2,3,4]) + b = set([1,4]) + a -= b + assert a == set([2,3]) + def test_discard_remove(self): a = set([1,2,3,4,5]) a.remove(1) From noreply at buildbot.pypy.org Thu Nov 10 13:49:26 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:26 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for issubset and issuperset Message-ID: <20111110124926.06A288292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49144:b22d4b425150 Date: 2011-05-02 13:27 +0200 http://bitbucket.org/pypy/pypy/changeset/b22d4b425150/ Log: added test and fix for issubset and issuperset diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -322,6 +322,10 @@ w_set.sstorage = result.sstorage def issubset(self, w_set, w_other): + if not isinstance(w_other, W_BaseSetObject): + setdata = make_setdata_from_w_iterable(self.space, w_other) + w_other = w_set._newobj(self.space, setdata) + if w_set.length() > w_other.length(): return False @@ -572,7 +576,7 @@ eq__Frozenset_Set = eq__Set_Set def eq__Set_settypedef(space, w_left, w_other): - #XXX what is faster: wrapping w_left or creating set from w_other + #XXX dont know how to test this rd = make_setdata_from_w_iterable(space, w_other) return space.wrap(_is_eq(w_left.setdata, rd)) @@ -635,8 +639,7 @@ if space.is_w(w_left, w_other): return space.w_True - ld, rd = w_left.setdata, make_setdata_from_w_iterable(space, w_other) - return space.wrap(_issubset_dict(ld, rd)) + return space.wrap(w_left.issubset(w_other)) frozenset_issubset__Frozenset_ANY = set_issubset__Set_ANY @@ -661,8 +664,11 @@ if space.is_w(w_left, w_other): return space.w_True - ld, rd = w_left.setdata, make_setdata_from_w_iterable(space, w_other) - return space.wrap(_issubset_dict(rd, ld)) + #XXX BAD + setdata = make_setdata_from_w_iterable(space, w_other) + w_other = w_left._newobj(space, setdata) + + return space.wrap(w_other.issubset(w_left)) frozenset_issuperset__Frozenset_ANY = set_issuperset__Set_ANY diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -85,6 +85,20 @@ a -= b assert a == set([2,3]) + def test_issubset(self): + a = set([1,2,3,4]) + b = set([2,3]) + assert b.issubset(a) + c = [1,2,3,4] + assert b.issubset(c) + + def test_issuperset(self): + a = set([1,2,3,4]) + b = set([2,3]) + assert a.issuperset(b) + c = [2,3] + assert a.issuperset(c) + def test_discard_remove(self): a = set([1,2,3,4,5]) a.remove(1) From noreply at buildbot.pypy.org Thu Nov 10 13:49:27 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:27 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for inplace_and Message-ID: <20111110124927.36FBC8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49145:bf74909839b4 Date: 2011-05-02 13:43 +0200 http://bitbucket.org/pypy/pypy/changeset/bf74909839b4/ Log: added test and fix for inplace_and diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -116,6 +116,9 @@ def intersect(self, w_other): return self.strategy.intersect(self, w_other) + def intersect_update(self, w_other): + return self.strategy.intersect_update(self, w_other) + def intersect_multiple(self, others_w): return self.strategy.intersect_multiple(self, others_w) @@ -299,6 +302,22 @@ result.add(w_key) return result + def intersect_update(self, w_set, w_other): + if w_set.length() > w_other.length(): + return w_other.intersect(w_set) + + setdata = newset(self.space) + items = self.cast_from_void_star(w_set.sstorage).keys() + for key in items: + w_key = self.wrap(key) + if w_other.has_key(w_key): + setdata[w_key] = None + + # do not switch strategy here if other items match + w_set.strategy = strategy = self.space.fromcache(ObjectSetStrategy) + w_set.sstorage = strategy.cast_to_void_star(setdata) + return w_set + def intersect_multiple(self, w_set, others_w): result = w_set for w_other in others_w: @@ -747,11 +766,6 @@ def and__Set_Set(space, w_left, w_other): new_set = w_left.intersect(w_other) return new_set - ld, rd = w_left.setdata, w_other.setdata - new_ld = _intersection_dict(space, ld, rd) - #XXX when both have same strategy, ini new set from storage - # therefore this must be moved to strategies - return w_left._newobj(space, new_ld) and__Set_Frozenset = and__Set_Set and__Frozenset_Set = and__Set_Set @@ -773,10 +787,7 @@ return def inplace_and__Set_Set(space, w_left, w_other): - ld, rd = w_left.setdata, w_other.setdata - new_ld = _intersection_dict(space, ld, rd) - w_left.setdata = new_ld - return w_left + return w_left.intersect_update(w_other) inplace_and__Set_Frozenset = inplace_and__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -99,6 +99,12 @@ c = [2,3] assert a.issuperset(c) + def test_inplace_and(test): + a = set([1,2,3,4]) + b = set([0,2,3,5,6]) + a &= b + assert a == set([2,3]) + def test_discard_remove(self): a = set([1,2,3,4,5]) a.remove(1) From noreply at buildbot.pypy.org Thu Nov 10 13:49:28 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:28 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fixes and tests for symmetric_difference[_update] Message-ID: <20111110124928.6623C8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49146:28ab4895a815 Date: 2011-05-10 11:59 +0200 http://bitbucket.org/pypy/pypy/changeset/28ab4895a815/ Log: added fixes and tests for symmetric_difference[_update] diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -113,6 +113,12 @@ def difference_update(self, w_other): return self.strategy.difference_update(self, w_other) + def symmetric_difference(self, w_other): + return self.strategy.symmetric_difference(self, w_other) + + def symmetric_difference_update(self, w_other): + return self.strategy.symmetric_difference_update(self, w_other) + def intersect(self, w_other): return self.strategy.intersect(self, w_other) @@ -289,6 +295,31 @@ except KeyError: pass + def symmetric_difference(self, w_set, w_other): + #XXX no wrapping when strategies are equal + result = w_set._newobj(self.space, newset(self.space)) + for w_key in w_set.getkeys(): + if not w_other.has_key(w_key): + result.add(w_key) + for w_key in w_other.getkeys(): + if not w_set.has_key(w_key): + result.add(w_key) + return result + + def symmetric_difference_update(self, w_set, w_other): + #XXX no wrapping when strategies are equal + newsetdata = newset(self.space) + for w_key in w_set.getkeys(): + if not w_other.has_key(w_key): + newsetdata[w_key] = None + for w_key in w_other.getkeys(): + if not w_set.has_key(w_key): + newsetdata[w_key] = None + + # do not switch strategy here if other items match + w_set.strategy = strategy = self.space.fromcache(ObjectSetStrategy) + w_set.sstorage = strategy.cast_to_void_star(newsetdata) + def intersect(self, w_set, w_other): if w_set.length() > w_other.length(): return w_other.intersect(w_set) @@ -811,9 +842,8 @@ def set_symmetric_difference__Set_Set(space, w_left, w_other): # optimization only (the general case works too) - ld, rd = w_left.setdata, w_other.setdata - new_ld = _symmetric_difference_dict(space, ld, rd) - return w_left._newobj(space, new_ld) + w_result = w_left.symmetric_difference(w_other) + return w_result set_symmetric_difference__Set_Frozenset = set_symmetric_difference__Set_Set set_symmetric_difference__Frozenset_Set = set_symmetric_difference__Set_Set @@ -827,26 +857,28 @@ def set_symmetric_difference__Set_ANY(space, w_left, w_other): - ld, rd = w_left.setdata, make_setdata_from_w_iterable(space, w_other) - new_ld = _symmetric_difference_dict(space, ld, rd) - return w_left._newobj(space, new_ld) + #XXX deal with iterables withouth turning them into sets + setdata = make_setdata_from_w_iterable(space, w_other) + w_other_as_set = w_left._newobj(space, setdata) + + w_result = w_left.symmetric_difference(w_other_as_set) + return w_result frozenset_symmetric_difference__Frozenset_ANY = \ set_symmetric_difference__Set_ANY def set_symmetric_difference_update__Set_Set(space, w_left, w_other): # optimization only (the general case works too) - ld, rd = w_left.setdata, w_other.setdata - new_ld = _symmetric_difference_dict(space, ld, rd) - w_left.setdata = new_ld + w_left.symmetric_difference_update(w_other) set_symmetric_difference_update__Set_Frozenset = \ set_symmetric_difference_update__Set_Set def set_symmetric_difference_update__Set_ANY(space, w_left, w_other): - ld, rd = w_left.setdata, make_setdata_from_w_iterable(space, w_other) - new_ld = _symmetric_difference_dict(space, ld, rd) - w_left.setdata = new_ld + #XXX deal with iterables withouth turning them into sets + setdata = make_setdata_from_w_iterable(space, w_other) + w_other_as_set = w_left._newobj(space, setdata) + w_left.symmetric_difference_update(w_other_as_set) def inplace_xor__Set_Set(space, w_left, w_other): set_symmetric_difference_update__Set_Set(space, w_left, w_other) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -121,6 +121,33 @@ assert a == set() raises(KeyError, "a.pop()") + def test_symmetric_difference(self): + a = set([1,2,3]) + b = set([3,4,5]) + c = a.symmetric_difference(b) + assert c == set([1,2,4,5]) + + a = set([1,2,3]) + b = [3,4,5] + c = a.symmetric_difference(b) + assert c == set([1,2,4,5]) + + def test_symmetric_difference_update(self): + a = set([1,2,3]) + b = set([3,4,5]) + a.symmetric_difference_update(b) + assert a == set([1,2,4,5]) + + a = set([1,2,3]) + b = [3,4,5] + a.symmetric_difference_update(b) + assert a == set([1,2,4,5]) + + a = set([1,2,3]) + b = set([3,4,5]) + a ^= b + assert a == set([1,2,4,5]) + def test_subtype(self): class subset(set):pass a = subset() From noreply at buildbot.pypy.org Thu Nov 10 13:49:29 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:29 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed eq__Set_settypedef Message-ID: <20111110124929.94F078292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49147:60ddcb62aeca Date: 2011-05-10 13:41 +0200 http://bitbucket.org/pypy/pypy/changeset/60ddcb62aeca/ Log: fixed eq__Set_settypedef diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -626,9 +626,11 @@ eq__Frozenset_Set = eq__Set_Set def eq__Set_settypedef(space, w_left, w_other): - #XXX dont know how to test this - rd = make_setdata_from_w_iterable(space, w_other) - return space.wrap(_is_eq(w_left.setdata, rd)) + # tested in test_buildinshortcut.py + #XXX do not make new setobject here + setdata = make_setdata_from_w_iterable(space, w_other) + w_other_as_set = w_left._newobj(space, setdata) + return space.wrap(w_left.equals(w_other)) eq__Set_frozensettypedef = eq__Set_settypedef eq__Frozenset_settypedef = eq__Set_settypedef From noreply at buildbot.pypy.org Thu Nov 10 13:49:30 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:30 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for set() Message-ID: <20111110124930.C4C718292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49148:ffa5d9dadcfe Date: 2011-05-11 11:19 +0200 http://bitbucket.org/pypy/pypy/changeset/ffa5d9dadcfe/ Log: added test and fix for set() diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -10,6 +10,8 @@ from pypy.objspace.std.frozensettype import frozenset_typedef as frozensettypedef from pypy.rlib import rerased from pypy.rlib.objectmodel import instantiate +from pypy.interpreter.generator import GeneratorIterator +from pypy.objspace.std.listobject import W_ListObject def get_strategy_from_w_iterable(space, w_iterable=None): from pypy.objspace.std.intobject import W_IntObject @@ -510,6 +512,8 @@ def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() if w_iterable is not None: + if isinstance(w_iterable, GeneratorIterator): + w_iterable = W_ListObject(space.listview(w_iterable)) w_obj.strategy = get_strategy_from_w_iterable(space, w_iterable) w_obj.strategy.init_from_w_iterable(w_obj, w_iterable) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -59,6 +59,16 @@ c = a.union(b) assert c == set([1,2,3,4]) + def test_generator(self): + def foo(): + for i in [1,2,3,4,5]: + yield i + b = set(foo()) + assert b == set([1,2,3,4,5]) + + a = set(x for x in [1,2,3]) + assert a == set([1,2,3]) + def test_or(self): a = set([0,1,2]) b = a | set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:49:31 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:31 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored initialisation of W_SetObject Message-ID: <20111110124931.F395B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49149:d926be3f2432 Date: 2011-05-11 13:33 +0200 http://bitbucket.org/pypy/pypy/changeset/d926be3f2432/ Log: refactored initialisation of W_SetObject diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -14,6 +14,7 @@ from pypy.objspace.std.listobject import W_ListObject def get_strategy_from_w_iterable(space, w_iterable=None): + assert False from pypy.objspace.std.intobject import W_IntObject #XXX what types for w_iterable are possible @@ -50,8 +51,10 @@ """Initialize the set by taking ownership of 'setdata'.""" assert setdata is not None w_self.space = space #XXX less memory without this indirection? - w_self.strategy = get_strategy_from_w_iterable(space, setdata.keys()) - w_self.strategy.init_from_setdata_w(w_self, setdata) + #XXX in case of ObjectStrategy we can reuse the setdata object + set_strategy_and_setdata(space, w_self, setdata.keys()) + #w_self.strategy = get_strategy_from_w_iterable(space, setdata.keys()) + #w_self.strategy.init_from_setdata_w(w_self, setdata) def __repr__(w_self): """representation for debugging purposes""" @@ -185,6 +188,12 @@ d[self.unwrap(item_w)] = None w_set.sstorage = self.cast_to_void_star(d) + def get_storage_from_list(self, list_w): + setdata = self.get_empty_dict() + for w_item in list_w: + setdata[self.unwrap(w_item)] = None + return self.cast_to_void_star(setdata) + def make_setdata_from_w_iterable(self, w_iterable): """Return a new r_dict with the content of w_iterable.""" if isinstance(w_iterable, W_BaseSetObject): @@ -437,6 +446,9 @@ cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) + def get_empty_storage(self): + return self.cast_to_void_star(newset(self.space)) + def get_empty_dict(self): return newset(self.space) @@ -497,6 +509,34 @@ def newset(space): return r_dict(space.eq_w, space.hash_w) +def set_strategy_and_setdata(space, w_set, w_iterable): + from pypy.objspace.std.intobject import W_IntObject + + if w_iterable is None: + w_set.strategy = space.fromcache(ObjectSetStrategy) #XXX EmptySetStrategy + w_set.sstorage = w_set.strategy.get_empty_storage() + return + + if isinstance(w_iterable, W_BaseSetObject): + w_set.strategy = w_iterable.strategy + w_set.sstorage = w_iterable.sstorage + return + + if not isinstance(w_iterable, list): + w_iterable = space.listview(w_iterable) + + # check for integers + for item_w in w_iterable: + if type(item_w) is not W_IntObject: + break; + if item_w is w_iterable[:-1]: + w_set.strategy = space.fromcache(IntegerSetStrategy) + w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) + return + + w_set.strategy = space.fromcache(ObjectSetStrategy) + w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) + def make_setdata_from_w_iterable(space, w_iterable=None): #XXX remove this later """Return a new r_dict with the content of w_iterable.""" @@ -511,6 +551,8 @@ def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() + set_strategy_and_setdata(space, w_obj, w_iterable) + return if w_iterable is not None: if isinstance(w_iterable, GeneratorIterator): w_iterable = W_ListObject(space.listview(w_iterable)) From noreply at buildbot.pypy.org Thu Nov 10 13:49:33 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:33 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactoring: replaced issubset by issuperset Message-ID: <20111110124933.30FA68292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49150:f18f4f0d0e3e Date: 2011-05-11 14:35 +0200 http://bitbucket.org/pypy/pypy/changeset/f18f4f0d0e3e/ Log: refactoring: replaced issubset by issuperset diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -136,6 +136,9 @@ def intersect_multiple_update(self, others_w): self.strategy.intersect_multiple_update(self, others_w) + def issuperset(self, w_other): + return self.strategy.issuperset(self, w_other) + def issubset(self, w_other): return self.strategy.issubset(self, w_other) @@ -382,17 +385,11 @@ w_set.strategy = result.strategy w_set.sstorage = result.sstorage - def issubset(self, w_set, w_other): - if not isinstance(w_other, W_BaseSetObject): - setdata = make_setdata_from_w_iterable(self.space, w_other) - w_other = w_set._newobj(self.space, setdata) - - if w_set.length() > w_other.length(): + def issuperset(self, w_set, w_other): + if w_set.length() < self.space.unwrap(self.space.len(w_other)): return False - - #XXX add ways without unwrapping if strategies are equal - for w_key in w_set.getkeys(): - if not w_other.has_key(w_key): + for w_key in self.space.unpackiterable(w_other): + if not w_set.has_key(w_key): return False return True @@ -727,7 +724,7 @@ # optimization only (the general case works too) if space.is_w(w_left, w_other): return space.w_True - return space.wrap(w_left.issubset(w_other)) + return space.wrap(w_other.issuperset(w_left)) set_issubset__Set_Frozenset = set_issubset__Set_Set frozenset_issubset__Frozenset_Set = set_issubset__Set_Set @@ -737,7 +734,11 @@ if space.is_w(w_left, w_other): return space.w_True - return space.wrap(w_left.issubset(w_other)) + # this is faster when w_other is a set + w_other_as_set = w_left._newobj(space, newset(space)) + set_strategy_and_setdata(space, w_other_as_set, w_other) + + return space.wrap(w_other_as_set.issuperset(w_left)) frozenset_issubset__Frozenset_ANY = set_issubset__Set_ANY @@ -748,11 +749,9 @@ def set_issuperset__Set_Set(space, w_left, w_other): # optimization only (the general case works too) - #XXX this is the same code as in set_issubset__Set_Set (sets reversed) if space.is_w(w_left, w_other): return space.w_True - - return space.wrap(w_other.issubset(w_left)) + return space.wrap(w_left.issuperset(w_other)) set_issuperset__Set_Frozenset = set_issuperset__Set_Set set_issuperset__Frozenset_Set = set_issuperset__Set_Set @@ -762,11 +761,7 @@ if space.is_w(w_left, w_other): return space.w_True - #XXX BAD - setdata = make_setdata_from_w_iterable(space, w_other) - w_other = w_left._newobj(space, setdata) - - return space.wrap(w_other.issubset(w_left)) + return space.wrap(w_left.issuperset(w_other)) frozenset_issuperset__Frozenset_ANY = set_issuperset__Set_ANY From noreply at buildbot.pypy.org Thu Nov 10 13:49:34 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:34 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: replaced more make_setdata_from_w_iterbale by _newobj() and set_strategy_from_w_iterable() Message-ID: <20111110124934.60E3E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49151:fd3571e19e87 Date: 2011-05-11 16:29 +0200 http://bitbucket.org/pypy/pypy/changeset/fd3571e19e87/ Log: replaced more make_setdata_from_w_iterbale by _newobj() and set_strategy_from_w_iterable() diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -53,8 +53,6 @@ w_self.space = space #XXX less memory without this indirection? #XXX in case of ObjectStrategy we can reuse the setdata object set_strategy_and_setdata(space, w_self, setdata.keys()) - #w_self.strategy = get_strategy_from_w_iterable(space, setdata.keys()) - #w_self.strategy.init_from_setdata_w(w_self, setdata) def __repr__(w_self): """representation for debugging purposes""" @@ -169,9 +167,6 @@ def __init__(self, space): self.space = space - def init_from_w_iterable(self, w_set, setdata): - raise NotImplementedError - def length(self, w_set): raise NotImplementedError @@ -181,10 +176,6 @@ def get_empty_storage(self): raise NotImplementedError - def init_from_w_iterable(self, w_set, w_iterable): - setdata = self.make_setdata_from_w_iterable(w_iterable) - w_set.sstorage = self.cast_to_void_star(setdata) - def init_from_setdata_w(self, w_set, setdata_w): d = self.get_empty_dict() for item_w in setdata_w.keys(): @@ -197,16 +188,6 @@ setdata[self.unwrap(w_item)] = None return self.cast_to_void_star(setdata) - def make_setdata_from_w_iterable(self, w_iterable): - """Return a new r_dict with the content of w_iterable.""" - if isinstance(w_iterable, W_BaseSetObject): - return self.cast_from_void_star(w_set.sstorage).copy() - data = self.get_empty_dict() - if w_iterable is not None: - for w_item in self.space.listview(w_iterable): - data[self.unwrap(w_item)] = None - return data - def length(self, w_set): return len(self.cast_from_void_star(w_set.sstorage)) @@ -291,9 +272,10 @@ def difference(self, w_set, w_other): result = w_set._newobj(self.space, newset(self.space)) if not isinstance(w_other, W_BaseSetObject): - #XXX this is bad - setdata = make_setdata_from_w_iterable(self.space, w_other) - w_other = w_set._newobj(self.space, setdata) + w_temp = w_set._newobj(self.space, newset(self.space)) + set_strategy_and_setdata(self.space, w_temp, w_other) + w_other = w_temp + # lookup is faster when w_other is set for w_key in w_set.getkeys(): if not w_other.has_key(w_key): result.add(w_key) @@ -549,12 +531,6 @@ def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() set_strategy_and_setdata(space, w_obj, w_iterable) - return - if w_iterable is not None: - if isinstance(w_iterable, GeneratorIterator): - w_iterable = W_ListObject(space.listview(w_iterable)) - w_obj.strategy = get_strategy_from_w_iterable(space, w_iterable) - w_obj.strategy.init_from_w_iterable(w_obj, w_iterable) def _convert_set_to_frozenset(space, w_obj): #XXX can be optimized @@ -671,8 +647,8 @@ def eq__Set_settypedef(space, w_left, w_other): # tested in test_buildinshortcut.py #XXX do not make new setobject here - setdata = make_setdata_from_w_iterable(space, w_other) - w_other_as_set = w_left._newobj(space, setdata) + w_other_as_set = w_left._newobj(space, newset(space)) + set_strategy_and_setdata(space, w_other_as_set, w_other) return space.wrap(w_left.equals(w_other)) eq__Set_frozensettypedef = eq__Set_settypedef @@ -694,6 +670,7 @@ ne__Frozenset_Set = ne__Set_Set def ne__Set_settypedef(space, w_left, w_other): + #XXX this is not tested rd = make_setdata_from_w_iterable(space, w_other) return space.wrap(_is_eq(w_left.setdata, rd)) @@ -900,10 +877,10 @@ def set_symmetric_difference__Set_ANY(space, w_left, w_other): - #XXX deal with iterables withouth turning them into sets - setdata = make_setdata_from_w_iterable(space, w_other) - w_other_as_set = w_left._newobj(space, setdata) - + #XXX since we need to iterate over both objects, create set + # from w_other so looking up items is fast + w_other_as_set = w_left._newobj(space, newset(space)) + set_strategy_and_setdata(space, w_other_as_set, w_other) w_result = w_left.symmetric_difference(w_other_as_set) return w_result @@ -919,8 +896,8 @@ def set_symmetric_difference_update__Set_ANY(space, w_left, w_other): #XXX deal with iterables withouth turning them into sets - setdata = make_setdata_from_w_iterable(space, w_other) - w_other_as_set = w_left._newobj(space, setdata) + w_other_as_set = w_left._newobj(space, newset(space)) + set_strategy_and_setdata(space, w_other_as_set, w_other) w_left.symmetric_difference_update(w_other_as_set) def inplace_xor__Set_Set(space, w_left, w_other): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -10,13 +10,20 @@ import py.test from pypy.objspace.std.setobject import W_SetObject, W_FrozensetObject from pypy.objspace.std.setobject import _initialize_set -from pypy.objspace.std.setobject import newset, make_setdata_from_w_iterable +from pypy.objspace.std.setobject import newset from pypy.objspace.std.setobject import and__Set_Set from pypy.objspace.std.setobject import set_intersection__Set from pypy.objspace.std.setobject import eq__Set_Set letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' +def make_setdata_from_w_iterable(space, w_iterable): + data = newset(space) + if w_iterable is not None: + for w_item in space.listview(w_iterable): + data[w_item] = None + return data + class W_SubSetObject(W_SetObject):pass class TestW_SetObject: From noreply at buildbot.pypy.org Thu Nov 10 13:49:35 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:35 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added from_storage_and_strategy function Message-ID: <20111110124935.8FD618292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49152:1ae8d50ae922 Date: 2011-05-11 18:00 +0200 http://bitbucket.org/pypy/pypy/changeset/1ae8d50ae922/ Log: added from_storage_and_strategy function diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -59,6 +59,20 @@ reprlist = [repr(w_item) for w_item in w_self.getkeys()] return "<%s(%s)>" % (w_self.__class__.__name__, ', '.join(reprlist)) + def from_storage_and_strategy(w_self, storage, strategy): + objtype = type(w_self) + if objtype is W_SetObject: + obj = instantiate(W_SetObject) + elif objtype is W_FrozensetObject: + obj = instantiate(W_FrozensetObject) + else: + itemiterator = w_self.space.iter(W_SetIterObject(newset(w_self.space))) + obj = w_self.space.call_function(w_self.space.type(w_self),itemiterator) + obj.space = w_self.space + obj.strategy = strategy + obj.sstorage = storage + return obj + def _newobj(w_self, space, rdict_w=None): """Make a new set or frozenset by taking ownership of 'rdict_w'.""" #return space.call(space.type(w_self),W_SetIterObject(rdict_w)) @@ -197,10 +211,9 @@ def copy(self, w_set): #XXX do not copy FrozenDict d = self.cast_from_void_star(w_set.sstorage) - #XXX make it faster by using from_storage_and_strategy - clone = w_set._newobj(self.space, newset(self.space)) - clone.strategy = w_set.strategy - clone.sstorage = self.cast_to_void_star(d.copy()) + strategy = w_set.strategy + storage = self.cast_to_void_star(d.copy()) + clone = w_set.from_storage_and_strategy(storage, strategy) return clone def add(self, w_set, w_key): From noreply at buildbot.pypy.org Thu Nov 10 13:49:36 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:36 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: W_SetObject not takes w_iterable as init value instead of r_dict Message-ID: <20111110124936.C249B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49153:ae7f0c3075c5 Date: 2011-05-11 18:57 +0200 http://bitbucket.org/pypy/pypy/changeset/ae7f0c3075c5/ Log: W_SetObject not takes w_iterable as init value instead of r_dict diff --git a/pypy/objspace/std/frozensettype.py b/pypy/objspace/std/frozensettype.py --- a/pypy/objspace/std/frozensettype.py +++ b/pypy/objspace/std/frozensettype.py @@ -44,8 +44,7 @@ w_iterable is not None and type(w_iterable) is W_FrozensetObject): return w_iterable w_obj = space.allocate_instance(W_FrozensetObject, w_frozensettype) - data = make_setdata_from_w_iterable(space, w_iterable) - W_FrozensetObject.__init__(w_obj, space, data) + W_FrozensetObject.__init__(w_obj, space, w_iterable) return w_obj frozenset_typedef = StdTypeDef("frozenset", diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -233,10 +233,7 @@ return W_ComplexObject(x.real, x.imag) if isinstance(x, set): - rdict_w = r_dict(self.eq_w, self.hash_w) - for item in x: - rdict_w[self.wrap(item)] = None - res = W_SetObject(self, rdict_w) + res = W_SetObject(self, [self.wrap(item) for item in x]) return res if isinstance(x, frozenset): diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -47,12 +47,11 @@ return True return False - def __init__(w_self, space, setdata): + def __init__(w_self, space, w_iterable=None): """Initialize the set by taking ownership of 'setdata'.""" - assert setdata is not None w_self.space = space #XXX less memory without this indirection? #XXX in case of ObjectStrategy we can reuse the setdata object - set_strategy_and_setdata(space, w_self, setdata.keys()) + set_strategy_and_setdata(space, w_self, w_iterable) def __repr__(w_self): """representation for debugging purposes""" @@ -73,17 +72,17 @@ obj.sstorage = storage return obj - def _newobj(w_self, space, rdict_w=None): + def _newobj(w_self, space, w_iterable): """Make a new set or frozenset by taking ownership of 'rdict_w'.""" #return space.call(space.type(w_self),W_SetIterObject(rdict_w)) objtype = type(w_self) if objtype is W_SetObject: - obj = W_SetObject(space, rdict_w) + obj = W_SetObject(space, w_iterable) elif objtype is W_FrozensetObject: - obj = W_FrozensetObject(space, rdict_w) + obj = W_FrozensetObject(space, w_iterable) else: - itemiterator = space.iter(W_SetIterObject(rdict_w)) - obj = space.call_function(space.type(w_self),itemiterator) + itemiterator = space.iter(W_SetIterObject(w_iterable)) + obj = space.call_function(space.type(w_self), itemiterator) return obj _lifeline_ = None @@ -283,11 +282,9 @@ return True def difference(self, w_set, w_other): - result = w_set._newobj(self.space, newset(self.space)) + result = w_set._newobj(self.space, None) if not isinstance(w_other, W_BaseSetObject): - w_temp = w_set._newobj(self.space, newset(self.space)) - set_strategy_and_setdata(self.space, w_temp, w_other) - w_other = w_temp + w_other = w_set._newobj(self.space, w_other) # lookup is faster when w_other is set for w_key in w_set.getkeys(): if not w_other.has_key(w_key): @@ -306,7 +303,7 @@ def symmetric_difference(self, w_set, w_other): #XXX no wrapping when strategies are equal - result = w_set._newobj(self.space, newset(self.space)) + result = w_set._newobj(self.space, None) for w_key in w_set.getkeys(): if not w_other.has_key(w_key): result.add(w_key) @@ -333,7 +330,7 @@ if w_set.length() > w_other.length(): return w_other.intersect(w_set) - result = w_set._newobj(self.space, newset(self.space)) + result = w_set._newobj(self.space, None) items = self.cast_from_void_star(w_set.sstorage).keys() #XXX do it without wrapping when strategies are equal for key in items: @@ -367,7 +364,7 @@ result = result.intersect(w_other) else: #XXX directly give w_other as argument to result2 - result2 = w_set._newobj(self.space, newset(self.space)) + result2 = w_set._newobj(self.space, None) for w_key in self.space.listview(w_other): if result.has_key(w_key): result2.add(w_key) @@ -660,8 +657,7 @@ def eq__Set_settypedef(space, w_left, w_other): # tested in test_buildinshortcut.py #XXX do not make new setobject here - w_other_as_set = w_left._newobj(space, newset(space)) - set_strategy_and_setdata(space, w_other_as_set, w_other) + w_other_as_set = w_left._newobj(space, w_other) return space.wrap(w_left.equals(w_other)) eq__Set_frozensettypedef = eq__Set_settypedef @@ -724,10 +720,7 @@ if space.is_w(w_left, w_other): return space.w_True - # this is faster when w_other is a set - w_other_as_set = w_left._newobj(space, newset(space)) - set_strategy_and_setdata(space, w_other_as_set, w_other) - + w_other_as_set = w_left._newobj(space, w_other) return space.wrap(w_other_as_set.issuperset(w_left)) frozenset_issubset__Frozenset_ANY = set_issubset__Set_ANY @@ -892,8 +885,7 @@ def set_symmetric_difference__Set_ANY(space, w_left, w_other): #XXX since we need to iterate over both objects, create set # from w_other so looking up items is fast - w_other_as_set = w_left._newobj(space, newset(space)) - set_strategy_and_setdata(space, w_other_as_set, w_other) + w_other_as_set = w_left._newobj(space, w_other) w_result = w_left.symmetric_difference(w_other_as_set) return w_result @@ -908,9 +900,7 @@ set_symmetric_difference_update__Set_Set def set_symmetric_difference_update__Set_ANY(space, w_left, w_other): - #XXX deal with iterables withouth turning them into sets - w_other_as_set = w_left._newobj(space, newset(space)) - set_strategy_and_setdata(space, w_other_as_set, w_other) + w_other_as_set = w_left._newobj(space, w_other) w_left.symmetric_difference_update(w_other_as_set) def inplace_xor__Set_Set(space, w_left, w_other): diff --git a/pypy/objspace/std/settype.py b/pypy/objspace/std/settype.py --- a/pypy/objspace/std/settype.py +++ b/pypy/objspace/std/settype.py @@ -68,7 +68,7 @@ def descr__new__(space, w_settype, __args__): from pypy.objspace.std.setobject import W_SetObject, newset w_obj = space.allocate_instance(W_SetObject, w_settype) - W_SetObject.__init__(w_obj, space, newset(space)) + W_SetObject.__init__(w_obj, space) return w_obj set_typedef = StdTypeDef("set", diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -36,12 +36,11 @@ self.false = self.space.w_False def test_and(self): - s = W_SetObject(self.space, newset(self.space)) + s = W_SetObject(self.space) _initialize_set(self.space, s, self.word) - t0 = W_SetObject(self.space, newset(self.space)) + t0 = W_SetObject(self.space) _initialize_set(self.space, t0, self.otherword) - t1 = W_FrozensetObject(self.space, - make_setdata_from_w_iterable(self.space, self.otherword)) + t1 = W_FrozensetObject(self.space, self.otherword) r0 = and__Set_Set(self.space, s, t0) r1 = and__Set_Set(self.space, s, t1) assert eq__Set_Set(self.space, r0, r1) == self.true @@ -49,9 +48,9 @@ assert eq__Set_Set(self.space, r0, sr) == self.true def test_compare(self): - s = W_SetObject(self.space, newset(self.space)) + s = W_SetObject(self.space) _initialize_set(self.space, s, self.word) - t = W_SetObject(self.space, newset(self.space)) + t = W_SetObject(self.space) _initialize_set(self.space, t, self.word) assert self.space.eq_w(s,t) u = self.space.wrap(set('simsalabim')) From noreply at buildbot.pypy.org Thu Nov 10 13:49:37 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:37 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added EmptySetStrategy + tests Message-ID: <20111110124937.F2C998292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49154:e1a4e3e28455 Date: 2011-05-13 14:25 +0200 http://bitbucket.org/pypy/pypy/changeset/e1a4e3e28455/ Log: added EmptySetStrategy + tests diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -120,6 +120,9 @@ def getdict_w(self): return self.strategy.getdict_w(self) + def get_storage_copy(self): + return self.strategy.get_storage_copy(self) + def getkeys(self): return self.strategy.getkeys(self) @@ -183,6 +186,96 @@ def length(self, w_set): raise NotImplementedError + +class EmptySetStrategy(SetStrategy): + + cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("empty") + cast_to_void_star = staticmethod(cast_to_void_star) + cast_from_void_star = staticmethod(cast_from_void_star) + + def get_empty_storage(self): + return self.cast_to_void_star(None) + + def is_correct_type(self, w_key): + return False + + def length(self, w_set): + return 0 + + def clear(self, w_set): + pass + + def copy(self, w_set): + strategy = w_set.strategy + storage = self.cast_to_void_star(None) + clone = w_set.from_storage_and_strategy(storage, strategy) + return clone + + def add(self, w_set, w_key): + #XXX switch to correct strategy later + w_set.switch_to_object_strategy(self.space) + w_set.add(w_key) + + def delitem(self, w_set, w_item): + raise KeyError + + def discard(self, w_set, w_item): + return False + + def getdict_w(self, w_set): + return newset(self.space) + + def get_storage_copy(self, w_set): + return w_set.sstorage + + def getkeys(self, w_set): + return [] + + def has_key(self, w_set, w_key): + return False + + def equals(self, w_set, w_other): + if w_other.strategy is self.space.fromcache(EmptySetStrategy): + return True + return False + + def difference(self, w_set, w_other): + return w_set.copy() + + def difference_update(self, w_set, w_other): + pass + + def intersect(self, w_set, w_other): + return w_set.copy() + + def intersect_update(self, w_set, w_other): + return w_set.copy() + + def intersect_multiple(self, w_set, w_other): + return w_set.copy() + + def intersect_multiple_update(self, w_set, w_other): + pass + + def isdisjoint(self, w_set, w_other): + return True + + def issuperset(self, w_set, w_other): + if self.space.unwrap(self.space.len(w_other)) == 0: + return True + return False + + def symmetric_difference(self, w_set, w_other): + return w_other.copy() + + def symmetric_difference_update(self, w_set, w_other): + w_set.strategy = w_other.strategy + w_set.sstorage = w_other.get_storage_copy() + + def update(self, w_set, w_other): + w_set.switch_to_object_strategy(self.space) + w_set.update(w_other) + class AbstractUnwrappedSetStrategy(object): __mixin__ = True @@ -263,6 +356,11 @@ result[self.wrap(key)] = None return result + def get_storage_copy(self, w_set): + d = self.cast_from_void_star(w_set.sstorage) + copy = self.cast_to_void_star(d.copy()) + return copy + def getkeys(self, w_set): keys = self.cast_from_void_star(w_set.sstorage).keys() keys_w = [self.wrap(key) for key in keys] @@ -282,6 +380,7 @@ return True def difference(self, w_set, w_other): + #XXX return clone if other is Empty result = w_set._newobj(self.space, None) if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) @@ -292,6 +391,7 @@ return result def difference_update(self, w_set, w_other): + #XXX do nothing if other is empty if w_set is w_other: w_set.clear() # for the case 'a.difference_update(a)' else: @@ -378,6 +478,7 @@ w_set.sstorage = result.sstorage def issuperset(self, w_set, w_other): + #XXX other is empty is always True if w_set.length() < self.space.unwrap(self.space.len(w_other)): return False for w_key in self.space.unpackiterable(w_other): @@ -386,6 +487,7 @@ return True def isdisjoint(self, w_set, w_other): + #XXX always True if other is empty if w_set.length() > w_other.length(): return w_other.isdisjoint(w_set) @@ -501,19 +603,25 @@ def set_strategy_and_setdata(space, w_set, w_iterable): from pypy.objspace.std.intobject import W_IntObject - if w_iterable is None: - w_set.strategy = space.fromcache(ObjectSetStrategy) #XXX EmptySetStrategy - w_set.sstorage = w_set.strategy.get_empty_storage() + if w_iterable is None : + w_set.strategy = space.fromcache(EmptySetStrategy) + w_set.sstorage = w_set.strategy.cast_to_void_star(None)#w_set.strategy.get_empty_storage() return if isinstance(w_iterable, W_BaseSetObject): w_set.strategy = w_iterable.strategy + #XXX need to make copy here w_set.sstorage = w_iterable.sstorage return if not isinstance(w_iterable, list): w_iterable = space.listview(w_iterable) + if len(w_iterable) == 0: + w_set.strategy = space.fromcache(EmptySetStrategy) + w_set.sstorage = w_set.strategy.cast_to_void_star(None) + return + # check for integers for item_w in w_iterable: if type(item_w) is not W_IntObject: @@ -844,6 +952,7 @@ return def inplace_and__Set_Set(space, w_left, w_other): + #XXX why do we need to return here? return w_left.intersect_update(w_other) inplace_and__Set_Frozenset = inplace_and__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -458,3 +458,63 @@ s.difference_update(s) assert s == set([]) + def test_empty_empty(self): + assert set() == set([]) + + def test_empty_difference(self): + e = set() + x = set([1,2,3]) + assert e.difference(x) == set() + assert x.difference(e) == x + + e.difference_update(x) + assert e == set() + x.difference_update(e) + assert x == set([1,2,3]) + + assert e.symmetric_difference(x) == x + assert x.symmetric_difference(e) == x + + e.symmetric_difference_update(e) + assert e == e + e.symmetric_difference_update(x) + assert e == x + + x.symmetric_difference_update(set()) + assert x == set([1,2,3]) + + def test_empty_intersect(self): + e = set() + x = set([1,2,3]) + assert e.intersection(x) == e + assert x.intersection(e) == e + assert e & x == e + assert x & e == e + + e.intersection_update(x) + assert e == set() + e &= x + assert e == set() + x.intersection_update(e) + assert x == set() + + def test_empty_issuper(self): + e = set() + x = set([1,2,3]) + assert e.issuperset(e) == True + assert e.issuperset(x) == False + assert x.issuperset(e) == True + + def test_empty_issubset(self): + e = set() + x = set([1,2,3]) + assert e.issubset(e) == True + assert e.issubset(x) == True + assert x.issubset(e) == False + + def test_empty_isdisjoint(self): + e = set() + x = set([1,2,3]) + assert e.isdisjoint(e) == True + assert e.isdisjoint(x) == True + assert x.isdisjoint(e) == True From noreply at buildbot.pypy.org Thu Nov 10 13:49:39 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:39 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed bug in issuperset, more tests, some optimization Message-ID: <20111110124939.2DDEE8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49155:14b4c0d3850a Date: 2011-05-13 15:42 +0200 http://bitbucket.org/pypy/pypy/changeset/14b4c0d3850a/ Log: fixed bug in issuperset, more tests, some optimization diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -478,12 +478,15 @@ w_set.sstorage = result.sstorage def issuperset(self, w_set, w_other): - #XXX other is empty is always True - if w_set.length() < self.space.unwrap(self.space.len(w_other)): - return False - for w_key in self.space.unpackiterable(w_other): - if not w_set.has_key(w_key): - return False + #XXX always True if other is empty + w_iter = self.space.iter(w_other) + while True: + try: + w_item = self.space.next(w_iter) + if not w_set.has_key(w_item): + return False + except OperationError: + return True return True def isdisjoint(self, w_set, w_other): @@ -818,6 +821,8 @@ # optimization only (the general case works too) if space.is_w(w_left, w_other): return space.w_True + if w_left.length() > w_other.length(): + return space.w_False return space.wrap(w_other.issuperset(w_left)) set_issubset__Set_Frozenset = set_issubset__Set_Set @@ -829,6 +834,9 @@ return space.w_True w_other_as_set = w_left._newobj(space, w_other) + + if w_left.length() > w_other_as_set.length(): + return space.w_False return space.wrap(w_other_as_set.issuperset(w_left)) frozenset_issubset__Frozenset_ANY = set_issubset__Set_ANY @@ -842,6 +850,8 @@ # optimization only (the general case works too) if space.is_w(w_left, w_other): return space.w_True + if w_left.length() < w_other.length(): + return space.w_False return space.wrap(w_left.issuperset(w_other)) set_issuperset__Set_Frozenset = set_issuperset__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -115,6 +115,10 @@ c = [2,3] assert a.issuperset(c) + c = [1,1,1,1,1] + assert a.issuperset(c) + assert set([1,1,1,1,1]).issubset(a) + def test_inplace_and(test): a = set([1,2,3,4]) b = set([0,2,3,5,6]) @@ -518,3 +522,10 @@ assert e.isdisjoint(e) == True assert e.isdisjoint(x) == True assert x.isdisjoint(e) == True + + + def test_super_with_generator(self): + def foo(): + for i in [1,2,3]: + yield i + set([1,2,3,4,5]).issuperset(foo()) From noreply at buildbot.pypy.org Thu Nov 10 13:49:40 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:40 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed EmptySetStrategy.issuperset Message-ID: <20111110124940.5C3558292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49156:64ffc4b0905b Date: 2011-05-13 15:51 +0200 http://bitbucket.org/pypy/pypy/changeset/64ffc4b0905b/ Log: fixed EmptySetStrategy.issuperset diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -261,7 +261,9 @@ return True def issuperset(self, w_set, w_other): - if self.space.unwrap(self.space.len(w_other)) == 0: + if isinstance(w_other, W_BaseSetObject) and w_other.strategy is EmptySetStrategy: + return True + elif len(self.space.unpackiterable(w_other)) == 0: return True return False diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -509,12 +509,16 @@ assert e.issuperset(x) == False assert x.issuperset(e) == True + assert e.issuperset(set()) + assert e.issuperset([]) + def test_empty_issubset(self): e = set() x = set([1,2,3]) assert e.issubset(e) == True assert e.issubset(x) == True assert x.issubset(e) == False + assert e.issubset([]) def test_empty_isdisjoint(self): e = set() From noreply at buildbot.pypy.org Thu Nov 10 13:49:41 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:41 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: some more optimization Message-ID: <20111110124941.8B8F18292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49157:1ca516864d70 Date: 2011-05-13 17:29 +0200 http://bitbucket.org/pypy/pypy/changeset/1ca516864d70/ Log: some more optimization diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -393,15 +393,23 @@ return result def difference_update(self, w_set, w_other): - #XXX do nothing if other is empty + if w_other.strategy is EmptySetStrategy: + return if w_set is w_other: w_set.clear() # for the case 'a.difference_update(a)' else: - for w_key in w_other.getkeys(): + w_iter = self.space.iter(w_other) + while True: try: - self.delitem(w_set, w_key) - except KeyError: - pass + w_item = self.space.next(w_iter) + try: + self.delitem(w_set, w_item) + except KeyError: + pass + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise + return def symmetric_difference(self, w_set, w_other): #XXX no wrapping when strategies are equal @@ -487,12 +495,15 @@ w_item = self.space.next(w_iter) if not w_set.has_key(w_item): return False - except OperationError: + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise return True return True def isdisjoint(self, w_set, w_other): - #XXX always True if other is empty + if w_other.length() == 0: + return True if w_set.length() > w_other.length(): return w_other.isdisjoint(w_set) From noreply at buildbot.pypy.org Thu Nov 10 13:49:42 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:42 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added different method for symmetric_difference_update when strategies match Message-ID: <20111110124942.BA99A8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49158:e4d6683b7917 Date: 2011-05-17 13:39 +0200 http://bitbucket.org/pypy/pypy/changeset/e4d6683b7917/ Log: added different method for symmetric_difference_update when strategies match diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -422,7 +422,23 @@ result.add(w_key) return result + def symmetric_difference_update_match(self, w_set, w_other): + d_new = self.get_empty_dict() + d_this = self.cast_from_void_star(w_set.sstorage) + d_other = self.cast_from_void_star(w_other.sstorage) + for key in d_other.keys(): + if not key in d_this: + d_new[key] = None + for key in d_this.keys(): + if not key in d_other: + d_new[key] = None + + w_set.sstorage = self.cast_to_void_star(d_new) + def symmetric_difference_update(self, w_set, w_other): + if w_set.strategy is w_other.strategy: + self.symmetric_difference_update_match(w_set, w_other) + return #XXX no wrapping when strategies are equal newsetdata = newset(self.space) for w_key in w_set.getkeys(): From noreply at buildbot.pypy.org Thu Nov 10 13:49:44 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:44 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed bug in determination of strategy Message-ID: <20111110124944.0F2B38292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49159:39f1615703a2 Date: 2011-05-17 13:40 +0200 http://bitbucket.org/pypy/pypy/changeset/39f1615703a2/ Log: fixed bug in determination of strategy diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -658,7 +658,7 @@ for item_w in w_iterable: if type(item_w) is not W_IntObject: break; - if item_w is w_iterable[:-1]: + if item_w is w_iterable[-1]: w_set.strategy = space.fromcache(IntegerSetStrategy) w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) return From noreply at buildbot.pypy.org Thu Nov 10 13:49:45 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:45 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added tests for setstrategies Message-ID: <20111110124945.5677E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49160:a69405cd53ae Date: 2011-05-17 13:41 +0200 http://bitbucket.org/pypy/pypy/changeset/a69405cd53ae/ Log: added tests for setstrategies diff --git a/pypy/objspace/std/test/test_setstrategies.py b/pypy/objspace/std/test/test_setstrategies.py new file mode 100644 --- /dev/null +++ b/pypy/objspace/std/test/test_setstrategies.py @@ -0,0 +1,42 @@ +from pypy.objspace.std.setobject import W_SetObject +from pypy.objspace.std.setobject import IntegerSetStrategy, ObjectSetStrategy, EmptySetStrategy + +class TestW_SetStrategies: + + def wrapped(self, l): + return [self.space.wrap(x) for x in l] + + def test_from_list(self): + s = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + assert s.strategy is self.space.fromcache(IntegerSetStrategy) + + s = W_SetObject(self.space, self.wrapped([1,"two",3,"four",5])) + assert s.strategy is self.space.fromcache(ObjectSetStrategy) + + s = W_SetObject(self.space) + assert s.strategy is self.space.fromcache(EmptySetStrategy) + + s = W_SetObject(self.space, self.wrapped([])) + assert s.strategy is self.space.fromcache(EmptySetStrategy) + + def test_switch_to_object(self): + s = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s.add(self.space.wrap("six")) + assert s.strategy is self.space.fromcache(ObjectSetStrategy) + + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s2 = W_SetObject(self.space, self.wrapped(["six", "seven"])) + s1.symmetric_difference_update(s2) + assert s1.strategy is self.space.fromcache(ObjectSetStrategy) + + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s2 = W_SetObject(self.space, self.wrapped(["six", "seven"])) + s1.update(s2) + assert s1.strategy is self.space.fromcache(ObjectSetStrategy) + + def test_intersection(self): + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s2 = W_SetObject(self.space, self.wrapped([4,5, "six", "seven"])) + s3 = s1.intersect(s2) + assert s3.strategy is self.space.fromcache(IntegerSetStrategy) + From noreply at buildbot.pypy.org Thu Nov 10 13:49:46 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:46 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: EmptySet.add() switches to correct strategy now Message-ID: <20111110124946.ABF8E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49161:28e84214560e Date: 2011-05-17 13:46 +0200 http://bitbucket.org/pypy/pypy/changeset/28e84214560e/ Log: EmptySet.add() switches to correct strategy now diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -212,8 +212,13 @@ return clone def add(self, w_set, w_key): - #XXX switch to correct strategy later - w_set.switch_to_object_strategy(self.space) + from pypy.objspace.std.intobject import W_IntObject + if type(w_key) is W_IntObject: + w_set.strategy = self.space.fromcache(IntegerSetStrategy) + else: + w_set.strategy = self.space.fromcache(ObjectSetStrategy) + + w_set.sstorage = w_set.strategy.get_empty_storage() w_set.add(w_key) def delitem(self, w_set, w_item): @@ -551,6 +556,9 @@ cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) + def get_empty_storage(self): + return self.cast_to_void_star({}) + def get_empty_dict(self): return {} From noreply at buildbot.pypy.org Thu Nov 10 13:49:47 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:47 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: switch back to empty strategy on remove and clear Message-ID: <20111110124947.E438D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49162:715728d2fe02 Date: 2011-05-17 14:14 +0200 http://bitbucket.org/pypy/pypy/changeset/715728d2fe02/ Log: switch back to empty strategy on remove and clear diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -97,6 +97,10 @@ self.strategy = space.fromcache(ObjectSetStrategy) self.sstorage = self.strategy.cast_to_void_star(d) + def switch_to_empty_strategy(self): + self.strategy = self.space.fromcache(EmptySetStrategy) + self.sstorage = self.strategy.get_empty_storage() + # _____________ strategy methods ________________ def clear(self): @@ -305,7 +309,7 @@ return len(self.cast_from_void_star(w_set.sstorage)) def clear(self, w_set): - self.cast_from_void_star(w_set.sstorage).clear() + w_set.switch_to_empty_strategy() def copy(self, w_set): #XXX do not copy FrozenDict @@ -937,6 +941,8 @@ Returns True if successfully removed. """ x = w_left.discard(w_item) + if w_left.length() == 0: + w_left.switch_to_empty_strategy() return x def set_discard__Set_ANY(space, w_left, w_item): diff --git a/pypy/objspace/std/test/test_setstrategies.py b/pypy/objspace/std/test/test_setstrategies.py --- a/pypy/objspace/std/test/test_setstrategies.py +++ b/pypy/objspace/std/test/test_setstrategies.py @@ -26,12 +26,13 @@ s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) s2 = W_SetObject(self.space, self.wrapped(["six", "seven"])) - s1.symmetric_difference_update(s2) + s1.update(s2) assert s1.strategy is self.space.fromcache(ObjectSetStrategy) + def test_symmetric_difference(self): s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) s2 = W_SetObject(self.space, self.wrapped(["six", "seven"])) - s1.update(s2) + s1.symmetric_difference_update(s2) assert s1.strategy is self.space.fromcache(ObjectSetStrategy) def test_intersection(self): @@ -40,3 +41,14 @@ s3 = s1.intersect(s2) assert s3.strategy is self.space.fromcache(IntegerSetStrategy) + def test_clear(self): + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s1.clear() + assert s1.strategy is self.space.fromcache(EmptySetStrategy) + + def test_remove(self): + from pypy.objspace.std.setobject import set_remove__Set_ANY + s1 = W_SetObject(self.space, self.wrapped([1])) + set_remove__Set_ANY(self.space, s1, self.space.wrap(1)) + assert s1.strategy is self.space.fromcache(EmptySetStrategy) + From noreply at buildbot.pypy.org Thu Nov 10 13:49:49 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:49 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added intelligent way to treat the different strategies in W_SetObject.difference Message-ID: <20111110124949.1D4068292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49163:a9c59d68f3ac Date: 2011-05-18 15:42 +0200 http://bitbucket.org/pypy/pypy/changeset/a9c59d68f3ac/ Log: added intelligent way to treat the different strategies in W_SetObject.difference diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -391,34 +391,50 @@ return True def difference(self, w_set, w_other): - #XXX return clone if other is Empty - result = w_set._newobj(self.space, None) if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) - # lookup is faster when w_other is set - for w_key in w_set.getkeys(): - if not w_other.has_key(w_key): - result.add(w_key) + + if w_other.strategy is self.space.fromcache(ObjectSetStrategy): + return self.difference_wrapped(w_set, w_other) + + if w_set.strategy is not w_other.strategy: + return w_set.copy() + + return self.difference_unwrapped(w_set, w_other) + + def difference_wrapped(self, w_set, w_other): + result = w_set._newobj(self.space, None) + w_iter = self.space.iter(w_set) + while True: + try: + w_item = self.space.next(w_iter) + if not w_other.has_key(w_key): + result.add(w_key) + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise + return + return result + + def difference_unwrapped(self, w_set, w_other): + if not isinstance(w_other, W_BaseSetObject): + w_other = w_set._newobj(self.space, w_other) + iterator = self.cast_from_void_star(w_set.sstorage).iterkeys() + other_dict = self.cast_from_void_star(w_other.sstorage) + result_dict = self.get_empty_dict() + for key in iterator: + if key not in other_dict: + result_dict[key] = None + result = w_set._newobj(self.space, None) + result.strategy = self + result.sstorage = self.cast_to_void_star(result_dict) return result def difference_update(self, w_set, w_other): - if w_other.strategy is EmptySetStrategy: - return - if w_set is w_other: - w_set.clear() # for the case 'a.difference_update(a)' - else: - w_iter = self.space.iter(w_other) - while True: - try: - w_item = self.space.next(w_iter) - try: - self.delitem(w_set, w_item) - except KeyError: - pass - except OperationError, e: - if not e.match(self.space, self.space.w_StopIteration): - raise - return + #XXX this way we unnecessarily create a new set + result = self.difference(w_set, w_other) + w_set.strategy = result.strategy + w_set.sstorage = result.sstorage def symmetric_difference(self, w_set, w_other): #XXX no wrapping when strategies are equal From noreply at buildbot.pypy.org Thu Nov 10 13:49:50 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:50 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed ne__Set_settypedef Message-ID: <20111110124950.4FC7E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49164:7cfd17778080 Date: 2011-05-18 16:54 +0200 http://bitbucket.org/pypy/pypy/changeset/7cfd17778080/ Log: fixed ne__Set_settypedef diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -848,8 +848,8 @@ def ne__Set_settypedef(space, w_left, w_other): #XXX this is not tested - rd = make_setdata_from_w_iterable(space, w_other) - return space.wrap(_is_eq(w_left.setdata, rd)) + w_other_as_set = w_left._newobj(space, w_other) + return space.wrap(w_left.equals(w_other)) ne__Set_frozensettypedef = ne__Set_settypedef ne__Frozenset_settypedef = ne__Set_settypedef From noreply at buildbot.pypy.org Thu Nov 10 13:49:51 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:51 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed _mixin_ Message-ID: <20111110124951.869078292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49165:190fda089ccf Date: 2011-05-18 17:15 +0200 http://bitbucket.org/pypy/pypy/changeset/190fda089ccf/ Log: fixed _mixin_ diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -288,7 +288,7 @@ w_set.update(w_other) class AbstractUnwrappedSetStrategy(object): - __mixin__ = True + _mixin_ = True def get_empty_storage(self): raise NotImplementedError From noreply at buildbot.pypy.org Thu Nov 10 13:49:52 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:52 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: another way of creating a frozen set Message-ID: <20111110124952.BC37D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49166:35fb3d7fec2a Date: 2011-05-18 17:28 +0200 http://bitbucket.org/pypy/pypy/changeset/35fb3d7fec2a/ Log: another way of creating a frozen set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -713,7 +713,7 @@ def _convert_set_to_frozenset(space, w_obj): #XXX can be optimized if space.is_true(space.isinstance(w_obj, space.w_set)): - w_frozen = instantiate(W_FrozensetObject) + w_frozen = W_FrozensetObject(space, None) w_frozen.strategy = w_obj.strategy w_frozen.sstorage = w_obj.sstorage return w_frozen From noreply at buildbot.pypy.org Thu Nov 10 13:49:53 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:53 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: be sure that w_obj is setobject Message-ID: <20111110124953.EF6E48292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49167:3f70c38813f3 Date: 2011-05-18 17:43 +0200 http://bitbucket.org/pypy/pypy/changeset/3f70c38813f3/ Log: be sure that w_obj is setobject diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -713,6 +713,8 @@ def _convert_set_to_frozenset(space, w_obj): #XXX can be optimized if space.is_true(space.isinstance(w_obj, space.w_set)): + assert isinstance(w_obj, W_SetObject) + #XXX better instantiate? w_frozen = W_FrozensetObject(space, None) w_frozen.strategy = w_obj.strategy w_frozen.sstorage = w_obj.sstorage From noreply at buildbot.pypy.org Thu Nov 10 13:49:55 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:55 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test for user generated subclass of setobject Message-ID: <20111110124955.2BCB98292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49168:d23ca90396d5 Date: 2011-05-18 18:24 +0200 http://bitbucket.org/pypy/pypy/changeset/d23ca90396d5/ Log: added test for user generated subclass of setobject diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -65,8 +65,7 @@ elif objtype is W_FrozensetObject: obj = instantiate(W_FrozensetObject) else: - itemiterator = w_self.space.iter(W_SetIterObject(newset(w_self.space))) - obj = w_self.space.call_function(w_self.space.type(w_self),itemiterator) + obj = w_self.space.call_function(w_self.space.type(w_self), None) obj.space = w_self.space obj.strategy = strategy obj.sstorage = storage @@ -81,8 +80,7 @@ elif objtype is W_FrozensetObject: obj = W_FrozensetObject(space, w_iterable) else: - itemiterator = space.iter(W_SetIterObject(w_iterable)) - obj = space.call_function(space.type(w_self), itemiterator) + obj = space.call_function(space.type(w_self), w_iterable) return obj _lifeline_ = None diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -75,6 +75,16 @@ a = set(x for x in [1,2,3]) assert a == set([1,2,3]) + def test_generator2(self): + def foo(): + for i in [1,2,3]: + yield i + class A(set): + pass + a = A([1,2,3,4,5]) + b = a.difference(foo()) + assert b == set([4,5]) + def test_or(self): a = set([0,1,2]) b = a | set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:49:56 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:56 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: make sure the annotator sees this as set or a subclass of set Message-ID: <20111110124956.5C0888292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49169:17c8862614d7 Date: 2011-05-20 15:10 +0200 http://bitbucket.org/pypy/pypy/changeset/17c8862614d7/ Log: make sure the annotator sees this as set or a subclass of set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -81,6 +81,7 @@ obj = W_FrozensetObject(space, w_iterable) else: obj = space.call_function(space.type(w_self), w_iterable) + assert isinstance(obj, W_BaseSetObject) return obj _lifeline_ = None From noreply at buildbot.pypy.org Thu Nov 10 13:49:57 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:57 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: space not necessary here? Message-ID: <20111110124957.8A75F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49170:d711192077c7 Date: 2011-05-20 15:18 +0200 http://bitbucket.org/pypy/pypy/changeset/d711192077c7/ Log: space not necessary here? diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -66,7 +66,6 @@ obj = instantiate(W_FrozensetObject) else: obj = w_self.space.call_function(w_self.space.type(w_self), None) - obj.space = w_self.space obj.strategy = strategy obj.sstorage = storage return obj From noreply at buildbot.pypy.org Thu Nov 10 13:49:58 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:58 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: this is the same but hopefully it will satisfy the annotator Message-ID: <20111110124958.B6BE08292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49171:10a1be5db44b Date: 2011-05-20 15:39 +0200 http://bitbucket.org/pypy/pypy/changeset/10a1be5db44b/ Log: this is the same but hopefully it will satisfy the annotator diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -663,7 +663,7 @@ if w_iterable is None : w_set.strategy = space.fromcache(EmptySetStrategy) - w_set.sstorage = w_set.strategy.cast_to_void_star(None)#w_set.strategy.get_empty_storage() + w_set.sstorage = w_set.strategy.get_empty_storage() return if isinstance(w_iterable, W_BaseSetObject): @@ -677,7 +677,7 @@ if len(w_iterable) == 0: w_set.strategy = space.fromcache(EmptySetStrategy) - w_set.sstorage = w_set.strategy.cast_to_void_star(None) + w_set.sstorage = w_set.strategy.get_empty_storage() return # check for integers From noreply at buildbot.pypy.org Thu Nov 10 13:49:59 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:49:59 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed bug in difference method for objectsets and added tests Message-ID: <20111110124959.E3D868292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49172:5cba5090dcca Date: 2011-05-20 16:02 +0200 http://bitbucket.org/pypy/pypy/changeset/5cba5090dcca/ Log: fixed bug in difference method for objectsets and added tests diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -392,7 +392,8 @@ if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) - if w_other.strategy is self.space.fromcache(ObjectSetStrategy): + if (w_other.strategy is self.space.fromcache(ObjectSetStrategy) or + w_set.strategy is self.space.fromcache(ObjectSetStrategy)): return self.difference_wrapped(w_set, w_other) if w_set.strategy is not w_other.strategy: @@ -406,12 +407,12 @@ while True: try: w_item = self.space.next(w_iter) - if not w_other.has_key(w_key): - result.add(w_key) + if not w_other.has_key(w_item): + result.add(w_item) except OperationError, e: if not e.match(self.space, self.space.w_StopIteration): raise - return + break; return result def difference_unwrapped(self, w_set, w_other): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -451,6 +451,8 @@ s = set([1,2,3]) assert s.difference() == s assert s.difference() is not s + assert set([1,2,3]).difference(set([2,3,4,'5'])) == set([1]) + assert set([1,2,3,'5']).difference(set([2,3,4])) == set([1,'5']) def test_intersection_update(self): s = set([1,2,3,4,7]) From noreply at buildbot.pypy.org Thu Nov 10 13:50:01 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:01 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: tell annotator that this obj must be a set Message-ID: <20111110125001.1C1358292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49173:a0feb9250ca0 Date: 2011-05-20 16:09 +0200 http://bitbucket.org/pypy/pypy/changeset/a0feb9250ca0/ Log: tell annotator that this obj must be a set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -66,6 +66,7 @@ obj = instantiate(W_FrozensetObject) else: obj = w_self.space.call_function(w_self.space.type(w_self), None) + assert isinstance(obj, W_BaseSetObject) obj.strategy = strategy obj.sstorage = storage return obj From noreply at buildbot.pypy.org Thu Nov 10 13:50:02 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:02 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: satisfying the annotator Message-ID: <20111110125002.46FD98292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49174:5deedc46a92c Date: 2011-05-20 16:22 +0200 http://bitbucket.org/pypy/pypy/changeset/5deedc46a92c/ Log: satisfying the annotator diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -93,8 +93,8 @@ def switch_to_object_strategy(self, space): d = self.strategy.getdict_w(self) - self.strategy = space.fromcache(ObjectSetStrategy) - self.sstorage = self.strategy.cast_to_void_star(d) + self.strategy = strategy = space.fromcache(ObjectSetStrategy) + self.sstorage = strategy.cast_to_void_star(d) def switch_to_empty_strategy(self): self.strategy = self.space.fromcache(EmptySetStrategy) From noreply at buildbot.pypy.org Thu Nov 10 13:50:03 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:03 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: was not rpython Message-ID: <20111110125003.74D138292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49175:ca26985e470d Date: 2011-05-24 11:17 +0200 http://bitbucket.org/pypy/pypy/changeset/ca26985e470d/ Log: was not rpython diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -292,12 +292,6 @@ def get_empty_storage(self): raise NotImplementedError - def init_from_setdata_w(self, w_set, setdata_w): - d = self.get_empty_dict() - for item_w in setdata_w.keys(): - d[self.unwrap(item_w)] = None - w_set.sstorage = self.cast_to_void_star(d) - def get_storage_from_list(self, list_w): setdata = self.get_empty_dict() for w_item in list_w: @@ -377,8 +371,11 @@ return keys_w def has_key(self, w_set, w_key): - dict_w = self.cast_from_void_star(w_set.sstorage) - return self.unwrap(w_key) in dict_w + if not self.is_correct_type(w_key): + #XXX switch object strategy, test + return False + d = self.cast_from_void_star(w_set.sstorage) + return self.unwrap(w_key) in d def equals(self, w_set, w_other): if w_set.length() != w_other.length(): @@ -587,7 +584,7 @@ return type(w_key) is W_IntObject def unwrap(self, w_item): - return self.space.unwrap(w_item) + return self.space.int_w(w_item) def wrap(self, item): return self.space.wrap(item) From noreply at buildbot.pypy.org Thu Nov 10 13:50:04 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:04 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added strategy test for union Message-ID: <20111110125004.9F9C08292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49176:88dd201179d2 Date: 2011-05-24 11:51 +0200 http://bitbucket.org/pypy/pypy/changeset/88dd201179d2/ Log: added strategy test for union diff --git a/pypy/objspace/std/test/test_setstrategies.py b/pypy/objspace/std/test/test_setstrategies.py --- a/pypy/objspace/std/test/test_setstrategies.py +++ b/pypy/objspace/std/test/test_setstrategies.py @@ -52,3 +52,12 @@ set_remove__Set_ANY(self.space, s1, self.space.wrap(1)) assert s1.strategy is self.space.fromcache(EmptySetStrategy) + def test_union(self): + from pypy.objspace.std.setobject import set_union__Set + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + s2 = W_SetObject(self.space, self.wrapped([4,5,6,7])) + s3 = W_SetObject(self.space, self.wrapped([4,'5','6',7])) + s4 = set_union__Set(self.space, s1, [s2]) + s5 = set_union__Set(self.space, s1, [s3]) + assert s4.strategy is self.space.fromcache(IntegerSetStrategy) + assert s5.strategy is self.space.fromcache(ObjectSetStrategy) From noreply at buildbot.pypy.org Thu Nov 10 13:50:05 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:05 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fix and tests for fakeints in instrategy Message-ID: <20111110125005.CB7E78292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49177:1e8aabff9f2a Date: 2011-05-24 15:24 +0200 http://bitbucket.org/pypy/pypy/changeset/1e8aabff9f2a/ Log: fix and tests for fakeints in instrategy diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -329,6 +329,7 @@ raise def discard(self, w_set, w_item): + from pypy.objspace.std.dictmultiobject import _is_sane_hash d = self.cast_from_void_star(w_set.sstorage) try: del d[self.unwrap(w_item)] @@ -336,13 +337,28 @@ except KeyError: return False except OperationError, e: + # raise any error except TypeError if not e.match(self.space, self.space.w_TypeError): raise + # if error is TypeError and w_item is not None, Int, String, Bool or Float + # (i.e. FakeObject) switch to object strategy and discard again + if (not _is_sane_hash(self.space, w_item) and + self is not self.space.fromcache(ObjectSetStrategy)): + w_set.switch_to_object_strategy(self.space) + return w_set.discard(w_item) + # else we have two cases: + # - w_item is as set: then we convert it to frozenset and check again + # - type doesn't match (string in intstrategy): then we raise (cause w_f is none) w_f = _convert_set_to_frozenset(self.space, w_item) if w_f is None: raise + + # if w_item is a set and we are not in ObjectSetStrategy we are finished here + if not self.space.fromcache(ObjectSetStrategy): + return False + try: - del d[w_f] + del d[w_f] # XXX nonsense in intstrategy return True except KeyError: return False @@ -595,7 +611,7 @@ cast_from_void_star = staticmethod(cast_from_void_star) def get_empty_storage(self): - return self.cast_to_void_star(newset(self.space)) + return self.cast_to_void_star(self.get_empty_dict()) def get_empty_dict(self): return newset(self.space) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -545,3 +545,36 @@ for i in [1,2,3]: yield i set([1,2,3,4,5]).issuperset(foo()) + + + def test_fakeint_intstrategy(self): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + + def __eq__(self, other): + if other == self.value: + return True + return False + + f1 = FakeInt(4) + assert f1 == 4 + assert hash(f1) == hash(4) + + # test with object strategy + s = set([1, 2, 'three', 'four']) + s.discard(FakeInt(2)) + assert s == set([1, 'three', 'four']) + s.remove(FakeInt(1)) + assert s == set(['three', 'four']) + raises(KeyError, s.remove, FakeInt(16)) + + # test with int strategy + s = set([1,2,3,4]) + s.discard(FakeInt(4)) + assert s == set([1,2,3]) + s.remove(FakeInt(3)) + assert s == set([1,2]) + raises(KeyError, s.remove, FakeInt(16)) From noreply at buildbot.pypy.org Thu Nov 10 13:50:07 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:07 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fix and test for fakeobject in has_key Message-ID: <20111110125007.082448292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49178:7bc2b4077184 Date: 2011-05-24 15:56 +0200 http://bitbucket.org/pypy/pypy/changeset/7bc2b4077184/ Log: fix and test for fakeobject in has_key diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -387,8 +387,11 @@ return keys_w def has_key(self, w_set, w_key): + from pypy.objspace.std.dictmultiobject import _is_sane_hash if not self.is_correct_type(w_key): - #XXX switch object strategy, test + if not _is_sane_hash(self.space, w_key): + w_set.switch_to_object_strategy(self.space) + return w_set.has_key(w_key) return False d = self.cast_from_void_star(w_set.sstorage) return self.unwrap(w_key) in d diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -578,3 +578,20 @@ s.remove(FakeInt(3)) assert s == set([1,2]) raises(KeyError, s.remove, FakeInt(16)) + + + def test_fakeobject_and_has_key(test): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + + def __eq__(self, other): + if other == self.value: + return True + return False + + s = set([1,2,3,4,5]) + assert 5 in s + assert FakeInt(5) in s From noreply at buildbot.pypy.org Thu Nov 10 13:50:08 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:08 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored discard/delitem and wrote some more tests Message-ID: <20111110125008.348698292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49179:0b43c3756798 Date: 2011-05-25 14:56 +0200 http://bitbucket.org/pypy/pypy/changeset/0b43c3756798/ Log: refactored discard/delitem and wrote some more tests diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -225,7 +225,7 @@ w_set.add(w_key) def delitem(self, w_set, w_item): - raise KeyError + return False def discard(self, w_set, w_item): return False @@ -321,53 +321,17 @@ w_set.add(w_key) def delitem(self, w_set, w_item): - # not a normal set operation; only used internally d = self.cast_from_void_star(w_set.sstorage) + if not self.is_correct_type(w_item): + w_set.switch_to_object_strategy(self.space) + return w_set.delitem(w_item) + + key = self.unwrap(w_item) try: - del d[self.unwrap(w_item)] - except KeyError: - raise - - def discard(self, w_set, w_item): - from pypy.objspace.std.dictmultiobject import _is_sane_hash - d = self.cast_from_void_star(w_set.sstorage) - try: - del d[self.unwrap(w_item)] + del d[key] return True except KeyError: return False - except OperationError, e: - # raise any error except TypeError - if not e.match(self.space, self.space.w_TypeError): - raise - # if error is TypeError and w_item is not None, Int, String, Bool or Float - # (i.e. FakeObject) switch to object strategy and discard again - if (not _is_sane_hash(self.space, w_item) and - self is not self.space.fromcache(ObjectSetStrategy)): - w_set.switch_to_object_strategy(self.space) - return w_set.discard(w_item) - # else we have two cases: - # - w_item is as set: then we convert it to frozenset and check again - # - type doesn't match (string in intstrategy): then we raise (cause w_f is none) - w_f = _convert_set_to_frozenset(self.space, w_item) - if w_f is None: - raise - - # if w_item is a set and we are not in ObjectSetStrategy we are finished here - if not self.space.fromcache(ObjectSetStrategy): - return False - - try: - del d[w_f] # XXX nonsense in intstrategy - return True - except KeyError: - return False - except OperationError, e: - #XXX is this ever tested? - assert False - if not e.match(space, space.w_TypeError): - raise - return False def getdict_w(self, w_set): result = newset(self.space) @@ -821,10 +785,7 @@ w_left.difference_update(w_other) else: for w_key in space.listview(w_other): - try: - w_left.delitem(w_key) - except KeyError: - pass + w_left.delitem(w_key) def inplace_sub__Set_Set(space, w_left, w_other): w_left.difference_update(w_other) @@ -974,10 +935,20 @@ frozenset if the argument is a set. Returns True if successfully removed. """ - x = w_left.discard(w_item) + try: + deleted = w_left.delitem(w_item) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + else: + w_f = _convert_set_to_frozenset(space, w_item) + if w_f is None: + raise + deleted = w_left.delitem(w_f) + if w_left.length() == 0: w_left.switch_to_empty_strategy() - return x + return deleted def set_discard__Set_ANY(space, w_left, w_item): _discard_from_set(space, w_left, w_item) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -14,6 +14,7 @@ from pypy.objspace.std.setobject import and__Set_Set from pypy.objspace.std.setobject import set_intersection__Set from pypy.objspace.std.setobject import eq__Set_Set +from pypy.conftest import gettestobjspace letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' @@ -58,6 +59,28 @@ class AppTestAppSetTest: + def setup_class(self): + self.space = gettestobjspace() + w_fakeint = self.space.appexec([], """(): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + + def __eq__(self, other): + if other == self.value: + return True + return False + return FakeInt + """) + self.w_FakeInt = w_fakeint + + def test_fakeint(self): + f1 = self.FakeInt(4) + assert f1 == 4 + assert hash(f1) == hash(4) + def test_simple(self): a = set([1,2,3]) b = set() @@ -546,52 +569,77 @@ yield i set([1,2,3,4,5]).issuperset(foo()) + def test_fakeint_and_equals(self): + s1 = set([1,2,3,4]) + s2 = set([1,2,self.FakeInt(3), 4]) + assert s1 == s2 - def test_fakeint_intstrategy(self): - class FakeInt(object): + def test_fakeint_and_discard(self): + # test with object strategy + s = set([1, 2, 'three', 'four']) + s.discard(self.FakeInt(2)) + assert s == set([1, 'three', 'four']) + + s.remove(self.FakeInt(1)) + assert s == set(['three', 'four']) + raises(KeyError, s.remove, self.FakeInt(16)) + + # test with int strategy + s = set([1,2,3,4]) + s.discard(self.FakeInt(4)) + assert s == set([1,2,3]) + s.remove(self.FakeInt(3)) + assert s == set([1,2]) + raises(KeyError, s.remove, self.FakeInt(16)) + + def test_fakeobject_and_has_key(self): + s = set([1,2,3,4,5]) + assert 5 in s + assert self.FakeInt(5) in s + + def test_fakeobject_and_pop(self): + s = set([1,2,3,self.FakeInt(4), 5]) + assert s.pop() + assert s.pop() + assert s.pop() + assert s.pop() + assert s.pop() + assert s == set([]) + + def test_fakeobject_and_difference(self): + s = set([1,2,'3',4]) + s.difference_update([self.FakeInt(1), self.FakeInt(2)]) + assert s == set(['3',4]) + + s = set([1,2,3,4]) + s.difference_update([self.FakeInt(1), self.FakeInt(2)]) + assert s == set([3,4]) + + def test_frozenset_behavior(self): + s = set([1,2,3,frozenset([4])]) + raises(TypeError, s.difference_update, [1,2,3,set([4])]) + + s = set([1,2,3,frozenset([4])]) + s.discard(set([4])) + assert s == set([1,2,3]) + + def test_discard_unhashable(self): + s = set([1,2,3,4]) + raises(TypeError, s.discard, [1]) + + + def test_discard_evil_compare(self): + class Evil(object): def __init__(self, value): self.value = value def __hash__(self): return hash(self.value) - def __eq__(self, other): + if isinstance(other, frozenset): + raise TypeError if other == self.value: return True return False + s = set([1,2, Evil(frozenset([1]))]) + raises(TypeError, s.discard, set([1])) - f1 = FakeInt(4) - assert f1 == 4 - assert hash(f1) == hash(4) - - # test with object strategy - s = set([1, 2, 'three', 'four']) - s.discard(FakeInt(2)) - assert s == set([1, 'three', 'four']) - s.remove(FakeInt(1)) - assert s == set(['three', 'four']) - raises(KeyError, s.remove, FakeInt(16)) - - # test with int strategy - s = set([1,2,3,4]) - s.discard(FakeInt(4)) - assert s == set([1,2,3]) - s.remove(FakeInt(3)) - assert s == set([1,2]) - raises(KeyError, s.remove, FakeInt(16)) - - - def test_fakeobject_and_has_key(test): - class FakeInt(object): - def __init__(self, value): - self.value = value - def __hash__(self): - return hash(self.value) - - def __eq__(self, other): - if other == self.value: - return True - return False - - s = set([1,2,3,4,5]) - assert 5 in s - assert FakeInt(5) in s From noreply at buildbot.pypy.org Thu Nov 10 13:50:09 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:09 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added _is_sane_hash to delitem; fixed _is_sane_hash in has_key; added strategy tests Message-ID: <20111110125009.64C2E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49180:3904ce218a9c Date: 2011-05-25 15:12 +0200 http://bitbucket.org/pypy/pypy/changeset/3904ce218a9c/ Log: added _is_sane_hash to delitem; fixed _is_sane_hash in has_key; added strategy tests diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -21,6 +21,7 @@ # XXX there are many more types return (space.is_w(w_lookup_type, space.w_NoneType) or + space.is_w(w_lookup_type, space.w_str) or space.is_w(w_lookup_type, space.w_int) or space.is_w(w_lookup_type, space.w_bool) or space.is_w(w_lookup_type, space.w_float) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -321,8 +321,11 @@ w_set.add(w_key) def delitem(self, w_set, w_item): + from pypy.objspace.std.dictmultiobject import _is_sane_hash d = self.cast_from_void_star(w_set.sstorage) if not self.is_correct_type(w_item): + if _is_sane_hash(self.space, self.space.type(w_item)): + return False w_set.switch_to_object_strategy(self.space) return w_set.delitem(w_item) @@ -353,7 +356,7 @@ def has_key(self, w_set, w_key): from pypy.objspace.std.dictmultiobject import _is_sane_hash if not self.is_correct_type(w_key): - if not _is_sane_hash(self.space, w_key): + if not _is_sane_hash(self.space, self.space.type(w_key)): w_set.switch_to_object_strategy(self.space) return w_set.has_key(w_key) return False diff --git a/pypy/objspace/std/test/test_setstrategies.py b/pypy/objspace/std/test/test_setstrategies.py --- a/pypy/objspace/std/test/test_setstrategies.py +++ b/pypy/objspace/std/test/test_setstrategies.py @@ -61,3 +61,43 @@ s5 = set_union__Set(self.space, s1, [s3]) assert s4.strategy is self.space.fromcache(IntegerSetStrategy) assert s5.strategy is self.space.fromcache(ObjectSetStrategy) + + def test_discard(self): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + def __eq__(self, other): + if other == self.value: + return True + return False + + from pypy.objspace.std.setobject import set_discard__Set_ANY + + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + set_discard__Set_ANY(self.space, s1, self.space.wrap("five")) + assert s1.strategy is self.space.fromcache(IntegerSetStrategy) + + set_discard__Set_ANY(self.space, s1, self.space.wrap(FakeInt(5))) + assert s1.strategy is self.space.fromcache(ObjectSetStrategy) + + def test_has_key(self): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + def __eq__(self, other): + if other == self.value: + return True + return False + + from pypy.objspace.std.setobject import set_discard__Set_ANY + + s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) + assert not s1.has_key(self.space.wrap("five")) + assert s1.strategy is self.space.fromcache(IntegerSetStrategy) + + assert s1.has_key(self.space.wrap(FakeInt(2))) + assert s1.strategy is self.space.fromcache(ObjectSetStrategy) From noreply at buildbot.pypy.org Thu Nov 10 13:50:10 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:10 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: Altough the if-part will never be executed in IntegerSetStrategy, the annotator doesn't know what type d is. It could be an int-dict and then d[w_key], where w_key is always a wrapped object because of the getkeys()-method, would degenerate this object to an integer. Message-ID: <20111110125010.91A2B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49181:95966fc24e8c Date: 2011-05-27 11:53 +0200 http://bitbucket.org/pypy/pypy/changeset/95966fc24e8c/ Log: Altough the if-part will never be executed in IntegerSetStrategy, the annotator doesn't know what type d is. It could be an int-dict and then d[w_key], where w_key is always a wrapped object because of the getkeys()-method, would degenerate this object to an integer. diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -539,18 +539,19 @@ return True def update(self, w_set, w_other): - d = self.cast_from_void_star(w_set.sstorage) if w_set.strategy is self.space.fromcache(ObjectSetStrategy): + d_obj = self.cast_from_void_star(w_set.sstorage) other_w = w_other.getkeys() - #XXX better solution!? for w_key in other_w: - d[w_key] = None + d_obj[w_key] = None return elif w_set.strategy is w_other.strategy: + d_int = self.cast_from_void_star(w_set.sstorage) other = self.cast_from_void_star(w_other.sstorage) - d.update(other) + d_int.update(other) return + w_set.switch_to_object_strategy(self.space) w_set.update(w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:50:11 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:11 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: obviuosly d_obj still could be an int-dict Message-ID: <20111110125011.BDE178292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49182:d8f16ee35e9b Date: 2011-05-27 14:27 +0200 http://bitbucket.org/pypy/pypy/changeset/d8f16ee35e9b/ Log: obviuosly d_obj still could be an int-dict diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -543,7 +543,7 @@ d_obj = self.cast_from_void_star(w_set.sstorage) other_w = w_other.getkeys() for w_key in other_w: - d_obj[w_key] = None + d_obj[self.unwrap(w_key)] = None return elif w_set.strategy is w_other.strategy: From noreply at buildbot.pypy.org Thu Nov 10 13:50:12 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:12 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: not needed anymore Message-ID: <20111110125012.EC12B8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49183:0a8c1ba28319 Date: 2011-05-27 14:41 +0200 http://bitbucket.org/pypy/pypy/changeset/0a8c1ba28319/ Log: not needed anymore diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -13,28 +13,6 @@ from pypy.interpreter.generator import GeneratorIterator from pypy.objspace.std.listobject import W_ListObject -def get_strategy_from_w_iterable(space, w_iterable=None): - assert False - from pypy.objspace.std.intobject import W_IntObject - #XXX what types for w_iterable are possible - - if isinstance(w_iterable, W_BaseSetObject): - return w_iterable.strategy - - if w_iterable is None: - #XXX becomes EmptySetStrategy later - return space.fromcache(ObjectSetStrategy) - - if not isinstance(w_iterable, list): - w_iterable = space.listview(w_iterable) - for item_w in w_iterable: - if type(item_w) is not W_IntObject: - break; - if item_w is w_iterable[-1]: - return space.fromcache(IntegerSetStrategy) - - return space.fromcache(ObjectSetStrategy) - class W_BaseSetObject(W_Object): typedef = None From noreply at buildbot.pypy.org Thu Nov 10 13:50:14 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:14 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fix in EmptySetStrategy.issuperset Message-ID: <20111110125014.235D98292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49184:13f5685e273c Date: 2011-05-27 14:51 +0200 http://bitbucket.org/pypy/pypy/changeset/13f5685e273c/ Log: fix in EmptySetStrategy.issuperset diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -247,7 +247,8 @@ return True def issuperset(self, w_set, w_other): - if isinstance(w_other, W_BaseSetObject) and w_other.strategy is EmptySetStrategy: + if (isinstance(w_other, W_BaseSetObject) and + w_other.strategy is self.space.fromcache(EmptySetStrategy)): return True elif len(self.space.unpackiterable(w_other)) == 0: return True From noreply at buildbot.pypy.org Thu Nov 10 13:50:15 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:15 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: implemented new iteratorimplementation (similar to dictmultiobject) Message-ID: <20111110125015.530C68292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49185:6e5ed22d0735 Date: 2011-06-08 11:28 +0200 http://bitbucket.org/pypy/pypy/changeset/6e5ed22d0735/ Log: implemented new iteratorimplementation (similar to dictmultiobject) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -149,6 +149,9 @@ def equals(self, w_other): return self.strategy.equals(self, w_other) + def iter(self): + return self.strategy.iter(self) + class W_SetObject(W_BaseSetObject): from pypy.objspace.std.settype import set_typedef as typedef @@ -265,6 +268,9 @@ w_set.switch_to_object_strategy(self.space) w_set.update(w_other) + def iter(self, w_set): + return EmptyIteratorImplementation(self.space, w_set) + class AbstractUnwrappedSetStrategy(object): _mixin_ = True @@ -555,6 +561,9 @@ def wrap(self, item): return self.space.wrap(item) + def iter(self, w_set): + return IntegerIteratorImplementation(self.space, self, w_set) + class ObjectSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("object") cast_to_void_star = staticmethod(cast_to_void_star) @@ -575,20 +584,79 @@ def wrap(self, item): return item + def iter(self, w_set): + return RDictIteratorImplementation(self.space, self, w_set) + +class IteratorImplementation(object): + def __init__(self, space, implementation): + self.space = space + self.dictimplementation = implementation + self.len = implementation.length() + self.pos = 0 + + def next(self): + if self.dictimplementation is None: + return None, None + if self.len != self.dictimplementation.length(): + self.len = -1 # Make this error state sticky + raise OperationError(self.space.w_RuntimeError, + self.space.wrap("dictionary changed size during iteration")) + # look for the next entry + if self.pos < self.len: + result = self.next_entry() + self.pos += 1 + return result + # no more entries + self.dictimplementation = None + return None, None + + def next_entry(self): + """ Purely abstract method + """ + raise NotImplementedError + + def length(self): + if self.dictimplementation is not None: + return self.len - self.pos + return 0 + +class EmptyIteratorImplementation(IteratorImplementation): + def next(self): + return (None, None) + +class IntegerIteratorImplementation(IteratorImplementation): + #XXX same implementation in dictmultiobject on dictstrategy-branch + def __init__(self, space, strategy, dictimplementation): + IteratorImplementation.__init__(self, space, dictimplementation) + d = strategy.cast_from_void_star(dictimplementation.sstorage) + self.iterator = d.iteritems() + + def next_entry(self): + # note that this 'for' loop only runs once, at most + for w_key, w_value in self.iterator: + return self.space.wrap(w_key), w_value + else: + return None, None + +class RDictIteratorImplementation(IteratorImplementation): + def __init__(self, space, strategy, dictimplementation): + IteratorImplementation.__init__(self, space, dictimplementation) + d = strategy.cast_from_void_star(dictimplementation.sstorage) + self.iterator = d.iteritems() + + def next_entry(self): + # note that this 'for' loop only runs once, at most + for item in self.iterator: + return item + else: + return None, None + class W_SetIterObject(W_Object): from pypy.objspace.std.settype import setiter_typedef as typedef - def __init__(w_self, setdata): - w_self.content = content = setdata - w_self.len = len(content) - w_self.pos = 0 - w_self.iterator = iter(w_self.content) - - def next_entry(w_self): - for w_key in w_self.iterator: - return w_key - else: - return None + def __init__(w_self, space, iterimplementation): + w_self.space = space + w_self.iterimplementation = iterimplementation registerimplementation(W_SetIterObject) @@ -596,19 +664,10 @@ return w_setiter def next__SetIterObject(space, w_setiter): - content = w_setiter.content - if content is not None: - if w_setiter.len != len(content): - w_setiter.len = -1 # Make this error state sticky - raise OperationError(space.w_RuntimeError, - space.wrap("Set changed size during iteration")) - # look for the next entry - w_result = w_setiter.next_entry() - if w_result is not None: - w_setiter.pos += 1 - return w_result - # no more entries - w_setiter.content = None + iterimplementation = w_setiter.iterimplementation + w_key, w_value = iterimplementation.next() + if w_key is not None: + return w_key raise OperationError(space.w_StopIteration, space.w_None) # XXX __length_hint__() @@ -1086,7 +1145,8 @@ len__Frozenset = len__Set def iter__Set(space, w_left): - return W_SetIterObject(w_left.getkeys()) + #return iter(w_left.getkeys()) + return W_SetIterObject(space, w_left.iter()) iter__Frozenset = iter__Set From noreply at buildbot.pypy.org Thu Nov 10 13:50:16 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:16 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: only iterate over keys Message-ID: <20111110125016.804848292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49186:fefe7a5e60af Date: 2011-06-08 12:49 +0200 http://bitbucket.org/pypy/pypy/changeset/fefe7a5e60af/ Log: only iterate over keys diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -590,25 +590,25 @@ class IteratorImplementation(object): def __init__(self, space, implementation): self.space = space - self.dictimplementation = implementation + self.setimplementation = implementation self.len = implementation.length() self.pos = 0 def next(self): - if self.dictimplementation is None: - return None, None - if self.len != self.dictimplementation.length(): + if self.setimplementation is None: + return None + if self.len != self.setimplementation.length(): self.len = -1 # Make this error state sticky raise OperationError(self.space.w_RuntimeError, - self.space.wrap("dictionary changed size during iteration")) + self.space.wrap("set changed size during iteration")) # look for the next entry if self.pos < self.len: result = self.next_entry() self.pos += 1 return result # no more entries - self.dictimplementation = None - return None, None + self.setimplementation = None + return None def next_entry(self): """ Purely abstract method @@ -616,40 +616,40 @@ raise NotImplementedError def length(self): - if self.dictimplementation is not None: + if self.setimplementation is not None: return self.len - self.pos return 0 class EmptyIteratorImplementation(IteratorImplementation): def next(self): - return (None, None) + return None class IntegerIteratorImplementation(IteratorImplementation): #XXX same implementation in dictmultiobject on dictstrategy-branch def __init__(self, space, strategy, dictimplementation): IteratorImplementation.__init__(self, space, dictimplementation) d = strategy.cast_from_void_star(dictimplementation.sstorage) - self.iterator = d.iteritems() + self.iterator = d.iterkeys() def next_entry(self): # note that this 'for' loop only runs once, at most - for w_key, w_value in self.iterator: - return self.space.wrap(w_key), w_value + for w_key in self.iterator: + return self.space.wrap(w_key) else: - return None, None + return None class RDictIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): IteratorImplementation.__init__(self, space, dictimplementation) d = strategy.cast_from_void_star(dictimplementation.sstorage) - self.iterator = d.iteritems() + self.iterator = d.iterkeys() def next_entry(self): # note that this 'for' loop only runs once, at most - for item in self.iterator: - return item + for key in self.iterator: + return key else: - return None, None + return None class W_SetIterObject(W_Object): from pypy.objspace.std.settype import setiter_typedef as typedef @@ -665,7 +665,7 @@ def next__SetIterObject(space, w_setiter): iterimplementation = w_setiter.iterimplementation - w_key, w_value = iterimplementation.next() + w_key = iterimplementation.next() if w_key is not None: return w_key raise OperationError(space.w_StopIteration, space.w_None) From noreply at buildbot.pypy.org Thu Nov 10 13:50:55 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:55 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: argument must be None to create a new empty set Message-ID: <20111110125055.702EE82A87@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49188:67d070d04ba6 Date: 2011-07-19 14:02 +0200 http://bitbucket.org/pypy/pypy/changeset/67d070d04ba6/ Log: argument must be None to create a new empty set diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -310,7 +310,7 @@ def newset(self): from pypy.objspace.std.setobject import newset - return W_SetObject(self, newset(self)) + return W_SetObject(self, None) def newslice(self, w_start, w_end, w_step): return W_SliceObject(w_start, w_end, w_step) From noreply at buildbot.pypy.org Thu Nov 10 13:50:54 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:54 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merged default into set-strategies Message-ID: <20111110125054.0E1418292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49187:7291b68c48ce Date: 2011-07-19 12:13 +0200 http://bitbucket.org/pypy/pypy/changeset/7291b68c48ce/ Log: merged default into set-strategies diff too long, truncating to 10000 out of 130446 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -1,6 +1,7 @@ syntax: glob *.py[co] *~ +.*.swp syntax: regexp ^testresult$ @@ -17,12 +18,15 @@ ^pypy/module/cpyext/test/.+\.manifest$ ^pypy/module/test_lib_pypy/ctypes_tests/.+\.o$ ^pypy/doc/.+\.html$ +^pypy/doc/config/.+\.rst$ ^pypy/doc/basicblock\.asc$ ^pypy/doc/.+\.svninfo$ ^pypy/translator/c/src/libffi_msvc/.+\.obj$ ^pypy/translator/c/src/libffi_msvc/.+\.dll$ ^pypy/translator/c/src/libffi_msvc/.+\.lib$ ^pypy/translator/c/src/libffi_msvc/.+\.exp$ +^pypy/translator/c/src/cjkcodecs/.+\.o$ +^pypy/translator/c/src/cjkcodecs/.+\.obj$ ^pypy/translator/jvm/\.project$ ^pypy/translator/jvm/\.classpath$ ^pypy/translator/jvm/eclipse-bin$ @@ -35,6 +39,8 @@ ^pypy/translator/benchmark/shootout_benchmarks$ ^pypy/translator/goal/pypy-translation-snapshot$ ^pypy/translator/goal/pypy-c +^pypy/translator/goal/pypy-jvm +^pypy/translator/goal/pypy-jvm.jar ^pypy/translator/goal/.+\.exe$ ^pypy/translator/goal/.+\.dll$ ^pypy/translator/goal/target.+-c$ @@ -61,6 +67,7 @@ ^pypy/doc/image/lattice3\.png$ ^pypy/doc/image/stackless_informal\.png$ ^pypy/doc/image/parsing_example.+\.png$ +^pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test\.o$ ^compiled ^.git/ ^release/ diff --git a/.hgtags b/.hgtags new file mode 100644 --- /dev/null +++ b/.hgtags @@ -0,0 +1,1 @@ +b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,78 +37,155 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc + Antonio Cuni Samuele Pedroni - Antonio Cuni Michael Hudson + Holger Krekel Christian Tismer - Holger Krekel + Benjamin Peterson Eric van Riet Paap + Anders Chrigström + Håkan Ardö Richard Emslie - Anders Chrigstrom - Amaury Forgeot d Arc - Aurelien Campeas + Dan Villiom Podlaski Christiansen + Alexander Schremmer + Alex Gaynor + David Schneider + Aurelién Campeas Anders Lehmann + Camillo Bruni Niklaus Haldimann + Leonardo Santagada + Toon Verwaest Seo Sanghyeon - Leonardo Santagada Lawrence Oluyede + Bartosz Skowron Jakub Gustak Guido Wesdorp - Benjamin Peterson - Alexander Schremmer + Adrien Di Mascio + Laura Creighton + Ludovic Aubry Niko Matsakis - Ludovic Aubry + Daniel Roberts + Jason Creighton + Jacob Hallén Alex Martelli - Toon Verwaest + Anders Hammarquist + Jan de Mooij Stephan Diehl - Adrien Di Mascio + Michael Foord Stefan Schwarzer Tomek Meka Patrick Maupin - Jacob Hallen - Laura Creighton Bob Ippolito - Camillo Bruni - Simon Burton Bruno Gola Alexandre Fayolle Marius Gedminas + Simon Burton + Jean-Paul Calderone + John Witulski + Wim Lavrijsen + Andreas Stührk + Jean-Philippe St. Pierre Guido van Rossum + Pavel Vinogradov Valentino Volonghi + Paul deGrandis Adrian Kuhn - Paul deGrandis + tav + Georg Brandl Gerald Klix Wanja Saatkamp - Anders Hammarquist + Boris Feigin Oscar Nierstrasz + Dario Bertini + David Malcolm Eugene Oden + Henry Mason Lukas Renggli Guenter Jantzen + Ronny Pfannschmidt + Bert Freudenberg + Amit Regmi + Ben Young + Nicolas Chauvat + Andrew Durdin + Michael Schneider + Nicholas Riley + Rocco Moretti + Gintautas Miliauskas + Michael Twomey + Igor Trindade Oliveira + Lucian Branescu Mihaila + Olivier Dormond + Jared Grubb + Karl Bartel + Gabriel Lavoie + Brian Dorsey + Victor Stinner + Stuart Williams + Toby Watson + Antoine Pitrou + Justas Sadzevicius + Neil Shepperd + Mikael Schönenberg + Gasper Zejn + Jonathan David Riehl + Elmo Mäntynen + Anders Qvist + Beatrice Düring + Alexander Sedov + Vincent Legoll + Alan McIntyre + Romain Guillebert + Alex Perry + Jens-Uwe Mager + Dan Stromberg + Lukas Diekmann + Carl Meyer + Pieter Zieschang + Alejandro J. Cura + Sylvain Thenault + Travis Francis Athougies + Henrik Vendelbo + Lutz Paelike + Jacob Oscarson + Martin Blais + Lucio Torre + Lene Wagner + Miguel de Val Borro + Ignas Mikalajunas + Artur Lisiecki + Joshua Gilbert + Godefroid Chappelle + Yusei Tahara + Christopher Armstrong + Stephan Busemann + Gustavo Niemeyer + William Leslie + Akira Li + Kristján Valur Jónsson + Bobby Impollonia + Andrew Thompson + Anders Sigfridsson + Jacek Generowicz + Dan Colish + Sven Hager + Zooko Wilcox-O Hearn + Anders Hammarquist Dinu Gherman - Bartosz Skowron - Georg Brandl - Ben Young - Jean-Paul Calderone - Nicolas Chauvat - Rocco Moretti - Michael Twomey - boria - Jared Grubb - Olivier Dormond - Stuart Williams - Jens-Uwe Mager - Justas Sadzevicius - Mikael Schönenberg - Brian Dorsey - Jonathan David Riehl - Beatrice During - Elmo Mäntynen - Andreas Friedge - Alex Gaynor - Anders Qvist - Alan McIntyre - Bert Freudenberg - Tav + Dan Colish + Daniel Neuhäuser + Michael Chermside + Konrad Delong + Anna Ravencroft + Greg Price + Armin Ronacher + Jim Baker + Philip Jenvey + Rodrigo Araújo + Brett Cannon Heinrich-Heine University, Germany Open End AB (formerly AB Strakt), Sweden diff --git a/README b/README --- a/README +++ b/README @@ -15,10 +15,10 @@ The getting-started document will help guide you: - http://codespeak.net/pypy/dist/pypy/doc/getting-started.html + http://doc.pypy.org/en/latest/getting-started.html It will also point you to the rest of the documentation which is generated from files in the pypy/doc directory within the source repositories. Enjoy and send us feedback! - the pypy-dev team + the pypy-dev team diff --git a/_pytest/__init__.py b/_pytest/__init__.py --- a/_pytest/__init__.py +++ b/_pytest/__init__.py @@ -1,2 +1,2 @@ # -__version__ = '2.0.3.dev3' +__version__ = '2.1.0.dev4' diff --git a/_pytest/assertion.py b/_pytest/assertion.py deleted file mode 100644 --- a/_pytest/assertion.py +++ /dev/null @@ -1,179 +0,0 @@ -""" -support for presented detailed information in failing assertions. -""" -import py -import sys -from _pytest.monkeypatch import monkeypatch - -def pytest_addoption(parser): - group = parser.getgroup("debugconfig") - group._addoption('--no-assert', action="store_true", default=False, - dest="noassert", - help="disable python assert expression reinterpretation."), - -def pytest_configure(config): - # The _reprcompare attribute on the py.code module is used by - # py._code._assertionnew to detect this plugin was loaded and in - # turn call the hooks defined here as part of the - # DebugInterpreter. - config._monkeypatch = m = monkeypatch() - warn_about_missing_assertion() - if not config.getvalue("noassert") and not config.getvalue("nomagic"): - def callbinrepr(op, left, right): - hook_result = config.hook.pytest_assertrepr_compare( - config=config, op=op, left=left, right=right) - for new_expl in hook_result: - if new_expl: - return '\n~'.join(new_expl) - m.setattr(py.builtin.builtins, - 'AssertionError', py.code._AssertionError) - m.setattr(py.code, '_reprcompare', callbinrepr) - -def pytest_unconfigure(config): - config._monkeypatch.undo() - -def warn_about_missing_assertion(): - try: - assert False - except AssertionError: - pass - else: - sys.stderr.write("WARNING: failing tests may report as passing because " - "assertions are turned off! (are you using python -O?)\n") - -# Provide basestring in python3 -try: - basestring = basestring -except NameError: - basestring = str - - -def pytest_assertrepr_compare(op, left, right): - """return specialised explanations for some operators/operands""" - width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op - left_repr = py.io.saferepr(left, maxsize=int(width/2)) - right_repr = py.io.saferepr(right, maxsize=width-len(left_repr)) - summary = '%s %s %s' % (left_repr, op, right_repr) - - issequence = lambda x: isinstance(x, (list, tuple)) - istext = lambda x: isinstance(x, basestring) - isdict = lambda x: isinstance(x, dict) - isset = lambda x: isinstance(x, set) - - explanation = None - try: - if op == '==': - if istext(left) and istext(right): - explanation = _diff_text(left, right) - elif issequence(left) and issequence(right): - explanation = _compare_eq_sequence(left, right) - elif isset(left) and isset(right): - explanation = _compare_eq_set(left, right) - elif isdict(left) and isdict(right): - explanation = _diff_text(py.std.pprint.pformat(left), - py.std.pprint.pformat(right)) - elif op == 'not in': - if istext(left) and istext(right): - explanation = _notin_text(left, right) - except py.builtin._sysex: - raise - except: - excinfo = py.code.ExceptionInfo() - explanation = ['(pytest_assertion plugin: representation of ' - 'details failed. Probably an object has a faulty __repr__.)', - str(excinfo) - ] - - - if not explanation: - return None - - # Don't include pageloads of data, should be configurable - if len(''.join(explanation)) > 80*8: - explanation = ['Detailed information too verbose, truncated'] - - return [summary] + explanation - - -def _diff_text(left, right): - """Return the explanation for the diff between text - - This will skip leading and trailing characters which are - identical to keep the diff minimal. - """ - explanation = [] - i = 0 # just in case left or right has zero length - for i in range(min(len(left), len(right))): - if left[i] != right[i]: - break - if i > 42: - i -= 10 # Provide some context - explanation = ['Skipping %s identical ' - 'leading characters in diff' % i] - left = left[i:] - right = right[i:] - if len(left) == len(right): - for i in range(len(left)): - if left[-i] != right[-i]: - break - if i > 42: - i -= 10 # Provide some context - explanation += ['Skipping %s identical ' - 'trailing characters in diff' % i] - left = left[:-i] - right = right[:-i] - explanation += [line.strip('\n') - for line in py.std.difflib.ndiff(left.splitlines(), - right.splitlines())] - return explanation - - -def _compare_eq_sequence(left, right): - explanation = [] - for i in range(min(len(left), len(right))): - if left[i] != right[i]: - explanation += ['At index %s diff: %r != %r' % - (i, left[i], right[i])] - break - if len(left) > len(right): - explanation += ['Left contains more items, ' - 'first extra item: %s' % py.io.saferepr(left[len(right)],)] - elif len(left) < len(right): - explanation += ['Right contains more items, ' - 'first extra item: %s' % py.io.saferepr(right[len(left)],)] - return explanation # + _diff_text(py.std.pprint.pformat(left), - # py.std.pprint.pformat(right)) - - -def _compare_eq_set(left, right): - explanation = [] - diff_left = left - right - diff_right = right - left - if diff_left: - explanation.append('Extra items in the left set:') - for item in diff_left: - explanation.append(py.io.saferepr(item)) - if diff_right: - explanation.append('Extra items in the right set:') - for item in diff_right: - explanation.append(py.io.saferepr(item)) - return explanation - - -def _notin_text(term, text): - index = text.find(term) - head = text[:index] - tail = text[index+len(term):] - correct_text = head + tail - diff = _diff_text(correct_text, text) - newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)] - for line in diff: - if line.startswith('Skipping'): - continue - if line.startswith('- '): - continue - if line.startswith('+ '): - newdiff.append(' ' + line[2:]) - else: - newdiff.append(line) - return newdiff diff --git a/_pytest/assertion/__init__.py b/_pytest/assertion/__init__.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/__init__.py @@ -0,0 +1,128 @@ +""" +support for presenting detailed information in failing assertions. +""" +import py +import imp +import marshal +import struct +import sys +import pytest +from _pytest.monkeypatch import monkeypatch +from _pytest.assertion import reinterpret, util + +try: + from _pytest.assertion.rewrite import rewrite_asserts +except ImportError: + rewrite_asserts = None +else: + import ast + +def pytest_addoption(parser): + group = parser.getgroup("debugconfig") + group.addoption('--assertmode', action="store", dest="assertmode", + choices=("on", "old", "off", "default"), default="default", + metavar="on|old|off", + help="""control assertion debugging tools. +'off' performs no assertion debugging. +'old' reinterprets the expressions in asserts to glean information. +'on' (the default) rewrites the assert statements in test modules to provide +sub-expression results.""") + group.addoption('--no-assert', action="store_true", default=False, + dest="noassert", help="DEPRECATED equivalent to --assertmode=off") + group.addoption('--nomagic', action="store_true", default=False, + dest="nomagic", help="DEPRECATED equivalent to --assertmode=off") + +class AssertionState: + """State for the assertion plugin.""" + + def __init__(self, config, mode): + self.mode = mode + self.trace = config.trace.root.get("assertion") + +def pytest_configure(config): + warn_about_missing_assertion() + mode = config.getvalue("assertmode") + if config.getvalue("noassert") or config.getvalue("nomagic"): + if mode not in ("off", "default"): + raise pytest.UsageError("assertion options conflict") + mode = "off" + elif mode == "default": + mode = "on" + if mode != "off": + def callbinrepr(op, left, right): + hook_result = config.hook.pytest_assertrepr_compare( + config=config, op=op, left=left, right=right) + for new_expl in hook_result: + if new_expl: + return '\n~'.join(new_expl) + m = monkeypatch() + config._cleanup.append(m.undo) + m.setattr(py.builtin.builtins, 'AssertionError', + reinterpret.AssertionError) + m.setattr(util, '_reprcompare', callbinrepr) + if mode == "on" and rewrite_asserts is None: + mode = "old" + config._assertstate = AssertionState(config, mode) + config._assertstate.trace("configured with mode set to %r" % (mode,)) + +def _write_pyc(co, source_path): + if hasattr(imp, "cache_from_source"): + # Handle PEP 3147 pycs. + pyc = py.path.local(imp.cache_from_source(str(source_path))) + pyc.ensure() + else: + pyc = source_path + "c" + mtime = int(source_path.mtime()) + fp = pyc.open("wb") + try: + fp.write(imp.get_magic()) + fp.write(struct.pack(">", + ast.Add : "+", + ast.Sub : "-", + ast.Mult : "*", + ast.Div : "/", + ast.FloorDiv : "//", + ast.Mod : "%", + ast.Eq : "==", + ast.NotEq : "!=", + ast.Lt : "<", + ast.LtE : "<=", + ast.Gt : ">", + ast.GtE : ">=", + ast.Pow : "**", + ast.Is : "is", + ast.IsNot : "is not", + ast.In : "in", + ast.NotIn : "not in" +} + +unary_map = { + ast.Not : "not %s", + ast.Invert : "~%s", + ast.USub : "-%s", + ast.UAdd : "+%s" +} + + +class DebugInterpreter(ast.NodeVisitor): + """Interpret AST nodes to gleam useful debugging information. """ + + def __init__(self, frame): + self.frame = frame + + def generic_visit(self, node): + # Fallback when we don't have a special implementation. + if _is_ast_expr(node): + mod = ast.Expression(node) + co = self._compile(mod) + try: + result = self.frame.eval(co) + except Exception: + raise Failure() + explanation = self.frame.repr(result) + return explanation, result + elif _is_ast_stmt(node): + mod = ast.Module([node]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co) + except Exception: + raise Failure() + return None, None + else: + raise AssertionError("can't handle %s" %(node,)) + + def _compile(self, source, mode="eval"): + return compile(source, "", mode) + + def visit_Expr(self, expr): + return self.visit(expr.value) + + def visit_Module(self, mod): + for stmt in mod.body: + self.visit(stmt) + + def visit_Name(self, name): + explanation, result = self.generic_visit(name) + # See if the name is local. + source = "%r in locals() is not globals()" % (name.id,) + co = self._compile(source) + try: + local = self.frame.eval(co) + except Exception: + # have to assume it isn't + local = None + if local is None or not self.frame.is_true(local): + return name.id, result + return explanation, result + + def visit_Compare(self, comp): + left = comp.left + left_explanation, left_result = self.visit(left) + for op, next_op in zip(comp.ops, comp.comparators): + next_explanation, next_result = self.visit(next_op) + op_symbol = operator_map[op.__class__] + explanation = "%s %s %s" % (left_explanation, op_symbol, + next_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=next_result) + except Exception: + raise Failure(explanation) + try: + if not self.frame.is_true(result): + break + except KeyboardInterrupt: + raise + except: + break + left_explanation, left_result = next_explanation, next_result + + if util._reprcompare is not None: + res = util._reprcompare(op_symbol, left_result, next_result) + if res: + explanation = res + return explanation, result + + def visit_BoolOp(self, boolop): + is_or = isinstance(boolop.op, ast.Or) + explanations = [] + for operand in boolop.values: + explanation, result = self.visit(operand) + explanations.append(explanation) + if result == is_or: + break + name = is_or and " or " or " and " + explanation = "(" + name.join(explanations) + ")" + return explanation, result + + def visit_UnaryOp(self, unary): + pattern = unary_map[unary.op.__class__] + operand_explanation, operand_result = self.visit(unary.operand) + explanation = pattern % (operand_explanation,) + co = self._compile(pattern % ("__exprinfo_expr",)) + try: + result = self.frame.eval(co, __exprinfo_expr=operand_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_BinOp(self, binop): + left_explanation, left_result = self.visit(binop.left) + right_explanation, right_result = self.visit(binop.right) + symbol = operator_map[binop.op.__class__] + explanation = "(%s %s %s)" % (left_explanation, symbol, + right_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=right_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_Call(self, call): + func_explanation, func = self.visit(call.func) + arg_explanations = [] + ns = {"__exprinfo_func" : func} + arguments = [] + for arg in call.args: + arg_explanation, arg_result = self.visit(arg) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + arguments.append(arg_name) + arg_explanations.append(arg_explanation) + for keyword in call.keywords: + arg_explanation, arg_result = self.visit(keyword.value) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + keyword_source = "%s=%%s" % (keyword.arg) + arguments.append(keyword_source % (arg_name,)) + arg_explanations.append(keyword_source % (arg_explanation,)) + if call.starargs: + arg_explanation, arg_result = self.visit(call.starargs) + arg_name = "__exprinfo_star" + ns[arg_name] = arg_result + arguments.append("*%s" % (arg_name,)) + arg_explanations.append("*%s" % (arg_explanation,)) + if call.kwargs: + arg_explanation, arg_result = self.visit(call.kwargs) + arg_name = "__exprinfo_kwds" + ns[arg_name] = arg_result + arguments.append("**%s" % (arg_name,)) + arg_explanations.append("**%s" % (arg_explanation,)) + args_explained = ", ".join(arg_explanations) + explanation = "%s(%s)" % (func_explanation, args_explained) + args = ", ".join(arguments) + source = "__exprinfo_func(%s)" % (args,) + co = self._compile(source) + try: + result = self.frame.eval(co, **ns) + except Exception: + raise Failure(explanation) + pattern = "%s\n{%s = %s\n}" + rep = self.frame.repr(result) + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def _is_builtin_name(self, name): + pattern = "%r not in globals() and %r not in locals()" + source = pattern % (name.id, name.id) + co = self._compile(source) + try: + return self.frame.eval(co) + except Exception: + return False + + def visit_Attribute(self, attr): + if not isinstance(attr.ctx, ast.Load): + return self.generic_visit(attr) + source_explanation, source_result = self.visit(attr.value) + explanation = "%s.%s" % (source_explanation, attr.attr) + source = "__exprinfo_expr.%s" % (attr.attr,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + raise Failure(explanation) + explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result), + self.frame.repr(result), + source_explanation, attr.attr) + # Check if the attr is from an instance. + source = "%r in getattr(__exprinfo_expr, '__dict__', {})" + source = source % (attr.attr,) + co = self._compile(source) + try: + from_instance = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + from_instance = None + if from_instance is None or self.frame.is_true(from_instance): + rep = self.frame.repr(result) + pattern = "%s\n{%s = %s\n}" + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def visit_Assert(self, assrt): + test_explanation, test_result = self.visit(assrt.test) + explanation = "assert %s" % (test_explanation,) + if not self.frame.is_true(test_result): + try: + raise BuiltinAssertionError + except Exception: + raise Failure(explanation) + return explanation, test_result + + def visit_Assign(self, assign): + value_explanation, value_result = self.visit(assign.value) + explanation = "... = %s" % (value_explanation,) + name = ast.Name("__exprinfo_expr", ast.Load(), + lineno=assign.value.lineno, + col_offset=assign.value.col_offset) + new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno, + col_offset=assign.col_offset) + mod = ast.Module([new_assign]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co, __exprinfo_expr=value_result) + except Exception: + raise Failure(explanation) + return explanation, value_result diff --git a/_pytest/assertion/oldinterpret.py b/_pytest/assertion/oldinterpret.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/oldinterpret.py @@ -0,0 +1,552 @@ +import py +import sys, inspect +from compiler import parse, ast, pycodegen +from _pytest.assertion.util import format_explanation +from _pytest.assertion.reinterpret import BuiltinAssertionError + +passthroughex = py.builtin._sysex + +class Failure: + def __init__(self, node): + self.exc, self.value, self.tb = sys.exc_info() + self.node = node + +class View(object): + """View base class. + + If C is a subclass of View, then C(x) creates a proxy object around + the object x. The actual class of the proxy is not C in general, + but a *subclass* of C determined by the rules below. To avoid confusion + we call view class the class of the proxy (a subclass of C, so of View) + and object class the class of x. + + Attributes and methods not found in the proxy are automatically read on x. + Other operations like setting attributes are performed on the proxy, as + determined by its view class. The object x is available from the proxy + as its __obj__ attribute. + + The view class selection is determined by the __view__ tuples and the + optional __viewkey__ method. By default, the selected view class is the + most specific subclass of C whose __view__ mentions the class of x. + If no such subclass is found, the search proceeds with the parent + object classes. For example, C(True) will first look for a subclass + of C with __view__ = (..., bool, ...) and only if it doesn't find any + look for one with __view__ = (..., int, ...), and then ..., object,... + If everything fails the class C itself is considered to be the default. + + Alternatively, the view class selection can be driven by another aspect + of the object x, instead of the class of x, by overriding __viewkey__. + See last example at the end of this module. + """ + + _viewcache = {} + __view__ = () + + def __new__(rootclass, obj, *args, **kwds): + self = object.__new__(rootclass) + self.__obj__ = obj + self.__rootclass__ = rootclass + key = self.__viewkey__() + try: + self.__class__ = self._viewcache[key] + except KeyError: + self.__class__ = self._selectsubclass(key) + return self + + def __getattr__(self, attr): + # attributes not found in the normal hierarchy rooted on View + # are looked up in the object's real class + return getattr(self.__obj__, attr) + + def __viewkey__(self): + return self.__obj__.__class__ + + def __matchkey__(self, key, subclasses): + if inspect.isclass(key): + keys = inspect.getmro(key) + else: + keys = [key] + for key in keys: + result = [C for C in subclasses if key in C.__view__] + if result: + return result + return [] + + def _selectsubclass(self, key): + subclasses = list(enumsubclasses(self.__rootclass__)) + for C in subclasses: + if not isinstance(C.__view__, tuple): + C.__view__ = (C.__view__,) + choices = self.__matchkey__(key, subclasses) + if not choices: + return self.__rootclass__ + elif len(choices) == 1: + return choices[0] + else: + # combine the multiple choices + return type('?', tuple(choices), {}) + + def __repr__(self): + return '%s(%r)' % (self.__rootclass__.__name__, self.__obj__) + + +def enumsubclasses(cls): + for subcls in cls.__subclasses__(): + for subsubclass in enumsubclasses(subcls): + yield subsubclass + yield cls + + +class Interpretable(View): + """A parse tree node with a few extra methods.""" + explanation = None + + def is_builtin(self, frame): + return False + + def eval(self, frame): + # fall-back for unknown expression nodes + try: + expr = ast.Expression(self.__obj__) + expr.filename = '' + self.__obj__.filename = '' + co = pycodegen.ExpressionCodeGenerator(expr).getCode() + result = frame.eval(co) + except passthroughex: + raise + except: + raise Failure(self) + self.result = result + self.explanation = self.explanation or frame.repr(self.result) + + def run(self, frame): + # fall-back for unknown statement nodes + try: + expr = ast.Module(None, ast.Stmt([self.__obj__])) + expr.filename = '' + co = pycodegen.ModuleCodeGenerator(expr).getCode() + frame.exec_(co) + except passthroughex: + raise + except: + raise Failure(self) + + def nice_explanation(self): + return format_explanation(self.explanation) + + +class Name(Interpretable): + __view__ = ast.Name + + def is_local(self, frame): + source = '%r in locals() is not globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_global(self, frame): + source = '%r in globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_builtin(self, frame): + source = '%r not in locals() and %r not in globals()' % ( + self.name, self.name) + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + super(Name, self).eval(frame) + if not self.is_local(frame): + self.explanation = self.name + +class Compare(Interpretable): + __view__ = ast.Compare + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + for operation, expr2 in self.ops: + if hasattr(self, 'result'): + # shortcutting in chained expressions + if not frame.is_true(self.result): + break + expr2 = Interpretable(expr2) + expr2.eval(frame) + self.explanation = "%s %s %s" % ( + expr.explanation, operation, expr2.explanation) + source = "__exprinfo_left %s __exprinfo_right" % operation + try: + self.result = frame.eval(source, + __exprinfo_left=expr.result, + __exprinfo_right=expr2.result) + except passthroughex: + raise + except: + raise Failure(self) + expr = expr2 + +class And(Interpretable): + __view__ = ast.And + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + explanations.append(expr.explanation) + self.result = expr.result + if not frame.is_true(expr.result): + break + self.explanation = '(' + ' and '.join(explanations) + ')' + +class Or(Interpretable): + __view__ = ast.Or + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + explanations.append(expr.explanation) + self.result = expr.result + if frame.is_true(expr.result): + break + self.explanation = '(' + ' or '.join(explanations) + ')' + + +# == Unary operations == +keepalive = [] +for astclass, astpattern in { + ast.Not : 'not __exprinfo_expr', + ast.Invert : '(~__exprinfo_expr)', + }.items(): + + class UnaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + expr = Interpretable(self.expr) + expr.eval(frame) + self.explanation = astpattern.replace('__exprinfo_expr', + expr.explanation) + try: + self.result = frame.eval(astpattern, + __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + + keepalive.append(UnaryArith) + +# == Binary operations == +for astclass, astpattern in { + ast.Add : '(__exprinfo_left + __exprinfo_right)', + ast.Sub : '(__exprinfo_left - __exprinfo_right)', + ast.Mul : '(__exprinfo_left * __exprinfo_right)', + ast.Div : '(__exprinfo_left / __exprinfo_right)', + ast.Mod : '(__exprinfo_left % __exprinfo_right)', + ast.Power : '(__exprinfo_left ** __exprinfo_right)', + }.items(): + + class BinaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + left = Interpretable(self.left) + left.eval(frame) + right = Interpretable(self.right) + right.eval(frame) + self.explanation = (astpattern + .replace('__exprinfo_left', left .explanation) + .replace('__exprinfo_right', right.explanation)) + try: + self.result = frame.eval(astpattern, + __exprinfo_left=left.result, + __exprinfo_right=right.result) + except passthroughex: + raise + except: + raise Failure(self) + + keepalive.append(BinaryArith) + + +class CallFunc(Interpretable): + __view__ = ast.CallFunc + + def is_bool(self, frame): + source = 'isinstance(__exprinfo_value, bool)' + try: + return frame.is_true(frame.eval(source, + __exprinfo_value=self.result)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + node = Interpretable(self.node) + node.eval(frame) + explanations = [] + vars = {'__exprinfo_fn': node.result} + source = '__exprinfo_fn(' + for a in self.args: + if isinstance(a, ast.Keyword): + keyword = a.name + a = a.expr + else: + keyword = None + a = Interpretable(a) + a.eval(frame) + argname = '__exprinfo_%d' % len(vars) + vars[argname] = a.result + if keyword is None: + source += argname + ',' + explanations.append(a.explanation) + else: + source += '%s=%s,' % (keyword, argname) + explanations.append('%s=%s' % (keyword, a.explanation)) + if self.star_args: + star_args = Interpretable(self.star_args) + star_args.eval(frame) + argname = '__exprinfo_star' + vars[argname] = star_args.result + source += '*' + argname + ',' + explanations.append('*' + star_args.explanation) + if self.dstar_args: + dstar_args = Interpretable(self.dstar_args) + dstar_args.eval(frame) + argname = '__exprinfo_kwds' + vars[argname] = dstar_args.result + source += '**' + argname + ',' + explanations.append('**' + dstar_args.explanation) + self.explanation = "%s(%s)" % ( + node.explanation, ', '.join(explanations)) + if source.endswith(','): + source = source[:-1] + source += ')' + try: + self.result = frame.eval(source, **vars) + except passthroughex: + raise + except: + raise Failure(self) + if not node.is_builtin(frame) or not self.is_bool(frame): + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +class Getattr(Interpretable): + __view__ = ast.Getattr + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + source = '__exprinfo_expr.%s' % self.attrname + try: + self.result = frame.eval(source, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + self.explanation = '%s.%s' % (expr.explanation, self.attrname) + # if the attribute comes from the instance, its value is interesting + source = ('hasattr(__exprinfo_expr, "__dict__") and ' + '%r in __exprinfo_expr.__dict__' % self.attrname) + try: + from_instance = frame.is_true( + frame.eval(source, __exprinfo_expr=expr.result)) + except passthroughex: + raise + except: + from_instance = True + if from_instance: + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +# == Re-interpretation of full statements == + +class Assert(Interpretable): + __view__ = ast.Assert + + def run(self, frame): + test = Interpretable(self.test) + test.eval(frame) + # print the result as 'assert ' + self.result = test.result + self.explanation = 'assert ' + test.explanation + if not frame.is_true(test.result): + try: + raise BuiltinAssertionError + except passthroughex: + raise + except: + raise Failure(self) + +class Assign(Interpretable): + __view__ = ast.Assign + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = '... = ' + expr.explanation + # fall-back-run the rest of the assignment + ass = ast.Assign(self.nodes, ast.Name('__exprinfo_expr')) + mod = ast.Module(None, ast.Stmt([ass])) + mod.filename = '' + co = pycodegen.ModuleCodeGenerator(mod).getCode() + try: + frame.exec_(co, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + +class Discard(Interpretable): + __view__ = ast.Discard + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = expr.explanation + +class Stmt(Interpretable): + __view__ = ast.Stmt + + def run(self, frame): + for stmt in self.nodes: + stmt = Interpretable(stmt) + stmt.run(frame) + + +def report_failure(e): + explanation = e.node.nice_explanation() + if explanation: + explanation = ", in: " + explanation + else: + explanation = "" + sys.stdout.write("%s: %s%s\n" % (e.exc.__name__, e.value, explanation)) + +def check(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + expr = parse(s, 'eval') + assert isinstance(expr, ast.Expression) + node = Interpretable(expr.node) + try: + node.eval(frame) + except passthroughex: + raise + except Failure: + e = sys.exc_info()[1] + report_failure(e) + else: + if not frame.is_true(node.result): + sys.stderr.write("assertion failed: %s\n" % node.nice_explanation()) + + +########################################################### +# API / Entry points +# ######################################################### + +def interpret(source, frame, should_fail=False): + module = Interpretable(parse(source, 'exec').node) + #print "got module", module + if isinstance(frame, py.std.types.FrameType): + frame = py.code.Frame(frame) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + return getfailure(e) + except passthroughex: + raise + except: + import traceback + traceback.print_exc() + if should_fail: + return ("(assertion failed, but when it was re-run for " + "printing intermediate values, it did not fail. Suggestions: " + "compute assert expression before the assert or use --nomagic)") + else: + return None + +def getmsg(excinfo): + if isinstance(excinfo, tuple): + excinfo = py.code.ExceptionInfo(excinfo) + #frame, line = gettbline(tb) + #frame = py.code.Frame(frame) + #return interpret(line, frame) + + tb = excinfo.traceback[-1] + source = str(tb.statement).strip() + x = interpret(source, tb.frame, should_fail=True) + if not isinstance(x, str): + raise TypeError("interpret returned non-string %r" % (x,)) + return x + +def getfailure(e): + explanation = e.node.nice_explanation() + if str(e.value): + lines = explanation.split('\n') + lines[0] += " << %s" % (e.value,) + explanation = '\n'.join(lines) + text = "%s: %s" % (e.exc.__name__, explanation) + if text.startswith('AssertionError: assert '): + text = text[16:] + return text + +def run(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + module = Interpretable(parse(s, 'exec').node) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + report_failure(e) + + +if __name__ == '__main__': + # example: + def f(): + return 5 + def g(): + return 3 + def h(x): + return 'never' + check("f() * g() == 5") + check("not f()") + check("not (f() and g() or 0)") + check("f() == g()") + i = 4 + check("i == f()") + check("len(f()) == 0") + check("isinstance(2+3+4, float)") + + run("x = i") + check("x == 5") + + run("assert not f(), 'oops'") + run("a, b, c = 1, 2") + run("a, b, c = f()") + + check("max([f(),g()]) == 4") + check("'hello'[g()] == 'h'") + run("'guk%d' % h(f())") diff --git a/_pytest/assertion/reinterpret.py b/_pytest/assertion/reinterpret.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/reinterpret.py @@ -0,0 +1,48 @@ +import sys +import py + +BuiltinAssertionError = py.builtin.builtins.AssertionError + +class AssertionError(BuiltinAssertionError): + def __init__(self, *args): + BuiltinAssertionError.__init__(self, *args) + if args: + try: + self.msg = str(args[0]) + except py.builtin._sysex: + raise + except: + self.msg = "<[broken __repr__] %s at %0xd>" %( + args[0].__class__, id(args[0])) + else: + f = py.code.Frame(sys._getframe(1)) + try: + source = f.code.fullsource + if source is not None: + try: + source = source.getstatement(f.lineno, assertion=True) + except IndexError: + source = None + else: + source = str(source.deindent()).strip() + except py.error.ENOENT: + source = None + # this can also occur during reinterpretation, when the + # co_filename is set to "". + if source: + self.msg = reinterpret(source, f, should_fail=True) + else: + self.msg = "" + if not self.args: + self.args = (self.msg,) + +if sys.version_info > (3, 0): + AssertionError.__module__ = "builtins" + reinterpret_old = "old reinterpretation not available for py3" +else: + from _pytest.assertion.oldinterpret import interpret as reinterpret_old +if sys.version_info >= (2, 6) or (sys.platform.startswith("java")): + from _pytest.assertion.newinterpret import interpret as reinterpret +else: + reinterpret = reinterpret_old + diff --git a/_pytest/assertion/rewrite.py b/_pytest/assertion/rewrite.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/rewrite.py @@ -0,0 +1,340 @@ +"""Rewrite assertion AST to produce nice error messages""" + +import ast +import collections +import itertools +import sys + +import py +from _pytest.assertion import util + + +def rewrite_asserts(mod): + """Rewrite the assert statements in mod.""" + AssertionRewriter().run(mod) + + +_saferepr = py.io.saferepr +from _pytest.assertion.util import format_explanation as _format_explanation + +def _format_boolop(operands, explanations, is_or): + show_explanations = [] + for operand, expl in zip(operands, explanations): + show_explanations.append(expl) + if operand == is_or: + break + return "(" + (is_or and " or " or " and ").join(show_explanations) + ")" + +def _call_reprcompare(ops, results, expls, each_obj): + for i, res, expl in zip(range(len(ops)), results, expls): + try: + done = not res + except Exception: + done = True + if done: + break + if util._reprcompare is not None: + custom = util._reprcompare(ops[i], each_obj[i], each_obj[i + 1]) + if custom is not None: + return custom + return expl + + +unary_map = { + ast.Not : "not %s", + ast.Invert : "~%s", + ast.USub : "-%s", + ast.UAdd : "+%s" +} + +binop_map = { + ast.BitOr : "|", + ast.BitXor : "^", + ast.BitAnd : "&", + ast.LShift : "<<", + ast.RShift : ">>", + ast.Add : "+", + ast.Sub : "-", + ast.Mult : "*", + ast.Div : "/", + ast.FloorDiv : "//", + ast.Mod : "%", + ast.Eq : "==", + ast.NotEq : "!=", + ast.Lt : "<", + ast.LtE : "<=", + ast.Gt : ">", + ast.GtE : ">=", + ast.Pow : "**", + ast.Is : "is", + ast.IsNot : "is not", + ast.In : "in", + ast.NotIn : "not in" +} + + +def set_location(node, lineno, col_offset): + """Set node location information recursively.""" + def _fix(node, lineno, col_offset): + if "lineno" in node._attributes: + node.lineno = lineno + if "col_offset" in node._attributes: + node.col_offset = col_offset + for child in ast.iter_child_nodes(node): + _fix(child, lineno, col_offset) + _fix(node, lineno, col_offset) + return node + + +class AssertionRewriter(ast.NodeVisitor): + + def run(self, mod): + """Find all assert statements in *mod* and rewrite them.""" + if not mod.body: + # Nothing to do. + return + # Insert some special imports at the top of the module but after any + # docstrings and __future__ imports. + aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"), + ast.alias("_pytest.assertion.rewrite", "@pytest_ar")] + expect_docstring = True + pos = 0 + lineno = 0 + for item in mod.body: + if (expect_docstring and isinstance(item, ast.Expr) and + isinstance(item.value, ast.Str)): + doc = item.value.s + if "PYTEST_DONT_REWRITE" in doc: + # The module has disabled assertion rewriting. + return + lineno += len(doc) - 1 + expect_docstring = False + elif (not isinstance(item, ast.ImportFrom) or item.level > 0 and + item.identifier != "__future__"): + lineno = item.lineno + break + pos += 1 + imports = [ast.Import([alias], lineno=lineno, col_offset=0) + for alias in aliases] + mod.body[pos:pos] = imports + # Collect asserts. + nodes = collections.deque([mod]) + while nodes: + node = nodes.popleft() + for name, field in ast.iter_fields(node): + if isinstance(field, list): + new = [] + for i, child in enumerate(field): + if isinstance(child, ast.Assert): + # Transform assert. + new.extend(self.visit(child)) + else: + new.append(child) + if isinstance(child, ast.AST): + nodes.append(child) + setattr(node, name, new) + elif (isinstance(field, ast.AST) and + # Don't recurse into expressions as they can't contain + # asserts. + not isinstance(field, ast.expr)): + nodes.append(field) + + def variable(self): + """Get a new variable.""" + # Use a character invalid in python identifiers to avoid clashing. + name = "@py_assert" + str(next(self.variable_counter)) + self.variables.add(name) + return name + + def assign(self, expr): + """Give *expr* a name.""" + name = self.variable() + self.statements.append(ast.Assign([ast.Name(name, ast.Store())], expr)) + return ast.Name(name, ast.Load()) + + def display(self, expr): + """Call py.io.saferepr on the expression.""" + return self.helper("saferepr", expr) + + def helper(self, name, *args): + """Call a helper in this module.""" + py_name = ast.Name("@pytest_ar", ast.Load()) + attr = ast.Attribute(py_name, "_" + name, ast.Load()) + return ast.Call(attr, list(args), [], None, None) + + def builtin(self, name): + """Return the builtin called *name*.""" + builtin_name = ast.Name("@py_builtins", ast.Load()) + return ast.Attribute(builtin_name, name, ast.Load()) + + def explanation_param(self, expr): + specifier = "py" + str(next(self.variable_counter)) + self.explanation_specifiers[specifier] = expr + return "%(" + specifier + ")s" + + def push_format_context(self): + self.explanation_specifiers = {} + self.stack.append(self.explanation_specifiers) + + def pop_format_context(self, expl_expr): + current = self.stack.pop() + if self.stack: + self.explanation_specifiers = self.stack[-1] + keys = [ast.Str(key) for key in current.keys()] + format_dict = ast.Dict(keys, list(current.values())) + form = ast.BinOp(expl_expr, ast.Mod(), format_dict) + name = "@py_format" + str(next(self.variable_counter)) + self.on_failure.append(ast.Assign([ast.Name(name, ast.Store())], form)) + return ast.Name(name, ast.Load()) + + def generic_visit(self, node): + """Handle expressions we don't have custom code for.""" + assert isinstance(node, ast.expr) + res = self.assign(node) + return res, self.explanation_param(self.display(res)) + + def visit_Assert(self, assert_): + if assert_.msg: + # There's already a message. Don't mess with it. + return [assert_] + self.statements = [] + self.variables = set() + self.variable_counter = itertools.count() + self.stack = [] + self.on_failure = [] + self.push_format_context() + # Rewrite assert into a bunch of statements. + top_condition, explanation = self.visit(assert_.test) + # Create failure message. + body = self.on_failure + negation = ast.UnaryOp(ast.Not(), top_condition) + self.statements.append(ast.If(negation, body, [])) + explanation = "assert " + explanation + template = ast.Str(explanation) + msg = self.pop_format_context(template) + fmt = self.helper("format_explanation", msg) + err_name = ast.Name("AssertionError", ast.Load()) + exc = ast.Call(err_name, [fmt], [], None, None) + if sys.version_info[0] >= 3: + raise_ = ast.Raise(exc, None) + else: + raise_ = ast.Raise(exc, None, None) + body.append(raise_) + # Delete temporary variables. + names = [ast.Name(name, ast.Del()) for name in self.variables] + if names: + delete = ast.Delete(names) + self.statements.append(delete) + # Fix line numbers. + for stmt in self.statements: + set_location(stmt, assert_.lineno, assert_.col_offset) + return self.statements + + def visit_Name(self, name): + # Check if the name is local or not. + locs = ast.Call(self.builtin("locals"), [], [], None, None) + globs = ast.Call(self.builtin("globals"), [], [], None, None) + ops = [ast.In(), ast.IsNot()] + test = ast.Compare(ast.Str(name.id), ops, [locs, globs]) + expr = ast.IfExp(test, self.display(name), ast.Str(name.id)) + return name, self.explanation_param(expr) + + def visit_BoolOp(self, boolop): + operands = [] + explanations = [] + self.push_format_context() + for operand in boolop.values: + res, explanation = self.visit(operand) + operands.append(res) + explanations.append(explanation) + expls = ast.Tuple([ast.Str(expl) for expl in explanations], ast.Load()) + is_or = ast.Num(isinstance(boolop.op, ast.Or)) + expl_template = self.helper("format_boolop", + ast.Tuple(operands, ast.Load()), expls, + is_or) + expl = self.pop_format_context(expl_template) + res = self.assign(ast.BoolOp(boolop.op, operands)) + return res, self.explanation_param(expl) + + def visit_UnaryOp(self, unary): + pattern = unary_map[unary.op.__class__] + operand_res, operand_expl = self.visit(unary.operand) + res = self.assign(ast.UnaryOp(unary.op, operand_res)) + return res, pattern % (operand_expl,) + + def visit_BinOp(self, binop): + symbol = binop_map[binop.op.__class__] + left_expr, left_expl = self.visit(binop.left) + right_expr, right_expl = self.visit(binop.right) + explanation = "(%s %s %s)" % (left_expl, symbol, right_expl) + res = self.assign(ast.BinOp(left_expr, binop.op, right_expr)) + return res, explanation + + def visit_Call(self, call): + new_func, func_expl = self.visit(call.func) + arg_expls = [] + new_args = [] + new_kwargs = [] + new_star = new_kwarg = None + for arg in call.args: + res, expl = self.visit(arg) + new_args.append(res) + arg_expls.append(expl) + for keyword in call.keywords: + res, expl = self.visit(keyword.value) + new_kwargs.append(ast.keyword(keyword.arg, res)) + arg_expls.append(keyword.arg + "=" + expl) + if call.starargs: + new_star, expl = self.visit(call.starargs) + arg_expls.append("*" + expl) + if call.kwargs: + new_kwarg, expl = self.visit(call.kwarg) + arg_expls.append("**" + expl) + expl = "%s(%s)" % (func_expl, ', '.join(arg_expls)) + new_call = ast.Call(new_func, new_args, new_kwargs, new_star, new_kwarg) + res = self.assign(new_call) + res_expl = self.explanation_param(self.display(res)) + outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl) + return res, outer_expl + + def visit_Attribute(self, attr): + if not isinstance(attr.ctx, ast.Load): + return self.generic_visit(attr) + value, value_expl = self.visit(attr.value) + res = self.assign(ast.Attribute(value, attr.attr, ast.Load())) + res_expl = self.explanation_param(self.display(res)) + pat = "%s\n{%s = %s.%s\n}" + expl = pat % (res_expl, res_expl, value_expl, attr.attr) + return res, expl + + def visit_Compare(self, comp): + self.push_format_context() + left_res, left_expl = self.visit(comp.left) + res_variables = [self.variable() for i in range(len(comp.ops))] + load_names = [ast.Name(v, ast.Load()) for v in res_variables] + store_names = [ast.Name(v, ast.Store()) for v in res_variables] + it = zip(range(len(comp.ops)), comp.ops, comp.comparators) + expls = [] + syms = [] + results = [left_res] + for i, op, next_operand in it: + next_res, next_expl = self.visit(next_operand) + results.append(next_res) + sym = binop_map[op.__class__] + syms.append(ast.Str(sym)) + expl = "%s %s %s" % (left_expl, sym, next_expl) + expls.append(ast.Str(expl)) + res_expr = ast.Compare(left_res, [op], [next_res]) + self.statements.append(ast.Assign([store_names[i]], res_expr)) + left_res, left_expl = next_res, next_expl + # Use py.code._reprcompare if that's available. + expl_call = self.helper("call_reprcompare", + ast.Tuple(syms, ast.Load()), + ast.Tuple(load_names, ast.Load()), + ast.Tuple(expls, ast.Load()), + ast.Tuple(results, ast.Load())) + if len(comp.ops) > 1: + res = ast.BoolOp(ast.And(), load_names) + else: + res = load_names[0] + return res, self.explanation_param(self.pop_format_context(expl_call)) diff --git a/_pytest/assertion/util.py b/_pytest/assertion/util.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/util.py @@ -0,0 +1,213 @@ +"""Utilities for assertion debugging""" + +import py + + +# The _reprcompare attribute on the util module is used by the new assertion +# interpretation code and assertion rewriter to detect this plugin was +# loaded and in turn call the hooks defined here as part of the +# DebugInterpreter. +_reprcompare = None + +def format_explanation(explanation): + """This formats an explanation + + Normally all embedded newlines are escaped, however there are + three exceptions: \n{, \n} and \n~. The first two are intended + cover nested explanations, see function and attribute explanations + for examples (.visit_Call(), visit_Attribute()). The last one is + for when one explanation needs to span multiple lines, e.g. when + displaying diffs. + """ + # simplify 'assert False where False = ...' + where = 0 + while True: + start = where = explanation.find("False\n{False = ", where) + if where == -1: + break + level = 0 + for i, c in enumerate(explanation[start:]): + if c == "{": + level += 1 + elif c == "}": + level -= 1 + if not level: + break + else: + raise AssertionError("unbalanced braces: %r" % (explanation,)) + end = start + i + where = end + if explanation[end - 1] == '\n': + explanation = (explanation[:start] + explanation[start+15:end-1] + + explanation[end+1:]) + where -= 17 + raw_lines = (explanation or '').split('\n') + # escape newlines not followed by {, } and ~ + lines = [raw_lines[0]] + for l in raw_lines[1:]: + if l.startswith('{') or l.startswith('}') or l.startswith('~'): + lines.append(l) + else: + lines[-1] += '\\n' + l + + result = lines[:1] + stack = [0] + stackcnt = [0] + for line in lines[1:]: + if line.startswith('{'): + if stackcnt[-1]: + s = 'and ' + else: + s = 'where ' + stack.append(len(result)) + stackcnt[-1] += 1 + stackcnt.append(0) + result.append(' +' + ' '*(len(stack)-1) + s + line[1:]) + elif line.startswith('}'): + assert line.startswith('}') + stack.pop() + stackcnt.pop() + result[stack[-1]] += line[1:] + else: + assert line.startswith('~') + result.append(' '*len(stack) + line[1:]) + assert len(stack) == 1 + return '\n'.join(result) + + +# Provide basestring in python3 +try: + basestring = basestring +except NameError: + basestring = str + + +def assertrepr_compare(op, left, right): + """return specialised explanations for some operators/operands""" + width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op + left_repr = py.io.saferepr(left, maxsize=int(width/2)) + right_repr = py.io.saferepr(right, maxsize=width-len(left_repr)) + summary = '%s %s %s' % (left_repr, op, right_repr) + + issequence = lambda x: isinstance(x, (list, tuple)) + istext = lambda x: isinstance(x, basestring) + isdict = lambda x: isinstance(x, dict) + isset = lambda x: isinstance(x, set) + + explanation = None + try: + if op == '==': + if istext(left) and istext(right): + explanation = _diff_text(left, right) + elif issequence(left) and issequence(right): + explanation = _compare_eq_sequence(left, right) + elif isset(left) and isset(right): + explanation = _compare_eq_set(left, right) + elif isdict(left) and isdict(right): + explanation = _diff_text(py.std.pprint.pformat(left), + py.std.pprint.pformat(right)) + elif op == 'not in': + if istext(left) and istext(right): + explanation = _notin_text(left, right) + except py.builtin._sysex: + raise + except: + excinfo = py.code.ExceptionInfo() + explanation = ['(pytest_assertion plugin: representation of ' + 'details failed. Probably an object has a faulty __repr__.)', + str(excinfo) + ] + + + if not explanation: + return None + + # Don't include pageloads of data, should be configurable + if len(''.join(explanation)) > 80*8: + explanation = ['Detailed information too verbose, truncated'] + + return [summary] + explanation + + +def _diff_text(left, right): + """Return the explanation for the diff between text + + This will skip leading and trailing characters which are + identical to keep the diff minimal. + """ + explanation = [] + i = 0 # just in case left or right has zero length + for i in range(min(len(left), len(right))): + if left[i] != right[i]: + break + if i > 42: + i -= 10 # Provide some context + explanation = ['Skipping %s identical ' + 'leading characters in diff' % i] + left = left[i:] + right = right[i:] + if len(left) == len(right): + for i in range(len(left)): + if left[-i] != right[-i]: + break + if i > 42: + i -= 10 # Provide some context + explanation += ['Skipping %s identical ' + 'trailing characters in diff' % i] + left = left[:-i] + right = right[:-i] + explanation += [line.strip('\n') + for line in py.std.difflib.ndiff(left.splitlines(), + right.splitlines())] + return explanation + + +def _compare_eq_sequence(left, right): + explanation = [] + for i in range(min(len(left), len(right))): + if left[i] != right[i]: + explanation += ['At index %s diff: %r != %r' % + (i, left[i], right[i])] + break + if len(left) > len(right): + explanation += ['Left contains more items, ' + 'first extra item: %s' % py.io.saferepr(left[len(right)],)] + elif len(left) < len(right): + explanation += ['Right contains more items, ' + 'first extra item: %s' % py.io.saferepr(right[len(left)],)] + return explanation # + _diff_text(py.std.pprint.pformat(left), + # py.std.pprint.pformat(right)) + + +def _compare_eq_set(left, right): + explanation = [] + diff_left = left - right + diff_right = right - left + if diff_left: + explanation.append('Extra items in the left set:') + for item in diff_left: + explanation.append(py.io.saferepr(item)) + if diff_right: + explanation.append('Extra items in the right set:') + for item in diff_right: + explanation.append(py.io.saferepr(item)) + return explanation + + +def _notin_text(term, text): + index = text.find(term) + head = text[:index] + tail = text[index+len(term):] + correct_text = head + tail + diff = _diff_text(correct_text, text) + newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)] + for line in diff: + if line.startswith('Skipping'): + continue + if line.startswith('- '): + continue + if line.startswith('+ '): + newdiff.append(' ' + line[2:]) + else: + newdiff.append(line) + return newdiff diff --git a/_pytest/config.py b/_pytest/config.py --- a/_pytest/config.py +++ b/_pytest/config.py @@ -12,6 +12,10 @@ config.trace.root.setwriter(sys.stderr.write) return config +def pytest_unconfigure(config): + for func in config._cleanup: + func() + class Parser: """ Parser for command line arguments. """ @@ -251,7 +255,8 @@ self._conftest = Conftest(onimport=self._onimportconftest) self.hook = self.pluginmanager.hook self._inicache = {} - + self._cleanup = [] + @classmethod def fromdictargs(cls, option_dict, args): """ constructor useable for subprocesses. """ diff --git a/_pytest/core.py b/_pytest/core.py --- a/_pytest/core.py +++ b/_pytest/core.py @@ -265,8 +265,15 @@ config.hook.pytest_unconfigure(config=config) config.pluginmanager.unregister(self) - def notify_exception(self, excinfo): - excrepr = excinfo.getrepr(funcargs=True, showlocals=True) + def notify_exception(self, excinfo, option=None): + if option and option.fulltrace: + style = "long" + else: + style = "native" + excrepr = excinfo.getrepr(funcargs=True, + showlocals=getattr(option, 'showlocals', False), + style=style, + ) res = self.hook.pytest_internalerror(excrepr=excrepr) if not py.builtin.any(res): for line in str(excrepr).split("\n"): diff --git a/_pytest/doctest.py b/_pytest/doctest.py --- a/_pytest/doctest.py +++ b/_pytest/doctest.py @@ -59,7 +59,7 @@ inner_excinfo = py.code.ExceptionInfo(excinfo.value.exc_info) lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)] - + lines += py.std.traceback.format_exception(*excinfo.value.exc_info) return ReprFailDoctest(reprlocation, lines) else: return super(DoctestItem, self).repr_failure(excinfo) diff --git a/_pytest/helpconfig.py b/_pytest/helpconfig.py --- a/_pytest/helpconfig.py +++ b/_pytest/helpconfig.py @@ -16,9 +16,6 @@ group.addoption('--traceconfig', action="store_true", dest="traceconfig", default=False, help="trace considerations of conftest.py files."), - group._addoption('--nomagic', - action="store_true", dest="nomagic", default=False, - help="don't reinterpret asserts, no traceback cutting. ") group.addoption('--debug', action="store_true", dest="debug", default=False, help="generate and show internal debugging information.") diff --git a/_pytest/junitxml.py b/_pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -5,8 +5,42 @@ import py import os +import re +import sys import time + +# Python 2.X and 3.X compatibility +try: + unichr(65) +except NameError: + unichr = chr +try: + unicode('A') +except NameError: + unicode = str +try: + long(1) +except NameError: + long = int + + +# We need to get the subset of the invalid unicode ranges according to +# XML 1.0 which are valid in this python build. Hence we calculate +# this dynamically instead of hardcoding it. The spec range of valid +# chars is: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] +# | [#x10000-#x10FFFF] +_illegal_unichrs = [(0x00, 0x08), (0x0B, 0x0C), (0x0E, 0x19), + (0xD800, 0xDFFF), (0xFDD0, 0xFFFF)] +_illegal_ranges = [unicode("%s-%s") % (unichr(low), unichr(high)) + for (low, high) in _illegal_unichrs + if low < sys.maxunicode] +illegal_xml_re = re.compile(unicode('[%s]') % + unicode('').join(_illegal_ranges)) +del _illegal_unichrs +del _illegal_ranges + + def pytest_addoption(parser): group = parser.getgroup("terminal reporting") group.addoption('--junitxml', action="store", dest="xmlpath", @@ -28,9 +62,11 @@ del config._xml config.pluginmanager.unregister(xml) + class LogXML(object): def __init__(self, logfile, prefix): - self.logfile = logfile + logfile = os.path.expanduser(os.path.expandvars(logfile)) + self.logfile = os.path.normpath(logfile) self.prefix = prefix self.test_logs = [] self.passed = self.skipped = 0 @@ -41,7 +77,7 @@ names = report.nodeid.split("::") names[0] = names[0].replace("/", '.') names = tuple(names) - d = {'time': self._durations.pop(names, "0")} + d = {'time': self._durations.pop(report.nodeid, "0")} names = [x.replace(".py", "") for x in names if x != "()"] classnames = names[:-1] if self.prefix: @@ -55,7 +91,14 @@ self.test_logs.append("") def appendlog(self, fmt, *args): - args = tuple([py.xml.escape(arg) for arg in args]) + def repl(matchobj): + i = ord(matchobj.group()) + if i <= 0xFF: + return unicode('#x%02X') % i + else: + return unicode('#x%04X') % i + args = tuple([illegal_xml_re.sub(repl, py.xml.escape(arg)) + for arg in args]) self.test_logs.append(fmt % args) def append_pass(self, report): @@ -128,12 +171,11 @@ self.append_skipped(report) def pytest_runtest_call(self, item, __multicall__): - names = tuple(item.listnames()) start = time.time() try: return __multicall__.execute() finally: - self._durations[names] = time.time() - start + self._durations[item.nodeid] = time.time() - start def pytest_collectreport(self, report): if not report.passed: diff --git a/_pytest/main.py b/_pytest/main.py --- a/_pytest/main.py +++ b/_pytest/main.py @@ -46,23 +46,25 @@ def pytest_namespace(): - return dict(collect=dict(Item=Item, Collector=Collector, File=File)) + collect = dict(Item=Item, Collector=Collector, File=File, Session=Session) + return dict(collect=collect) def pytest_configure(config): py.test.config = config # compatibiltiy if config.option.exitfirst: config.option.maxfail = 1 -def pytest_cmdline_main(config): - """ default command line protocol for initialization, session, - running tests and reporting. """ +def wrap_session(config, doit): + """Skeleton command line program""" session = Session(config) session.exitstatus = EXIT_OK + initstate = 0 try: config.pluginmanager.do_configure(config) + initstate = 1 config.hook.pytest_sessionstart(session=session) - config.hook.pytest_collection(session=session) - config.hook.pytest_runtestloop(session=session) + initstate = 2 + doit(config, session) except pytest.UsageError: raise except KeyboardInterrupt: @@ -71,24 +73,30 @@ session.exitstatus = EXIT_INTERRUPTED except: excinfo = py.code.ExceptionInfo() - config.pluginmanager.notify_exception(excinfo) + config.pluginmanager.notify_exception(excinfo, config.option) session.exitstatus = EXIT_INTERNALERROR if excinfo.errisinstance(SystemExit): sys.stderr.write("mainloop: caught Spurious SystemExit!\n") if not session.exitstatus and session._testsfailed: session.exitstatus = EXIT_TESTSFAILED - config.hook.pytest_sessionfinish(session=session, - exitstatus=session.exitstatus) - config.pluginmanager.do_unconfigure(config) + if initstate >= 2: + config.hook.pytest_sessionfinish(session=session, + exitstatus=session.exitstatus) + if initstate >= 1: + config.pluginmanager.do_unconfigure(config) return session.exitstatus +def pytest_cmdline_main(config): + return wrap_session(config, _main) + +def _main(config, session): + """ default command line protocol for initialization, session, + running tests and reporting. """ + config.hook.pytest_collection(session=session) + config.hook.pytest_runtestloop(session=session) + def pytest_collection(session): - session.perform_collect() - hook = session.config.hook - hook.pytest_collection_modifyitems(session=session, - config=session.config, items=session.items) - hook.pytest_collection_finish(session=session) - return True + return session.perform_collect() def pytest_runtestloop(session): if session.config.option.collectonly: @@ -374,6 +382,16 @@ return HookProxy(fspath, self.config) def perform_collect(self, args=None, genitems=True): + hook = self.config.hook + try: + items = self._perform_collect(args, genitems) + hook.pytest_collection_modifyitems(session=self, + config=self.config, items=items) + finally: + hook.pytest_collection_finish(session=self) + return items + + def _perform_collect(self, args, genitems): if args is None: args = self.config.args self.trace("perform_collect", self, args) diff --git a/_pytest/mark.py b/_pytest/mark.py --- a/_pytest/mark.py +++ b/_pytest/mark.py @@ -153,7 +153,7 @@ def __repr__(self): return "" % ( - self._name, self.args, self.kwargs) + self.name, self.args, self.kwargs) def pytest_itemcollected(item): if not isinstance(item, pytest.Function): diff --git a/_pytest/pytester.py b/_pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -6,7 +6,7 @@ import inspect import time from fnmatch import fnmatch -from _pytest.main import Session +from _pytest.main import Session, EXIT_OK from py.builtin import print_ from _pytest.core import HookRelay @@ -236,13 +236,14 @@ def _makefile(self, ext, args, kwargs): items = list(kwargs.items()) if args: - source = "\n".join(map(str, args)) + "\n" + source = py.builtin._totext("\n").join( + map(py.builtin._totext, args)) + py.builtin._totext("\n") basename = self.request.function.__name__ items.insert(0, (basename, source)) ret = None for name, value in items: p = self.tmpdir.join(name).new(ext=ext) - source = str(py.code.Source(value)).lstrip() + source = py.builtin._totext(py.code.Source(value)).lstrip() p.write(source.encode("utf-8"), "wb") if ret is None: ret = p @@ -291,13 +292,19 @@ assert '::' not in str(arg) p = py.path.local(arg) x = session.fspath.bestrelpath(p) - return session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) + return res def getpathnode(self, path): - config = self.parseconfig(path) + config = self.parseconfigure(path) session = Session(config) x = session.fspath.bestrelpath(path) - return session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) + return res def genitems(self, colitems): session = colitems[0].session @@ -311,7 +318,9 @@ config = self.parseconfigure(*args) rec = self.getreportrecorder(config) session = Session(config) + config.hook.pytest_sessionstart(session=session) session.perform_collect() + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) return session.items, rec def runitem(self, source): @@ -381,6 +390,8 @@ c.basetemp = py.path.local.make_numbered_dir(prefix="reparse", keep=0, rootdir=self.tmpdir, lock_timeout=None) c.parse(args) + c.pluginmanager.do_configure(c) + self.request.addfinalizer(lambda: c.pluginmanager.do_unconfigure(c)) return c finally: py.test.config = oldconfig diff --git a/_pytest/python.py b/_pytest/python.py --- a/_pytest/python.py +++ b/_pytest/python.py @@ -226,8 +226,13 @@ def _importtestmodule(self): # we assume we are only called once per module + from _pytest import assertion + assertion.before_module_import(self) try: - mod = self.fspath.pyimport(ensuresyspath=True) + try: + mod = self.fspath.pyimport(ensuresyspath=True) + finally: + assertion.after_module_import(self) except SyntaxError: excinfo = py.code.ExceptionInfo() raise self.CollectError(excinfo.getrepr(style="short")) @@ -374,7 +379,7 @@ # test generators are seen as collectors but they also # invoke setup/teardown on popular request # (induced by the common "test_*" naming shared with normal tests) - self.config._setupstate.prepare(self) + self.session._setupstate.prepare(self) # see FunctionMixin.setup and test_setupstate_is_preserved_134 self._preservedparent = self.parent.obj l = [] @@ -721,7 +726,7 @@ def _addfinalizer(self, finalizer, scope): colitem = self._getscopeitem(scope) - self.config._setupstate.addfinalizer( + self._pyfuncitem.session._setupstate.addfinalizer( finalizer=finalizer, colitem=colitem) def __repr__(self): @@ -742,8 +747,10 @@ raise self.LookupError(msg) def showfuncargs(config): - from _pytest.main import Session - session = Session(config) + from _pytest.main import wrap_session + return wrap_session(config, _showfuncargs_main) + +def _showfuncargs_main(config, session): session.perform_collect() if session.items: plugins = session.items[0].getplugins() diff --git a/_pytest/resultlog.py b/_pytest/resultlog.py --- a/_pytest/resultlog.py +++ b/_pytest/resultlog.py @@ -74,7 +74,7 @@ elif report.failed: longrepr = str(report.longrepr) elif report.skipped: - longrepr = str(report.longrepr) + longrepr = str(report.longrepr[2]) self.log_outcome(report, code, longrepr) def pytest_collectreport(self, report): diff --git a/_pytest/runner.py b/_pytest/runner.py --- a/_pytest/runner.py +++ b/_pytest/runner.py @@ -14,17 +14,15 @@ # # pytest plugin hooks -# XXX move to pytest_sessionstart and fix py.test owns tests -def pytest_configure(config): - config._setupstate = SetupState() +def pytest_sessionstart(session): + session._setupstate = SetupState() def pytest_sessionfinish(session, exitstatus): - if hasattr(session.config, '_setupstate'): - hook = session.config.hook - rep = hook.pytest__teardown_final(session=session) - if rep: - hook.pytest__teardown_final_logerror(session=session, report=rep) - session.exitstatus = 1 + hook = session.config.hook + rep = hook.pytest__teardown_final(session=session) + if rep: + hook.pytest__teardown_final_logerror(session=session, report=rep) + session.exitstatus = 1 class NodeInfo: def __init__(self, location): @@ -46,16 +44,16 @@ return reports def pytest_runtest_setup(item): - item.config._setupstate.prepare(item) + item.session._setupstate.prepare(item) def pytest_runtest_call(item): item.runtest() def pytest_runtest_teardown(item): - item.config._setupstate.teardown_exact(item) + item.session._setupstate.teardown_exact(item) def pytest__teardown_final(session): - call = CallInfo(session.config._setupstate.teardown_all, when="teardown") + call = CallInfo(session._setupstate.teardown_all, when="teardown") if call.excinfo: ntraceback = call.excinfo.traceback .cut(excludepath=py._pydir) call.excinfo.traceback = ntraceback.filter() diff --git a/_pytest/tmpdir.py b/_pytest/tmpdir.py --- a/_pytest/tmpdir.py +++ b/_pytest/tmpdir.py @@ -48,15 +48,12 @@ self.trace("finish") def pytest_configure(config): - config._mp = mp = monkeypatch() + mp = monkeypatch() t = TempdirHandler(config) + config._cleanup.extend([mp.undo, t.finish]) mp.setattr(config, '_tmpdirhandler', t, raising=False) mp.setattr(pytest, 'ensuretemp', t.ensuretemp, raising=False) -def pytest_unconfigure(config): - config._tmpdirhandler.finish() - config._mp.undo() - def pytest_funcarg__tmpdir(request): """return a temporary directory path object which is unique to each test function invocation, diff --git a/lib-python/2.7.0/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py rename from lib-python/2.7.0/BaseHTTPServer.py rename to lib-python/2.7/BaseHTTPServer.py diff --git a/lib-python/2.7.0/Bastion.py b/lib-python/2.7/Bastion.py rename from lib-python/2.7.0/Bastion.py rename to lib-python/2.7/Bastion.py diff --git a/lib-python/2.7.0/CGIHTTPServer.py b/lib-python/2.7/CGIHTTPServer.py rename from lib-python/2.7.0/CGIHTTPServer.py rename to lib-python/2.7/CGIHTTPServer.py diff --git a/lib-python/2.7.0/ConfigParser.py b/lib-python/2.7/ConfigParser.py rename from lib-python/2.7.0/ConfigParser.py rename to lib-python/2.7/ConfigParser.py diff --git a/lib-python/2.7.0/Cookie.py b/lib-python/2.7/Cookie.py rename from lib-python/2.7.0/Cookie.py rename to lib-python/2.7/Cookie.py diff --git a/lib-python/2.7.0/DocXMLRPCServer.py b/lib-python/2.7/DocXMLRPCServer.py rename from lib-python/2.7.0/DocXMLRPCServer.py rename to lib-python/2.7/DocXMLRPCServer.py diff --git a/lib-python/2.7.0/HTMLParser.py b/lib-python/2.7/HTMLParser.py rename from lib-python/2.7.0/HTMLParser.py rename to lib-python/2.7/HTMLParser.py diff --git a/lib-python/2.7.0/MimeWriter.py b/lib-python/2.7/MimeWriter.py rename from lib-python/2.7.0/MimeWriter.py rename to lib-python/2.7/MimeWriter.py diff --git a/lib-python/2.7.0/Queue.py b/lib-python/2.7/Queue.py rename from lib-python/2.7.0/Queue.py rename to lib-python/2.7/Queue.py diff --git a/lib-python/2.7.0/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py rename from lib-python/2.7.0/SimpleHTTPServer.py rename to lib-python/2.7/SimpleHTTPServer.py diff --git a/lib-python/2.7.0/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py rename from lib-python/2.7.0/SimpleXMLRPCServer.py rename to lib-python/2.7/SimpleXMLRPCServer.py diff --git a/lib-python/2.7.0/SocketServer.py b/lib-python/2.7/SocketServer.py rename from lib-python/2.7.0/SocketServer.py rename to lib-python/2.7/SocketServer.py diff --git a/lib-python/2.7.0/StringIO.py b/lib-python/2.7/StringIO.py rename from lib-python/2.7.0/StringIO.py rename to lib-python/2.7/StringIO.py diff --git a/lib-python/2.7.0/UserDict.py b/lib-python/2.7/UserDict.py rename from lib-python/2.7.0/UserDict.py rename to lib-python/2.7/UserDict.py diff --git a/lib-python/2.7.0/UserList.py b/lib-python/2.7/UserList.py rename from lib-python/2.7.0/UserList.py rename to lib-python/2.7/UserList.py diff --git a/lib-python/2.7.0/UserString.py b/lib-python/2.7/UserString.py rename from lib-python/2.7.0/UserString.py rename to lib-python/2.7/UserString.py diff --git a/lib-python/2.7.0/_LWPCookieJar.py b/lib-python/2.7/_LWPCookieJar.py rename from lib-python/2.7.0/_LWPCookieJar.py rename to lib-python/2.7/_LWPCookieJar.py diff --git a/lib-python/2.7.0/_MozillaCookieJar.py b/lib-python/2.7/_MozillaCookieJar.py rename from lib-python/2.7.0/_MozillaCookieJar.py rename to lib-python/2.7/_MozillaCookieJar.py diff --git a/lib-python/2.7.0/__future__.py b/lib-python/2.7/__future__.py rename from lib-python/2.7.0/__future__.py rename to lib-python/2.7/__future__.py diff --git a/lib-python/2.7.0/__phello__.foo.py b/lib-python/2.7/__phello__.foo.py rename from lib-python/2.7.0/__phello__.foo.py rename to lib-python/2.7/__phello__.foo.py diff --git a/lib-python/2.7.0/_abcoll.py b/lib-python/2.7/_abcoll.py rename from lib-python/2.7.0/_abcoll.py rename to lib-python/2.7/_abcoll.py diff --git a/lib-python/2.7.0/_pyio.py b/lib-python/2.7/_pyio.py rename from lib-python/2.7.0/_pyio.py rename to lib-python/2.7/_pyio.py diff --git a/lib-python/2.7.0/_strptime.py b/lib-python/2.7/_strptime.py rename from lib-python/2.7.0/_strptime.py rename to lib-python/2.7/_strptime.py diff --git a/lib-python/2.7.0/_threading_local.py b/lib-python/2.7/_threading_local.py rename from lib-python/2.7.0/_threading_local.py rename to lib-python/2.7/_threading_local.py diff --git a/lib-python/2.7.0/_weakrefset.py b/lib-python/2.7/_weakrefset.py rename from lib-python/2.7.0/_weakrefset.py rename to lib-python/2.7/_weakrefset.py diff --git a/lib-python/2.7.0/abc.py b/lib-python/2.7/abc.py rename from lib-python/2.7.0/abc.py rename to lib-python/2.7/abc.py diff --git a/lib-python/2.7.0/aifc.py b/lib-python/2.7/aifc.py rename from lib-python/2.7.0/aifc.py rename to lib-python/2.7/aifc.py diff --git a/lib-python/2.7.0/antigravity.py b/lib-python/2.7/antigravity.py rename from lib-python/2.7.0/antigravity.py rename to lib-python/2.7/antigravity.py diff --git a/lib-python/2.7.0/anydbm.py b/lib-python/2.7/anydbm.py rename from lib-python/2.7.0/anydbm.py rename to lib-python/2.7/anydbm.py diff --git a/lib-python/2.7.0/argparse.py b/lib-python/2.7/argparse.py rename from lib-python/2.7.0/argparse.py rename to lib-python/2.7/argparse.py diff --git a/lib-python/2.7.0/ast.py b/lib-python/2.7/ast.py rename from lib-python/2.7.0/ast.py rename to lib-python/2.7/ast.py diff --git a/lib-python/2.7.0/asynchat.py b/lib-python/2.7/asynchat.py rename from lib-python/2.7.0/asynchat.py rename to lib-python/2.7/asynchat.py diff --git a/lib-python/2.7.0/asyncore.py b/lib-python/2.7/asyncore.py rename from lib-python/2.7.0/asyncore.py rename to lib-python/2.7/asyncore.py diff --git a/lib-python/2.7.0/atexit.py b/lib-python/2.7/atexit.py rename from lib-python/2.7.0/atexit.py rename to lib-python/2.7/atexit.py diff --git a/lib-python/2.7.0/audiodev.py b/lib-python/2.7/audiodev.py rename from lib-python/2.7.0/audiodev.py rename to lib-python/2.7/audiodev.py diff --git a/lib-python/2.7.0/base64.py b/lib-python/2.7/base64.py rename from lib-python/2.7.0/base64.py rename to lib-python/2.7/base64.py diff --git a/lib-python/2.7.0/bdb.py b/lib-python/2.7/bdb.py rename from lib-python/2.7.0/bdb.py rename to lib-python/2.7/bdb.py diff --git a/lib-python/2.7.0/binhex.py b/lib-python/2.7/binhex.py rename from lib-python/2.7.0/binhex.py rename to lib-python/2.7/binhex.py diff --git a/lib-python/2.7.0/bisect.py b/lib-python/2.7/bisect.py rename from lib-python/2.7.0/bisect.py rename to lib-python/2.7/bisect.py diff --git a/lib-python/2.7.0/bsddb/__init__.py b/lib-python/2.7/bsddb/__init__.py rename from lib-python/2.7.0/bsddb/__init__.py rename to lib-python/2.7/bsddb/__init__.py diff --git a/lib-python/2.7.0/bsddb/db.py b/lib-python/2.7/bsddb/db.py rename from lib-python/2.7.0/bsddb/db.py rename to lib-python/2.7/bsddb/db.py diff --git a/lib-python/2.7.0/bsddb/dbobj.py b/lib-python/2.7/bsddb/dbobj.py rename from lib-python/2.7.0/bsddb/dbobj.py rename to lib-python/2.7/bsddb/dbobj.py diff --git a/lib-python/2.7.0/bsddb/dbrecio.py b/lib-python/2.7/bsddb/dbrecio.py rename from lib-python/2.7.0/bsddb/dbrecio.py rename to lib-python/2.7/bsddb/dbrecio.py diff --git a/lib-python/2.7.0/bsddb/dbshelve.py b/lib-python/2.7/bsddb/dbshelve.py rename from lib-python/2.7.0/bsddb/dbshelve.py rename to lib-python/2.7/bsddb/dbshelve.py diff --git a/lib-python/2.7.0/bsddb/dbtables.py b/lib-python/2.7/bsddb/dbtables.py rename from lib-python/2.7.0/bsddb/dbtables.py rename to lib-python/2.7/bsddb/dbtables.py --- a/lib-python/2.7.0/bsddb/dbtables.py +++ b/lib-python/2.7/bsddb/dbtables.py @@ -15,7 +15,7 @@ # This provides a simple database table interface built on top of # the Python Berkeley DB 3 interface. # -_cvsid = '$Id: dbtables.py 79285 2010-03-22 14:22:26Z jesus.cea $' +_cvsid = '$Id$' import re import sys diff --git a/lib-python/2.7.0/bsddb/dbutils.py b/lib-python/2.7/bsddb/dbutils.py rename from lib-python/2.7.0/bsddb/dbutils.py rename to lib-python/2.7/bsddb/dbutils.py diff --git a/lib-python/2.7.0/bsddb/test/__init__.py b/lib-python/2.7/bsddb/test/__init__.py rename from lib-python/2.7.0/bsddb/test/__init__.py rename to lib-python/2.7/bsddb/test/__init__.py diff --git a/lib-python/2.7.0/bsddb/test/test_all.py b/lib-python/2.7/bsddb/test/test_all.py rename from lib-python/2.7.0/bsddb/test/test_all.py rename to lib-python/2.7/bsddb/test/test_all.py diff --git a/lib-python/2.7.0/bsddb/test/test_associate.py b/lib-python/2.7/bsddb/test/test_associate.py rename from lib-python/2.7.0/bsddb/test/test_associate.py rename to lib-python/2.7/bsddb/test/test_associate.py --- a/lib-python/2.7.0/bsddb/test/test_associate.py +++ b/lib-python/2.7/bsddb/test/test_associate.py @@ -233,7 +233,7 @@ self.assertEqual(vals, None, vals) vals = secDB.pget('Unknown', txn=txn) - self.assert_(vals[0] == 99 or vals[0] == '99', vals) + self.assertTrue(vals[0] == 99 or vals[0] == '99', vals) vals[1].index('Unknown') vals[1].index('Unnamed') vals[1].index('unknown') @@ -245,9 +245,9 @@ rec = self.cur.first() while rec is not None: if type(self.keytype) == type(''): - self.assert_(int(rec[0])) # for primary db, key is a number + self.assertTrue(int(rec[0])) # for primary db, key is a number else: - self.assert_(rec[0] and type(rec[0]) == type(0)) + self.assertTrue(rec[0] and type(rec[0]) == type(0)) count = count + 1 if verbose: print rec @@ -262,7 +262,7 @@ # test cursor pget vals = self.cur.pget('Unknown', flags=db.DB_LAST) - self.assert_(vals[1] == 99 or vals[1] == '99', vals) + self.assertTrue(vals[1] == 99 or vals[1] == '99', vals) self.assertEqual(vals[0], 'Unknown') vals[2].index('Unknown') vals[2].index('Unnamed') diff --git a/lib-python/2.7.0/bsddb/test/test_basics.py b/lib-python/2.7/bsddb/test/test_basics.py rename from lib-python/2.7.0/bsddb/test/test_basics.py rename to lib-python/2.7/bsddb/test/test_basics.py --- a/lib-python/2.7.0/bsddb/test/test_basics.py +++ b/lib-python/2.7/bsddb/test/test_basics.py @@ -612,7 +612,7 @@ d.put("abcde", "ABCDE"); num = d.truncate() - self.assert_(num >= 1, "truncate returned <= 0 on non-empty database") + self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate() self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) @@ -631,9 +631,9 @@ if db.version() >= (4, 6): def test08_exists(self) : self.d.put("abcde", "ABCDE") - self.assert_(self.d.exists("abcde") == True, + self.assertTrue(self.d.exists("abcde") == True, "DB->exists() returns wrong value") - self.assert_(self.d.exists("x") == False, + self.assertTrue(self.d.exists("x") == False, "DB->exists() returns wrong value") #---------------------------------------- @@ -806,9 +806,9 @@ self.d.put("abcde", "ABCDE", txn=txn) txn.commit() txn = self.env.txn_begin() - self.assert_(self.d.exists("abcde", txn=txn) == True, + self.assertTrue(self.d.exists("abcde", txn=txn) == True, "DB->exists() returns wrong value") - self.assert_(self.d.exists("x", txn=txn) == False, + self.assertTrue(self.d.exists("x", txn=txn) == False, "DB->exists() returns wrong value") txn.abort() @@ -823,7 +823,7 @@ d.put("abcde", "ABCDE"); txn = self.env.txn_begin() num = d.truncate(txn) - self.assert_(num >= 1, "truncate returned <= 0 on non-empty database") + self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate(txn) self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) diff --git a/lib-python/2.7.0/bsddb/test/test_compare.py b/lib-python/2.7/bsddb/test/test_compare.py rename from lib-python/2.7.0/bsddb/test/test_compare.py rename to lib-python/2.7/bsddb/test/test_compare.py diff --git a/lib-python/2.7.0/bsddb/test/test_compat.py b/lib-python/2.7/bsddb/test/test_compat.py rename from lib-python/2.7.0/bsddb/test/test_compat.py rename to lib-python/2.7/bsddb/test/test_compat.py --- a/lib-python/2.7.0/bsddb/test/test_compat.py +++ b/lib-python/2.7/bsddb/test/test_compat.py @@ -119,7 +119,7 @@ if verbose: print rec - self.assert_(f.has_key('f'), 'Error, missing key!') + self.assertTrue(f.has_key('f'), 'Error, missing key!') # test that set_location() returns the next nearest key, value # on btree databases and raises KeyError on others. diff --git a/lib-python/2.7.0/bsddb/test/test_cursor_pget_bug.py b/lib-python/2.7/bsddb/test/test_cursor_pget_bug.py rename from lib-python/2.7.0/bsddb/test/test_cursor_pget_bug.py rename to lib-python/2.7/bsddb/test/test_cursor_pget_bug.py --- a/lib-python/2.7.0/bsddb/test/test_cursor_pget_bug.py +++ b/lib-python/2.7/bsddb/test/test_cursor_pget_bug.py @@ -37,12 +37,12 @@ def test_pget(self): cursor = self.secondary_db.cursor() - self.assertEquals(('eggs', 'salad', 'eggs'), cursor.pget(key='eggs', flags=db.DB_SET)) - self.assertEquals(('eggs', 'omelet', 'eggs'), cursor.pget(db.DB_NEXT_DUP)) - self.assertEquals(None, cursor.pget(db.DB_NEXT_DUP)) + self.assertEqual(('eggs', 'salad', 'eggs'), cursor.pget(key='eggs', flags=db.DB_SET)) + self.assertEqual(('eggs', 'omelet', 'eggs'), cursor.pget(db.DB_NEXT_DUP)) + self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) - self.assertEquals(('ham', 'spam', 'ham'), cursor.pget('ham', 'spam', flags=db.DB_SET)) - self.assertEquals(None, cursor.pget(db.DB_NEXT_DUP)) + self.assertEqual(('ham', 'spam', 'ham'), cursor.pget('ham', 'spam', flags=db.DB_SET)) + self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) cursor.close() diff --git a/lib-python/2.7.0/bsddb/test/test_db.py b/lib-python/2.7/bsddb/test/test_db.py rename from lib-python/2.7.0/bsddb/test/test_db.py rename to lib-python/2.7/bsddb/test/test_db.py diff --git a/lib-python/2.7.0/bsddb/test/test_dbenv.py b/lib-python/2.7/bsddb/test/test_dbenv.py rename from lib-python/2.7.0/bsddb/test/test_dbenv.py rename to lib-python/2.7/bsddb/test/test_dbenv.py diff --git a/lib-python/2.7.0/bsddb/test/test_dbobj.py b/lib-python/2.7/bsddb/test/test_dbobj.py rename from lib-python/2.7.0/bsddb/test/test_dbobj.py rename to lib-python/2.7/bsddb/test/test_dbobj.py diff --git a/lib-python/2.7.0/bsddb/test/test_dbshelve.py b/lib-python/2.7/bsddb/test/test_dbshelve.py rename from lib-python/2.7.0/bsddb/test/test_dbshelve.py rename to lib-python/2.7/bsddb/test/test_dbshelve.py --- a/lib-python/2.7.0/bsddb/test/test_dbshelve.py +++ b/lib-python/2.7/bsddb/test/test_dbshelve.py @@ -255,7 +255,7 @@ self.assertEqual(value.L, [x] * 10) else: - self.assert_(0, 'Unknown key type, fix the test') + self.assertTrue(0, 'Unknown key type, fix the test') #---------------------------------------------------------------------- diff --git a/lib-python/2.7.0/bsddb/test/test_dbtables.py b/lib-python/2.7/bsddb/test/test_dbtables.py rename from lib-python/2.7.0/bsddb/test/test_dbtables.py rename to lib-python/2.7/bsddb/test/test_dbtables.py --- a/lib-python/2.7.0/bsddb/test/test_dbtables.py +++ b/lib-python/2.7/bsddb/test/test_dbtables.py @@ -18,7 +18,7 @@ # # -- Gregory P. Smith # -# $Id: test_dbtables.py 79285 2010-03-22 14:22:26Z jesus.cea $ +# $Id$ import os, re, sys @@ -84,8 +84,8 @@ colval = pickle.loads(values[0][colname]) else : colval = pickle.loads(bytes(values[0][colname], "iso8859-1")) - self.assert_(colval > 3.141) - self.assert_(colval < 3.142) + self.assertTrue(colval > 3.141) + self.assertTrue(colval < 3.142) def test02(self): diff --git a/lib-python/2.7.0/bsddb/test/test_distributed_transactions.py b/lib-python/2.7/bsddb/test/test_distributed_transactions.py rename from lib-python/2.7.0/bsddb/test/test_distributed_transactions.py rename to lib-python/2.7/bsddb/test/test_distributed_transactions.py --- a/lib-python/2.7.0/bsddb/test/test_distributed_transactions.py +++ b/lib-python/2.7/bsddb/test/test_distributed_transactions.py @@ -88,9 +88,9 @@ # Get "to be recovered" transactions but # let them be garbage collected. recovered_txns=self.dbenv.txn_recover() - self.assertEquals(self.num_txns,len(recovered_txns)) + self.assertEqual(self.num_txns,len(recovered_txns)) for gid,txn in recovered_txns : - self.assert_(gid in txns) + self.assertTrue(gid in txns) del txn del recovered_txns @@ -99,7 +99,7 @@ # Get "to be recovered" transactions. Commit, abort and # discard them. recovered_txns=self.dbenv.txn_recover() - self.assertEquals(self.num_txns,len(recovered_txns)) + self.assertEqual(self.num_txns,len(recovered_txns)) discard_txns=set() committed_txns=set() state=0 @@ -122,7 +122,7 @@ # Verify the discarded transactions are still # around, and dispose them. recovered_txns=self.dbenv.txn_recover() - self.assertEquals(len(discard_txns),len(recovered_txns)) + self.assertEqual(len(discard_txns),len(recovered_txns)) for gid,txn in recovered_txns : txn.abort() del txn @@ -133,8 +133,8 @@ # Be sure there are not pending transactions. # Check also database size. recovered_txns=self.dbenv.txn_recover() - self.assert_(len(recovered_txns)==0) - self.assertEquals(len(committed_txns),self.db.stat()["nkeys"]) + self.assertTrue(len(recovered_txns)==0) + self.assertEqual(len(committed_txns),self.db.stat()["nkeys"]) class DBTxn_distributedSYNC(DBTxn_distributed): nosync=False diff --git a/lib-python/2.7.0/bsddb/test/test_early_close.py b/lib-python/2.7/bsddb/test/test_early_close.py rename from lib-python/2.7.0/bsddb/test/test_early_close.py rename to lib-python/2.7/bsddb/test/test_early_close.py --- a/lib-python/2.7.0/bsddb/test/test_early_close.py +++ b/lib-python/2.7/bsddb/test/test_early_close.py @@ -162,7 +162,7 @@ txn = dbenv.txn_begin() c1 = d.cursor(txn) c2 = c1.dup() - self.assertEquals(("XXX", "yyy"), c1.first()) + self.assertEqual(("XXX", "yyy"), c1.first()) # Not interested in warnings about implicit close. import warnings diff --git a/lib-python/2.7.0/bsddb/test/test_fileid.py b/lib-python/2.7/bsddb/test/test_fileid.py rename from lib-python/2.7.0/bsddb/test/test_fileid.py rename to lib-python/2.7/bsddb/test/test_fileid.py --- a/lib-python/2.7.0/bsddb/test/test_fileid.py +++ b/lib-python/2.7/bsddb/test/test_fileid.py @@ -35,11 +35,11 @@ self.db1 = db.DB(self.db_env) self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=db.DB_RDONLY) - self.assertEquals(self.db1.get('spam'), 'eggs') + self.assertEqual(self.db1.get('spam'), 'eggs') self.db2 = db.DB(self.db_env) self.db2.open(self.db_path_2, dbtype=db.DB_HASH, flags=db.DB_RDONLY) - self.assertEquals(self.db2.get('spam'), 'spam') + self.assertEqual(self.db2.get('spam'), 'spam') self.db1.close() self.db2.close() diff --git a/lib-python/2.7.0/bsddb/test/test_get_none.py b/lib-python/2.7/bsddb/test/test_get_none.py rename from lib-python/2.7.0/bsddb/test/test_get_none.py rename to lib-python/2.7/bsddb/test/test_get_none.py --- a/lib-python/2.7.0/bsddb/test/test_get_none.py +++ b/lib-python/2.7/bsddb/test/test_get_none.py @@ -76,7 +76,7 @@ break self.assertNotEqual(rec, None) - self.assert_(exceptionHappened) + self.assertTrue(exceptionHappened) self.assertEqual(count, len(string.letters)) c.close() diff --git a/lib-python/2.7.0/bsddb/test/test_join.py b/lib-python/2.7/bsddb/test/test_join.py rename from lib-python/2.7.0/bsddb/test/test_join.py rename to lib-python/2.7/bsddb/test/test_join.py --- a/lib-python/2.7.0/bsddb/test/test_join.py +++ b/lib-python/2.7/bsddb/test/test_join.py @@ -67,7 +67,7 @@ # Don't do the .set() in an assert, or you can get a bogus failure # when running python -O tmp = sCursor.set('red') - self.assert_(tmp) + self.assertTrue(tmp) # FIXME: jCursor doesn't properly hold a reference to its # cursors, if they are closed before jcursor is used it diff --git a/lib-python/2.7.0/bsddb/test/test_lock.py b/lib-python/2.7/bsddb/test/test_lock.py rename from lib-python/2.7.0/bsddb/test/test_lock.py rename to lib-python/2.7/bsddb/test/test_lock.py diff --git a/lib-python/2.7.0/bsddb/test/test_misc.py b/lib-python/2.7/bsddb/test/test_misc.py rename from lib-python/2.7.0/bsddb/test/test_misc.py rename to lib-python/2.7/bsddb/test/test_misc.py --- a/lib-python/2.7.0/bsddb/test/test_misc.py +++ b/lib-python/2.7/bsddb/test/test_misc.py @@ -32,7 +32,7 @@ def test02_db_home(self): env = db.DBEnv() # check for crash fixed when db_home is used before open() - self.assert_(env.db_home is None) + self.assertTrue(env.db_home is None) env.open(self.homeDir, db.DB_CREATE) if sys.version_info[0] < 3 : self.assertEqual(self.homeDir, env.db_home) @@ -43,7 +43,7 @@ db = hashopen(self.filename) db.close() rp = repr(db) - self.assertEquals(rp, "{}") + self.assertEqual(rp, "{}") def test04_repr_db(self) : db = hashopen(self.filename) @@ -54,7 +54,7 @@ db.close() db = hashopen(self.filename) rp = repr(db) - self.assertEquals(rp, repr(d)) + self.assertEqual(rp, repr(d)) db.close() # http://sourceforge.net/tracker/index.php?func=detail&aid=1708868&group_id=13900&atid=313900 diff --git a/lib-python/2.7.0/bsddb/test/test_pickle.py b/lib-python/2.7/bsddb/test/test_pickle.py rename from lib-python/2.7.0/bsddb/test/test_pickle.py rename to lib-python/2.7/bsddb/test/test_pickle.py diff --git a/lib-python/2.7.0/bsddb/test/test_queue.py b/lib-python/2.7/bsddb/test/test_queue.py rename from lib-python/2.7.0/bsddb/test/test_queue.py rename to lib-python/2.7/bsddb/test/test_queue.py diff --git a/lib-python/2.7.0/bsddb/test/test_recno.py b/lib-python/2.7/bsddb/test/test_recno.py rename from lib-python/2.7.0/bsddb/test/test_recno.py rename to lib-python/2.7/bsddb/test/test_recno.py --- a/lib-python/2.7.0/bsddb/test/test_recno.py +++ b/lib-python/2.7/bsddb/test/test_recno.py @@ -18,7 +18,7 @@ def assertFalse(self, expr, msg=None) : return self.failIf(expr,msg=msg) def assertTrue(self, expr, msg=None) : - return self.assert_(expr, msg=msg) + return self.assertTrue(expr, msg=msg) if (sys.version_info < (2, 7)) or ((sys.version_info >= (3, 0)) and (sys.version_info < (3, 2))) : diff --git a/lib-python/2.7.0/bsddb/test/test_replication.py b/lib-python/2.7/bsddb/test/test_replication.py rename from lib-python/2.7.0/bsddb/test/test_replication.py rename to lib-python/2.7/bsddb/test/test_replication.py --- a/lib-python/2.7.0/bsddb/test/test_replication.py +++ b/lib-python/2.7/bsddb/test/test_replication.py @@ -88,23 +88,23 @@ self.dbenvMaster.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100123) self.dbenvClient.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100321) - self.assertEquals(self.dbenvMaster.rep_get_timeout( + self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100123) - self.assertEquals(self.dbenvClient.rep_get_timeout( + self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100321) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100234) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100432) - self.assertEquals(self.dbenvMaster.rep_get_timeout( + self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100234) - self.assertEquals(self.dbenvClient.rep_get_timeout( + self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100432) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100345) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100543) - self.assertEquals(self.dbenvMaster.rep_get_timeout( + self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100345) - self.assertEquals(self.dbenvClient.rep_get_timeout( + self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100543) self.dbenvMaster.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL) @@ -113,13 +113,13 @@ self.dbenvMaster.repmgr_start(1, db.DB_REP_MASTER); self.dbenvClient.repmgr_start(1, db.DB_REP_CLIENT); - self.assertEquals(self.dbenvMaster.rep_get_nsites(),2) - self.assertEquals(self.dbenvClient.rep_get_nsites(),2) - self.assertEquals(self.dbenvMaster.rep_get_priority(),10) - self.assertEquals(self.dbenvClient.rep_get_priority(),0) - self.assertEquals(self.dbenvMaster.repmgr_get_ack_policy(), + self.assertEqual(self.dbenvMaster.rep_get_nsites(),2) + self.assertEqual(self.dbenvClient.rep_get_nsites(),2) + self.assertEqual(self.dbenvMaster.rep_get_priority(),10) + self.assertEqual(self.dbenvClient.rep_get_priority(),0) + self.assertEqual(self.dbenvMaster.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) - self.assertEquals(self.dbenvClient.repmgr_get_ack_policy(), + self.assertEqual(self.dbenvClient.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) # The timeout is necessary in BDB 4.5, since DB_EVENT_REP_STARTUPDONE @@ -143,16 +143,16 @@ startup_timeout = True d = self.dbenvMaster.repmgr_site_list() - self.assertEquals(len(d), 1) - self.assertEquals(d[0][0], "127.0.0.1") - self.assertEquals(d[0][1], client_port) + self.assertEqual(len(d), 1) + self.assertEqual(d[0][0], "127.0.0.1") + self.assertEqual(d[0][1], client_port) self.assertTrue((d[0][2]==db.DB_REPMGR_CONNECTED) or \ (d[0][2]==db.DB_REPMGR_DISCONNECTED)) d = self.dbenvClient.repmgr_site_list() - self.assertEquals(len(d), 1) - self.assertEquals(d[0][0], "127.0.0.1") - self.assertEquals(d[0][1], master_port) + self.assertEqual(len(d), 1) + self.assertEqual(d[0][0], "127.0.0.1") + self.assertEqual(d[0][1], master_port) self.assertTrue((d[0][2]==db.DB_REPMGR_CONNECTED) or \ (d[0][2]==db.DB_REPMGR_DISCONNECTED)) @@ -207,7 +207,7 @@ self.skipTest("replication test skipped due to random failure, " "see issue 3892") self.assertTrue(time.time()= (4,7) : def test02_test_request(self) : diff --git a/lib-python/2.7.0/bsddb/test/test_sequence.py b/lib-python/2.7/bsddb/test/test_sequence.py rename from lib-python/2.7.0/bsddb/test/test_sequence.py rename to lib-python/2.7/bsddb/test/test_sequence.py --- a/lib-python/2.7.0/bsddb/test/test_sequence.py +++ b/lib-python/2.7/bsddb/test/test_sequence.py @@ -37,53 +37,53 @@ self.seq = db.DBSequence(self.d, flags=0) start_value = 10 * self.int_32_max self.assertEqual(0xA00000000, start_value) - self.assertEquals(None, self.seq.initial_value(start_value)) - self.assertEquals(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) - self.assertEquals(start_value, self.seq.get(5)) - self.assertEquals(start_value + 5, self.seq.get()) + self.assertEqual(None, self.seq.initial_value(start_value)) + self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) + self.assertEqual(start_value, self.seq.get(5)) + self.assertEqual(start_value + 5, self.seq.get()) def test_remove(self): self.seq = db.DBSequence(self.d, flags=0) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(None, self.seq.remove(txn=None, flags=0)) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(None, self.seq.remove(txn=None, flags=0)) del self.seq def test_get_key(self): self.seq = db.DBSequence(self.d, flags=0) key = 'foo' - self.assertEquals(None, self.seq.open(key=key, txn=None, flags=db.DB_CREATE)) - self.assertEquals(key, self.seq.get_key()) + self.assertEqual(None, self.seq.open(key=key, txn=None, flags=db.DB_CREATE)) + self.assertEqual(key, self.seq.get_key()) def test_get_dbp(self): self.seq = db.DBSequence(self.d, flags=0) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(self.d, self.seq.get_dbp()) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(self.d, self.seq.get_dbp()) def test_cachesize(self): self.seq = db.DBSequence(self.d, flags=0) cashe_size = 10 - self.assertEquals(None, self.seq.set_cachesize(cashe_size)) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(cashe_size, self.seq.get_cachesize()) + self.assertEqual(None, self.seq.set_cachesize(cashe_size)) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(cashe_size, self.seq.get_cachesize()) def test_flags(self): self.seq = db.DBSequence(self.d, flags=0) flag = db.DB_SEQ_WRAP; - self.assertEquals(None, self.seq.set_flags(flag)) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(flag, self.seq.get_flags() & flag) + self.assertEqual(None, self.seq.set_flags(flag)) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(flag, self.seq.get_flags() & flag) def test_range(self): self.seq = db.DBSequence(self.d, flags=0) seq_range = (10 * self.int_32_max, 11 * self.int_32_max - 1) - self.assertEquals(None, self.seq.set_range(seq_range)) + self.assertEqual(None, self.seq.set_range(seq_range)) self.seq.initial_value(seq_range[0]) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(seq_range, self.seq.get_range()) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(seq_range, self.seq.get_range()) def test_stat(self): self.seq = db.DBSequence(self.d, flags=0) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) + self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) stat = self.seq.stat() for param in ('nowait', 'min', 'max', 'value', 'current', 'flags', 'cache_size', 'last_value', 'wait'): @@ -106,24 +106,24 @@ def test_64bits(self) : # We don't use both extremes because they are problematic value_plus=(1L<<63)-2 - self.assertEquals(9223372036854775806L,value_plus) + self.assertEqual(9223372036854775806L,value_plus) value_minus=(-1L<<63)+1 # Two complement - self.assertEquals(-9223372036854775807L,value_minus) + self.assertEqual(-9223372036854775807L,value_minus) self.seq = db.DBSequence(self.d, flags=0) - self.assertEquals(None, self.seq.initial_value(value_plus-1)) - self.assertEquals(None, self.seq.open(key='id', txn=None, + self.assertEqual(None, self.seq.initial_value(value_plus-1)) + self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) - self.assertEquals(value_plus-1, self.seq.get(1)) - self.assertEquals(value_plus, self.seq.get(1)) + self.assertEqual(value_plus-1, self.seq.get(1)) + self.assertEqual(value_plus, self.seq.get(1)) self.seq.remove(txn=None, flags=0) self.seq = db.DBSequence(self.d, flags=0) - self.assertEquals(None, self.seq.initial_value(value_minus)) - self.assertEquals(None, self.seq.open(key='id', txn=None, + self.assertEqual(None, self.seq.initial_value(value_minus)) + self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) - self.assertEquals(value_minus, self.seq.get(1)) - self.assertEquals(value_minus+1, self.seq.get(1)) + self.assertEqual(value_minus, self.seq.get(1)) + self.assertEqual(value_minus+1, self.seq.get(1)) def test_multiple_close(self): self.seq = db.DBSequence(self.d) diff --git a/lib-python/2.7.0/bsddb/test/test_thread.py b/lib-python/2.7/bsddb/test/test_thread.py rename from lib-python/2.7.0/bsddb/test/test_thread.py rename to lib-python/2.7/bsddb/test/test_thread.py diff --git a/lib-python/2.7.0/cProfile.py b/lib-python/2.7/cProfile.py rename from lib-python/2.7.0/cProfile.py rename to lib-python/2.7/cProfile.py diff --git a/lib-python/2.7.0/calendar.py b/lib-python/2.7/calendar.py rename from lib-python/2.7.0/calendar.py rename to lib-python/2.7/calendar.py --- a/lib-python/2.7.0/calendar.py +++ b/lib-python/2.7/calendar.py @@ -486,8 +486,8 @@ self.locale = locale def __enter__(self): - self.oldlocale = _locale.setlocale(_locale.LC_TIME, self.locale) - return _locale.getlocale(_locale.LC_TIME)[1] + self.oldlocale = _locale.getlocale(_locale.LC_TIME) + _locale.setlocale(_locale.LC_TIME, self.locale) def __exit__(self, *args): _locale.setlocale(_locale.LC_TIME, self.oldlocale) diff --git a/lib-python/2.7.0/cgi.py b/lib-python/2.7/cgi.py rename from lib-python/2.7.0/cgi.py rename to lib-python/2.7/cgi.py diff --git a/lib-python/2.7.0/cgitb.py b/lib-python/2.7/cgitb.py rename from lib-python/2.7.0/cgitb.py rename to lib-python/2.7/cgitb.py diff --git a/lib-python/2.7.0/chunk.py b/lib-python/2.7/chunk.py rename from lib-python/2.7.0/chunk.py rename to lib-python/2.7/chunk.py diff --git a/lib-python/2.7.0/cmd.py b/lib-python/2.7/cmd.py rename from lib-python/2.7.0/cmd.py rename to lib-python/2.7/cmd.py diff --git a/lib-python/2.7.0/code.py b/lib-python/2.7/code.py rename from lib-python/2.7.0/code.py rename to lib-python/2.7/code.py diff --git a/lib-python/2.7.0/codecs.py b/lib-python/2.7/codecs.py rename from lib-python/2.7.0/codecs.py rename to lib-python/2.7/codecs.py diff --git a/lib-python/2.7.0/codeop.py b/lib-python/2.7/codeop.py rename from lib-python/2.7.0/codeop.py rename to lib-python/2.7/codeop.py diff --git a/lib-python/2.7.0/collections.py b/lib-python/2.7/collections.py rename from lib-python/2.7.0/collections.py rename to lib-python/2.7/collections.py diff --git a/lib-python/2.7.0/colorsys.py b/lib-python/2.7/colorsys.py rename from lib-python/2.7.0/colorsys.py rename to lib-python/2.7/colorsys.py diff --git a/lib-python/2.7.0/commands.py b/lib-python/2.7/commands.py rename from lib-python/2.7.0/commands.py rename to lib-python/2.7/commands.py diff --git a/lib-python/2.7.0/compileall.py b/lib-python/2.7/compileall.py rename from lib-python/2.7.0/compileall.py rename to lib-python/2.7/compileall.py diff --git a/lib-python/2.7.0/compiler/__init__.py b/lib-python/2.7/compiler/__init__.py rename from lib-python/2.7.0/compiler/__init__.py rename to lib-python/2.7/compiler/__init__.py diff --git a/lib-python/2.7.0/compiler/ast.py b/lib-python/2.7/compiler/ast.py rename from lib-python/2.7.0/compiler/ast.py rename to lib-python/2.7/compiler/ast.py diff --git a/lib-python/2.7.0/compiler/consts.py b/lib-python/2.7/compiler/consts.py rename from lib-python/2.7.0/compiler/consts.py rename to lib-python/2.7/compiler/consts.py diff --git a/lib-python/2.7.0/compiler/future.py b/lib-python/2.7/compiler/future.py rename from lib-python/2.7.0/compiler/future.py rename to lib-python/2.7/compiler/future.py diff --git a/lib-python/2.7.0/compiler/misc.py b/lib-python/2.7/compiler/misc.py rename from lib-python/2.7.0/compiler/misc.py rename to lib-python/2.7/compiler/misc.py diff --git a/lib-python/2.7.0/compiler/pyassem.py b/lib-python/2.7/compiler/pyassem.py rename from lib-python/2.7.0/compiler/pyassem.py rename to lib-python/2.7/compiler/pyassem.py diff --git a/lib-python/2.7.0/compiler/pycodegen.py b/lib-python/2.7/compiler/pycodegen.py rename from lib-python/2.7.0/compiler/pycodegen.py rename to lib-python/2.7/compiler/pycodegen.py diff --git a/lib-python/2.7.0/compiler/symbols.py b/lib-python/2.7/compiler/symbols.py rename from lib-python/2.7.0/compiler/symbols.py rename to lib-python/2.7/compiler/symbols.py diff --git a/lib-python/2.7.0/compiler/syntax.py b/lib-python/2.7/compiler/syntax.py rename from lib-python/2.7.0/compiler/syntax.py rename to lib-python/2.7/compiler/syntax.py diff --git a/lib-python/2.7.0/compiler/transformer.py b/lib-python/2.7/compiler/transformer.py rename from lib-python/2.7.0/compiler/transformer.py rename to lib-python/2.7/compiler/transformer.py diff --git a/lib-python/2.7.0/compiler/visitor.py b/lib-python/2.7/compiler/visitor.py rename from lib-python/2.7.0/compiler/visitor.py rename to lib-python/2.7/compiler/visitor.py diff --git a/lib-python/2.7.0/contextlib.py b/lib-python/2.7/contextlib.py rename from lib-python/2.7.0/contextlib.py rename to lib-python/2.7/contextlib.py diff --git a/lib-python/2.7.0/cookielib.py b/lib-python/2.7/cookielib.py rename from lib-python/2.7.0/cookielib.py rename to lib-python/2.7/cookielib.py diff --git a/lib-python/2.7.0/copy.py b/lib-python/2.7/copy.py rename from lib-python/2.7.0/copy.py rename to lib-python/2.7/copy.py diff --git a/lib-python/2.7.0/copy_reg.py b/lib-python/2.7/copy_reg.py rename from lib-python/2.7.0/copy_reg.py rename to lib-python/2.7/copy_reg.py diff --git a/lib-python/2.7.0/csv.py b/lib-python/2.7/csv.py rename from lib-python/2.7.0/csv.py rename to lib-python/2.7/csv.py diff --git a/lib-python/2.7.0/ctypes/__init__.py b/lib-python/2.7/ctypes/__init__.py rename from lib-python/2.7.0/ctypes/__init__.py rename to lib-python/2.7/ctypes/__init__.py diff --git a/lib-python/2.7.0/ctypes/_endian.py b/lib-python/2.7/ctypes/_endian.py rename from lib-python/2.7.0/ctypes/_endian.py rename to lib-python/2.7/ctypes/_endian.py diff --git a/lib-python/2.7.0/ctypes/macholib/README.ctypes b/lib-python/2.7/ctypes/macholib/README.ctypes rename from lib-python/2.7.0/ctypes/macholib/README.ctypes rename to lib-python/2.7/ctypes/macholib/README.ctypes diff --git a/lib-python/2.7.0/ctypes/macholib/__init__.py b/lib-python/2.7/ctypes/macholib/__init__.py rename from lib-python/2.7.0/ctypes/macholib/__init__.py rename to lib-python/2.7/ctypes/macholib/__init__.py diff --git a/lib-python/2.7.0/ctypes/macholib/dyld.py b/lib-python/2.7/ctypes/macholib/dyld.py rename from lib-python/2.7.0/ctypes/macholib/dyld.py rename to lib-python/2.7/ctypes/macholib/dyld.py diff --git a/lib-python/2.7.0/ctypes/macholib/dylib.py b/lib-python/2.7/ctypes/macholib/dylib.py rename from lib-python/2.7.0/ctypes/macholib/dylib.py rename to lib-python/2.7/ctypes/macholib/dylib.py diff --git a/lib-python/2.7.0/ctypes/macholib/fetch_macholib b/lib-python/2.7/ctypes/macholib/fetch_macholib rename from lib-python/2.7.0/ctypes/macholib/fetch_macholib rename to lib-python/2.7/ctypes/macholib/fetch_macholib diff --git a/lib-python/2.7.0/ctypes/macholib/fetch_macholib.bat b/lib-python/2.7/ctypes/macholib/fetch_macholib.bat rename from lib-python/2.7.0/ctypes/macholib/fetch_macholib.bat rename to lib-python/2.7/ctypes/macholib/fetch_macholib.bat diff --git a/lib-python/2.7.0/ctypes/macholib/framework.py b/lib-python/2.7/ctypes/macholib/framework.py rename from lib-python/2.7.0/ctypes/macholib/framework.py rename to lib-python/2.7/ctypes/macholib/framework.py diff --git a/lib-python/2.7.0/ctypes/test/__init__.py b/lib-python/2.7/ctypes/test/__init__.py rename from lib-python/2.7.0/ctypes/test/__init__.py rename to lib-python/2.7/ctypes/test/__init__.py diff --git a/lib-python/2.7.0/ctypes/test/runtests.py b/lib-python/2.7/ctypes/test/runtests.py rename from lib-python/2.7.0/ctypes/test/runtests.py rename to lib-python/2.7/ctypes/test/runtests.py diff --git a/lib-python/2.7.0/ctypes/test/test_anon.py b/lib-python/2.7/ctypes/test/test_anon.py rename from lib-python/2.7.0/ctypes/test/test_anon.py rename to lib-python/2.7/ctypes/test/test_anon.py diff --git a/lib-python/2.7.0/ctypes/test/test_array_in_pointer.py b/lib-python/2.7/ctypes/test/test_array_in_pointer.py rename from lib-python/2.7.0/ctypes/test/test_array_in_pointer.py rename to lib-python/2.7/ctypes/test/test_array_in_pointer.py diff --git a/lib-python/2.7.0/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py rename from lib-python/2.7.0/ctypes/test/test_arrays.py rename to lib-python/2.7/ctypes/test/test_arrays.py diff --git a/lib-python/2.7.0/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py rename from lib-python/2.7.0/ctypes/test/test_as_parameter.py rename to lib-python/2.7/ctypes/test/test_as_parameter.py diff --git a/lib-python/2.7.0/ctypes/test/test_bitfields.py b/lib-python/2.7/ctypes/test/test_bitfields.py rename from lib-python/2.7.0/ctypes/test/test_bitfields.py rename to lib-python/2.7/ctypes/test/test_bitfields.py diff --git a/lib-python/2.7.0/ctypes/test/test_buffers.py b/lib-python/2.7/ctypes/test/test_buffers.py rename from lib-python/2.7.0/ctypes/test/test_buffers.py rename to lib-python/2.7/ctypes/test/test_buffers.py diff --git a/lib-python/2.7.0/ctypes/test/test_byteswap.py b/lib-python/2.7/ctypes/test/test_byteswap.py rename from lib-python/2.7.0/ctypes/test/test_byteswap.py rename to lib-python/2.7/ctypes/test/test_byteswap.py diff --git a/lib-python/2.7.0/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py rename from lib-python/2.7.0/ctypes/test/test_callbacks.py rename to lib-python/2.7/ctypes/test/test_callbacks.py diff --git a/lib-python/2.7.0/ctypes/test/test_cast.py b/lib-python/2.7/ctypes/test/test_cast.py rename from lib-python/2.7.0/ctypes/test/test_cast.py rename to lib-python/2.7/ctypes/test/test_cast.py diff --git a/lib-python/2.7.0/ctypes/test/test_cfuncs.py b/lib-python/2.7/ctypes/test/test_cfuncs.py rename from lib-python/2.7.0/ctypes/test/test_cfuncs.py rename to lib-python/2.7/ctypes/test/test_cfuncs.py diff --git a/lib-python/2.7.0/ctypes/test/test_checkretval.py b/lib-python/2.7/ctypes/test/test_checkretval.py rename from lib-python/2.7.0/ctypes/test/test_checkretval.py rename to lib-python/2.7/ctypes/test/test_checkretval.py diff --git a/lib-python/2.7.0/ctypes/test/test_delattr.py b/lib-python/2.7/ctypes/test/test_delattr.py rename from lib-python/2.7.0/ctypes/test/test_delattr.py rename to lib-python/2.7/ctypes/test/test_delattr.py diff --git a/lib-python/2.7.0/ctypes/test/test_errcheck.py b/lib-python/2.7/ctypes/test/test_errcheck.py rename from lib-python/2.7.0/ctypes/test/test_errcheck.py rename to lib-python/2.7/ctypes/test/test_errcheck.py diff --git a/lib-python/2.7.0/ctypes/test/test_errno.py b/lib-python/2.7/ctypes/test/test_errno.py rename from lib-python/2.7.0/ctypes/test/test_errno.py rename to lib-python/2.7/ctypes/test/test_errno.py diff --git a/lib-python/2.7.0/ctypes/test/test_find.py b/lib-python/2.7/ctypes/test/test_find.py rename from lib-python/2.7.0/ctypes/test/test_find.py rename to lib-python/2.7/ctypes/test/test_find.py diff --git a/lib-python/2.7.0/ctypes/test/test_frombuffer.py b/lib-python/2.7/ctypes/test/test_frombuffer.py rename from lib-python/2.7.0/ctypes/test/test_frombuffer.py rename to lib-python/2.7/ctypes/test/test_frombuffer.py diff --git a/lib-python/2.7.0/ctypes/test/test_funcptr.py b/lib-python/2.7/ctypes/test/test_funcptr.py rename from lib-python/2.7.0/ctypes/test/test_funcptr.py rename to lib-python/2.7/ctypes/test/test_funcptr.py diff --git a/lib-python/2.7.0/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py rename from lib-python/2.7.0/ctypes/test/test_functions.py rename to lib-python/2.7/ctypes/test/test_functions.py diff --git a/lib-python/2.7.0/ctypes/test/test_incomplete.py b/lib-python/2.7/ctypes/test/test_incomplete.py rename from lib-python/2.7.0/ctypes/test/test_incomplete.py rename to lib-python/2.7/ctypes/test/test_incomplete.py diff --git a/lib-python/2.7.0/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py rename from lib-python/2.7.0/ctypes/test/test_init.py rename to lib-python/2.7/ctypes/test/test_init.py diff --git a/lib-python/2.7.0/ctypes/test/test_integers.py b/lib-python/2.7/ctypes/test/test_integers.py rename from lib-python/2.7.0/ctypes/test/test_integers.py rename to lib-python/2.7/ctypes/test/test_integers.py diff --git a/lib-python/2.7.0/ctypes/test/test_internals.py b/lib-python/2.7/ctypes/test/test_internals.py rename from lib-python/2.7.0/ctypes/test/test_internals.py rename to lib-python/2.7/ctypes/test/test_internals.py diff --git a/lib-python/2.7.0/ctypes/test/test_keeprefs.py b/lib-python/2.7/ctypes/test/test_keeprefs.py rename from lib-python/2.7.0/ctypes/test/test_keeprefs.py rename to lib-python/2.7/ctypes/test/test_keeprefs.py --- a/lib-python/2.7.0/ctypes/test/test_keeprefs.py +++ b/lib-python/2.7/ctypes/test/test_keeprefs.py @@ -4,19 +4,19 @@ class SimpleTestCase(unittest.TestCase): def test_cint(self): x = c_int() - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x.value = 42 - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x = c_int(99) - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) def test_ccharp(self): x = c_char_p() - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x.value = "abc" - self.assertEquals(x._objects, "abc") + self.assertEqual(x._objects, "abc") x = c_char_p("spam") - self.assertEquals(x._objects, "spam") + self.assertEqual(x._objects, "spam") class StructureTestCase(unittest.TestCase): def test_cint_struct(self): @@ -25,21 +25,21 @@ ("b", c_int)] x = X() - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x.a = 42 x.b = 99 - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) def test_ccharp_struct(self): class X(Structure): _fields_ = [("a", c_char_p), ("b", c_char_p)] x = X() - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x.a = "spam" x.b = "foo" - self.assertEquals(x._objects, {"0": "spam", "1": "foo"}) + self.assertEqual(x._objects, {"0": "spam", "1": "foo"}) def test_struct_struct(self): class POINT(Structure): @@ -52,28 +52,28 @@ r.ul.y = 1 r.lr.x = 2 r.lr.y = 3 - self.assertEquals(r._objects, None) + self.assertEqual(r._objects, None) r = RECT() pt = POINT(1, 2) r.ul = pt - self.assertEquals(r._objects, {'0': {}}) + self.assertEqual(r._objects, {'0': {}}) r.ul.x = 22 r.ul.y = 44 - self.assertEquals(r._objects, {'0': {}}) + self.assertEqual(r._objects, {'0': {}}) r.lr = POINT() - self.assertEquals(r._objects, {'0': {}, '1': {}}) + self.assertEqual(r._objects, {'0': {}, '1': {}}) class ArrayTestCase(unittest.TestCase): def test_cint_array(self): INTARR = c_int * 3 ia = INTARR() - self.assertEquals(ia._objects, None) + self.assertEqual(ia._objects, None) ia[0] = 1 ia[1] = 2 ia[2] = 3 - self.assertEquals(ia._objects, None) + self.assertEqual(ia._objects, None) class X(Structure): _fields_ = [("x", c_int), @@ -83,9 +83,9 @@ x.x = 1000 x.a[0] = 42 x.a[1] = 96 - self.assertEquals(x._objects, None) + self.assertEqual(x._objects, None) x.a = ia - self.assertEquals(x._objects, {'1': {}}) + self.assertEqual(x._objects, {'1': {}}) class PointerTestCase(unittest.TestCase): def test_p_cint(self): diff --git a/lib-python/2.7.0/ctypes/test/test_libc.py b/lib-python/2.7/ctypes/test/test_libc.py rename from lib-python/2.7.0/ctypes/test/test_libc.py rename to lib-python/2.7/ctypes/test/test_libc.py diff --git a/lib-python/2.7.0/ctypes/test/test_loading.py b/lib-python/2.7/ctypes/test/test_loading.py rename from lib-python/2.7.0/ctypes/test/test_loading.py rename to lib-python/2.7/ctypes/test/test_loading.py diff --git a/lib-python/2.7.0/ctypes/test/test_macholib.py b/lib-python/2.7/ctypes/test/test_macholib.py rename from lib-python/2.7.0/ctypes/test/test_macholib.py rename to lib-python/2.7/ctypes/test/test_macholib.py diff --git a/lib-python/2.7.0/ctypes/test/test_memfunctions.py b/lib-python/2.7/ctypes/test/test_memfunctions.py rename from lib-python/2.7.0/ctypes/test/test_memfunctions.py rename to lib-python/2.7/ctypes/test/test_memfunctions.py diff --git a/lib-python/2.7.0/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py rename from lib-python/2.7.0/ctypes/test/test_numbers.py rename to lib-python/2.7/ctypes/test/test_numbers.py diff --git a/lib-python/2.7.0/ctypes/test/test_objects.py b/lib-python/2.7/ctypes/test/test_objects.py rename from lib-python/2.7.0/ctypes/test/test_objects.py rename to lib-python/2.7/ctypes/test/test_objects.py diff --git a/lib-python/2.7.0/ctypes/test/test_parameters.py b/lib-python/2.7/ctypes/test/test_parameters.py rename from lib-python/2.7.0/ctypes/test/test_parameters.py rename to lib-python/2.7/ctypes/test/test_parameters.py diff --git a/lib-python/2.7.0/ctypes/test/test_pep3118.py b/lib-python/2.7/ctypes/test/test_pep3118.py rename from lib-python/2.7.0/ctypes/test/test_pep3118.py rename to lib-python/2.7/ctypes/test/test_pep3118.py diff --git a/lib-python/2.7.0/ctypes/test/test_pickling.py b/lib-python/2.7/ctypes/test/test_pickling.py rename from lib-python/2.7.0/ctypes/test/test_pickling.py rename to lib-python/2.7/ctypes/test/test_pickling.py diff --git a/lib-python/2.7.0/ctypes/test/test_pointers.py b/lib-python/2.7/ctypes/test/test_pointers.py rename from lib-python/2.7.0/ctypes/test/test_pointers.py rename to lib-python/2.7/ctypes/test/test_pointers.py diff --git a/lib-python/2.7.0/ctypes/test/test_prototypes.py b/lib-python/2.7/ctypes/test/test_prototypes.py rename from lib-python/2.7.0/ctypes/test/test_prototypes.py rename to lib-python/2.7/ctypes/test/test_prototypes.py diff --git a/lib-python/2.7.0/ctypes/test/test_python_api.py b/lib-python/2.7/ctypes/test/test_python_api.py rename from lib-python/2.7.0/ctypes/test/test_python_api.py rename to lib-python/2.7/ctypes/test/test_python_api.py diff --git a/lib-python/2.7.0/ctypes/test/test_random_things.py b/lib-python/2.7/ctypes/test/test_random_things.py rename from lib-python/2.7.0/ctypes/test/test_random_things.py rename to lib-python/2.7/ctypes/test/test_random_things.py diff --git a/lib-python/2.7.0/ctypes/test/test_refcounts.py b/lib-python/2.7/ctypes/test/test_refcounts.py rename from lib-python/2.7.0/ctypes/test/test_refcounts.py rename to lib-python/2.7/ctypes/test/test_refcounts.py diff --git a/lib-python/2.7.0/ctypes/test/test_repr.py b/lib-python/2.7/ctypes/test/test_repr.py rename from lib-python/2.7.0/ctypes/test/test_repr.py rename to lib-python/2.7/ctypes/test/test_repr.py diff --git a/lib-python/2.7.0/ctypes/test/test_returnfuncptrs.py b/lib-python/2.7/ctypes/test/test_returnfuncptrs.py rename from lib-python/2.7.0/ctypes/test/test_returnfuncptrs.py rename to lib-python/2.7/ctypes/test/test_returnfuncptrs.py diff --git a/lib-python/2.7.0/ctypes/test/test_simplesubclasses.py b/lib-python/2.7/ctypes/test/test_simplesubclasses.py rename from lib-python/2.7.0/ctypes/test/test_simplesubclasses.py rename to lib-python/2.7/ctypes/test/test_simplesubclasses.py diff --git a/lib-python/2.7.0/ctypes/test/test_sizes.py b/lib-python/2.7/ctypes/test/test_sizes.py rename from lib-python/2.7.0/ctypes/test/test_sizes.py rename to lib-python/2.7/ctypes/test/test_sizes.py diff --git a/lib-python/2.7.0/ctypes/test/test_slicing.py b/lib-python/2.7/ctypes/test/test_slicing.py rename from lib-python/2.7.0/ctypes/test/test_slicing.py rename to lib-python/2.7/ctypes/test/test_slicing.py diff --git a/lib-python/2.7.0/ctypes/test/test_stringptr.py b/lib-python/2.7/ctypes/test/test_stringptr.py rename from lib-python/2.7.0/ctypes/test/test_stringptr.py rename to lib-python/2.7/ctypes/test/test_stringptr.py diff --git a/lib-python/2.7.0/ctypes/test/test_strings.py b/lib-python/2.7/ctypes/test/test_strings.py rename from lib-python/2.7.0/ctypes/test/test_strings.py rename to lib-python/2.7/ctypes/test/test_strings.py diff --git a/lib-python/2.7.0/ctypes/test/test_struct_fields.py b/lib-python/2.7/ctypes/test/test_struct_fields.py rename from lib-python/2.7.0/ctypes/test/test_struct_fields.py rename to lib-python/2.7/ctypes/test/test_struct_fields.py diff --git a/lib-python/2.7.0/ctypes/test/test_structures.py b/lib-python/2.7/ctypes/test/test_structures.py rename from lib-python/2.7.0/ctypes/test/test_structures.py rename to lib-python/2.7/ctypes/test/test_structures.py diff --git a/lib-python/2.7.0/ctypes/test/test_unaligned_structures.py b/lib-python/2.7/ctypes/test/test_unaligned_structures.py rename from lib-python/2.7.0/ctypes/test/test_unaligned_structures.py rename to lib-python/2.7/ctypes/test/test_unaligned_structures.py diff --git a/lib-python/2.7.0/ctypes/test/test_unicode.py b/lib-python/2.7/ctypes/test/test_unicode.py rename from lib-python/2.7.0/ctypes/test/test_unicode.py rename to lib-python/2.7/ctypes/test/test_unicode.py diff --git a/lib-python/2.7.0/ctypes/test/test_values.py b/lib-python/2.7/ctypes/test/test_values.py rename from lib-python/2.7.0/ctypes/test/test_values.py rename to lib-python/2.7/ctypes/test/test_values.py diff --git a/lib-python/2.7.0/ctypes/test/test_varsize_struct.py b/lib-python/2.7/ctypes/test/test_varsize_struct.py rename from lib-python/2.7.0/ctypes/test/test_varsize_struct.py rename to lib-python/2.7/ctypes/test/test_varsize_struct.py diff --git a/lib-python/2.7.0/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py rename from lib-python/2.7.0/ctypes/test/test_win32.py rename to lib-python/2.7/ctypes/test/test_win32.py diff --git a/lib-python/2.7.0/ctypes/util.py b/lib-python/2.7/ctypes/util.py rename from lib-python/2.7.0/ctypes/util.py rename to lib-python/2.7/ctypes/util.py diff --git a/lib-python/2.7.0/ctypes/wintypes.py b/lib-python/2.7/ctypes/wintypes.py rename from lib-python/2.7.0/ctypes/wintypes.py rename to lib-python/2.7/ctypes/wintypes.py diff --git a/lib-python/2.7.0/curses/__init__.py b/lib-python/2.7/curses/__init__.py rename from lib-python/2.7.0/curses/__init__.py rename to lib-python/2.7/curses/__init__.py --- a/lib-python/2.7.0/curses/__init__.py +++ b/lib-python/2.7/curses/__init__.py @@ -10,7 +10,7 @@ """ -__revision__ = "$Id: __init__.py 61064 2008-02-25 16:29:58Z andrew.kuchling $" +__revision__ = "$Id$" from _curses import * from curses.wrapper import wrapper diff --git a/lib-python/2.7.0/curses/ascii.py b/lib-python/2.7/curses/ascii.py rename from lib-python/2.7.0/curses/ascii.py rename to lib-python/2.7/curses/ascii.py diff --git a/lib-python/2.7.0/curses/has_key.py b/lib-python/2.7/curses/has_key.py rename from lib-python/2.7.0/curses/has_key.py rename to lib-python/2.7/curses/has_key.py diff --git a/lib-python/2.7.0/curses/panel.py b/lib-python/2.7/curses/panel.py rename from lib-python/2.7.0/curses/panel.py rename to lib-python/2.7/curses/panel.py --- a/lib-python/2.7.0/curses/panel.py +++ b/lib-python/2.7/curses/panel.py @@ -3,6 +3,6 @@ Module for using panels with curses. """ -__revision__ = "$Id: panel.py 36560 2004-07-18 06:16:08Z tim_one $" +__revision__ = "$Id$" from _curses_panel import * diff --git a/lib-python/2.7.0/curses/textpad.py b/lib-python/2.7/curses/textpad.py rename from lib-python/2.7.0/curses/textpad.py rename to lib-python/2.7/curses/textpad.py diff --git a/lib-python/2.7.0/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py rename from lib-python/2.7.0/curses/wrapper.py rename to lib-python/2.7/curses/wrapper.py diff --git a/lib-python/2.7.0/dbhash.py b/lib-python/2.7/dbhash.py rename from lib-python/2.7.0/dbhash.py rename to lib-python/2.7/dbhash.py diff --git a/lib-python/2.7.0/decimal.py b/lib-python/2.7/decimal.py rename from lib-python/2.7.0/decimal.py rename to lib-python/2.7/decimal.py diff --git a/lib-python/2.7.0/difflib.py b/lib-python/2.7/difflib.py rename from lib-python/2.7.0/difflib.py rename to lib-python/2.7/difflib.py --- a/lib-python/2.7.0/difflib.py +++ b/lib-python/2.7/difflib.py @@ -151,7 +151,7 @@ Return an upper bound on ratio() very quickly. """ - def __init__(self, isjunk=None, a='', b=''): + def __init__(self, isjunk=None, a='', b='', autojunk=True): """Construct a SequenceMatcher. Optional arg isjunk is None (the default), or a one-argument @@ -169,6 +169,10 @@ Optional arg b is the second of two sequences to be compared. By default, an empty string. The elements of b must be hashable. See also .set_seqs() and .set_seq2(). + + Optional arg autojunk should be set to False to disable the + "automatic junk heuristic" that treats popular elements as junk + (see module documentation for more information). """ # Members: @@ -207,11 +211,13 @@ # DOES NOT WORK for x in a! # isbpopular # for x in b, isbpopular(x) is true iff b is reasonably long - # (at least 200 elements) and x accounts for more than 1% of - # its elements. DOES NOT WORK for x in a! + # (at least 200 elements) and x accounts for more than 1 + 1% of + # its elements (when autojunk is enabled). + # DOES NOT WORK for x in a! self.isjunk = isjunk self.a = self.b = None + self.autojunk = autojunk self.set_seqs(a, b) def set_seqs(self, a, b): @@ -288,7 +294,7 @@ # from starting any matching block at a junk element ... # also creates the fast isbjunk function ... # b2j also does not contain entries for "popular" elements, meaning - # elements that account for more than 1% of the total elements, and + # elements that account for more than 1 + 1% of the total elements, and # when the sequence is reasonably large (>= 200 elements); this can # be viewed as an adaptive notion of semi-junk, and yields an enormous # speedup when, e.g., comparing program files with hundreds of @@ -309,44 +315,37 @@ # out the junk later is much cheaper than building b2j "right" # from the start. b = self.b + self.b2j = b2j = {} + + for i, elt in enumerate(b): + indices = b2j.setdefault(elt, []) + indices.append(i) + + # Purge junk elements + junk = set() + isjunk = self.isjunk + if isjunk: + for elt in list(b2j.keys()): # using list() since b2j is modified + if isjunk(elt): + junk.add(elt) + del b2j[elt] + + # Purge popular elements that are not junk + popular = set() n = len(b) - self.b2j = b2j = {} - populardict = {} - for i, elt in enumerate(b): - if elt in b2j: - indices = b2j[elt] - if n >= 200 and len(indices) * 100 > n: - populardict[elt] = 1 - del indices[:] - else: - indices.append(i) - else: - b2j[elt] = [i] + if self.autojunk and n >= 200: + ntest = n // 100 + 1 + for elt, idxs in list(b2j.items()): + if len(idxs) > ntest: + popular.add(elt) + del b2j[elt] - # Purge leftover indices for popular elements. - for elt in populardict: - del b2j[elt] - - # Now b2j.keys() contains elements uniquely, and especially when - # the sequence is a string, that's usually a good deal smaller - # than len(string). The difference is the number of isjunk calls - # saved. - isjunk = self.isjunk - junkdict = {} - if isjunk: - for d in populardict, b2j: - for elt in d.keys(): - if isjunk(elt): - junkdict[elt] = 1 - del d[elt] - - # Now for x in b, isjunk(x) == x in junkdict, but the - # latter is much faster. Note too that while there may be a - # lot of junk in the sequence, the number of *unique* junk - # elements is probably small. So the memory burden of keeping - # this dict alive is likely trivial compared to the size of b2j. - self.isbjunk = junkdict.__contains__ - self.isbpopular = populardict.__contains__ + # Now for x in b, isjunk(x) == x in junk, but the latter is much faster. + # Sicne the number of *unique* junk elements is probably small, the + # memory burden of keeping this set alive is likely trivial compared to + # the size of b2j. + self.isbjunk = junk.__contains__ + self.isbpopular = popular.__contains__ def find_longest_match(self, alo, ahi, blo, bhi): """Find longest matching block in a[alo:ahi] and b[blo:bhi]. diff --git a/lib-python/2.7.0/dircache.py b/lib-python/2.7/dircache.py rename from lib-python/2.7.0/dircache.py rename to lib-python/2.7/dircache.py diff --git a/lib-python/2.7.0/dis.py b/lib-python/2.7/dis.py rename from lib-python/2.7.0/dis.py rename to lib-python/2.7/dis.py diff --git a/lib-python/2.7.0/distutils/README b/lib-python/2.7/distutils/README rename from lib-python/2.7.0/distutils/README rename to lib-python/2.7/distutils/README --- a/lib-python/2.7.0/distutils/README +++ b/lib-python/2.7/distutils/README @@ -10,4 +10,4 @@ WARNING : Distutils must remain compatible with 2.3 -$Id: README 70017 2009-02-27 12:53:34Z tarek.ziade $ +$Id$ diff --git a/lib-python/2.7.0/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py rename from lib-python/2.7.0/distutils/__init__.py rename to lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7.0/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -8,12 +8,12 @@ setup (...) """ -__revision__ = "$Id: __init__.py 82506 2010-07-03 14:51:25Z benjamin.peterson $" +__revision__ = "$Id$" # Distutils version # # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1a0" +__version__ = "2.7.1" #--end constants-- diff --git a/lib-python/2.7.0/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py rename from lib-python/2.7.0/distutils/archive_util.py rename to lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7.0/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -3,7 +3,7 @@ Utility functions for creating archive files (tarballs, zip files, that sort of thing).""" -__revision__ = "$Id: archive_util.py 75659 2009-10-24 13:29:44Z tarek.ziade $" +__revision__ = "$Id$" import os from warnings import warn diff --git a/lib-python/2.7.0/distutils/bcppcompiler.py b/lib-python/2.7/distutils/bcppcompiler.py rename from lib-python/2.7.0/distutils/bcppcompiler.py rename to lib-python/2.7/distutils/bcppcompiler.py --- a/lib-python/2.7.0/distutils/bcppcompiler.py +++ b/lib-python/2.7/distutils/bcppcompiler.py @@ -11,7 +11,7 @@ # someone should sit down and factor out the common code as # WindowsCCompiler! --GPW -__revision__ = "$Id: bcppcompiler.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import os diff --git a/lib-python/2.7.0/distutils/ccompiler.py b/lib-python/2.7/distutils/ccompiler.py rename from lib-python/2.7.0/distutils/ccompiler.py rename to lib-python/2.7/distutils/ccompiler.py --- a/lib-python/2.7.0/distutils/ccompiler.py +++ b/lib-python/2.7/distutils/ccompiler.py @@ -3,7 +3,7 @@ Contains CCompiler, an abstract base class that defines the interface for the Distutils compiler abstraction model.""" -__revision__ = "$Id: ccompiler.py 77704 2010-01-23 09:23:15Z tarek.ziade $" +__revision__ = "$Id$" import sys import os @@ -794,14 +794,16 @@ library_dirs = [] fd, fname = tempfile.mkstemp(".c", funcname, text=True) f = os.fdopen(fd, "w") - for incl in includes: - f.write("""#include "%s"\n""" % incl) - f.write("""\ + try: + for incl in includes: + f.write("""#include "%s"\n""" % incl) + f.write("""\ main (int argc, char **argv) { %s(); } """ % funcname) - f.close() + finally: + f.close() try: objects = self.compile([fname], include_dirs=include_dirs) except CompileError: diff --git a/lib-python/2.7.0/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py rename from lib-python/2.7.0/distutils/cmd.py rename to lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7.0/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -4,7 +4,7 @@ in the distutils.command package. """ -__revision__ = "$Id: cmd.py 75192 2009-10-02 23:49:48Z tarek.ziade $" +__revision__ = "$Id$" import sys, os, re from distutils.errors import DistutilsOptionError diff --git a/lib-python/2.7.0/distutils/command/__init__.py b/lib-python/2.7/distutils/command/__init__.py rename from lib-python/2.7.0/distutils/command/__init__.py rename to lib-python/2.7/distutils/command/__init__.py --- a/lib-python/2.7.0/distutils/command/__init__.py +++ b/lib-python/2.7/distutils/command/__init__.py @@ -3,7 +3,7 @@ Package containing implementation of all the standard Distutils commands.""" -__revision__ = "$Id: __init__.py 71473 2009-04-11 14:55:07Z tarek.ziade $" +__revision__ = "$Id$" __all__ = ['build', 'build_py', diff --git a/lib-python/2.7.0/distutils/command/bdist.py b/lib-python/2.7/distutils/command/bdist.py rename from lib-python/2.7.0/distutils/command/bdist.py rename to lib-python/2.7/distutils/command/bdist.py --- a/lib-python/2.7.0/distutils/command/bdist.py +++ b/lib-python/2.7/distutils/command/bdist.py @@ -3,7 +3,7 @@ Implements the Distutils 'bdist' command (create a built [binary] distribution).""" -__revision__ = "$Id: bdist.py 77761 2010-01-26 22:46:15Z tarek.ziade $" +__revision__ = "$Id$" import os diff --git a/lib-python/2.7.0/distutils/command/bdist_dumb.py b/lib-python/2.7/distutils/command/bdist_dumb.py rename from lib-python/2.7.0/distutils/command/bdist_dumb.py rename to lib-python/2.7/distutils/command/bdist_dumb.py --- a/lib-python/2.7.0/distutils/command/bdist_dumb.py +++ b/lib-python/2.7/distutils/command/bdist_dumb.py @@ -4,7 +4,7 @@ distribution -- i.e., just an archive to be unpacked under $prefix or $exec_prefix).""" -__revision__ = "$Id: bdist_dumb.py 77761 2010-01-26 22:46:15Z tarek.ziade $" +__revision__ = "$Id$" import os diff --git a/lib-python/2.7.0/distutils/command/bdist_msi.py b/lib-python/2.7/distutils/command/bdist_msi.py rename from lib-python/2.7.0/distutils/command/bdist_msi.py rename to lib-python/2.7/distutils/command/bdist_msi.py diff --git a/lib-python/2.7.0/distutils/command/bdist_rpm.py b/lib-python/2.7/distutils/command/bdist_rpm.py rename from lib-python/2.7.0/distutils/command/bdist_rpm.py rename to lib-python/2.7/distutils/command/bdist_rpm.py --- a/lib-python/2.7.0/distutils/command/bdist_rpm.py +++ b/lib-python/2.7/distutils/command/bdist_rpm.py @@ -3,7 +3,7 @@ Implements the Distutils 'bdist_rpm' command (create RPM source and binary distributions).""" -__revision__ = "$Id: bdist_rpm.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import sys import os @@ -355,22 +355,26 @@ src_rpm, non_src_rpm, spec_path) out = os.popen(q_cmd) - binary_rpms = [] - source_rpm = None - while 1: - line = out.readline() - if not line: - break - l = string.split(string.strip(line)) - assert(len(l) == 2) - binary_rpms.append(l[1]) - # The source rpm is named after the first entry in the spec file - if source_rpm is None: - source_rpm = l[0] + try: + binary_rpms = [] + source_rpm = None + while 1: + line = out.readline() + if not line: + break + l = string.split(string.strip(line)) + assert(len(l) == 2) + binary_rpms.append(l[1]) + # The source rpm is named after the first entry in the spec file + if source_rpm is None: + source_rpm = l[0] - status = out.close() - if status: - raise DistutilsExecError("Failed to execute: %s" % repr(q_cmd)) + status = out.close() + if status: + raise DistutilsExecError("Failed to execute: %s" % repr(q_cmd)) + + finally: + out.close() self.spawn(rpm_cmd) diff --git a/lib-python/2.7.0/distutils/command/bdist_wininst.py b/lib-python/2.7/distutils/command/bdist_wininst.py rename from lib-python/2.7.0/distutils/command/bdist_wininst.py rename to lib-python/2.7/distutils/command/bdist_wininst.py --- a/lib-python/2.7.0/distutils/command/bdist_wininst.py +++ b/lib-python/2.7/distutils/command/bdist_wininst.py @@ -3,7 +3,7 @@ Implements the Distutils 'bdist_wininst' command: create a windows installer exe-program.""" -__revision__ = "$Id: bdist_wininst.py 83593 2010-08-02 21:44:25Z georg.brandl $" +__revision__ = "$Id$" import sys import os @@ -356,5 +356,9 @@ sfix = '' filename = os.path.join(directory, "wininst-%.1f%s.exe" % (bv, sfix)) - return open(filename, "rb").read() + f = open(filename, "rb") + try: + return f.read() + finally: + f.close() # class bdist_wininst diff --git a/lib-python/2.7.0/distutils/command/build.py b/lib-python/2.7/distutils/command/build.py rename from lib-python/2.7.0/distutils/command/build.py rename to lib-python/2.7/distutils/command/build.py --- a/lib-python/2.7.0/distutils/command/build.py +++ b/lib-python/2.7/distutils/command/build.py @@ -2,7 +2,7 @@ Implements the Distutils 'build' command.""" -__revision__ = "$Id: build.py 77761 2010-01-26 22:46:15Z tarek.ziade $" +__revision__ = "$Id$" import sys, os diff --git a/lib-python/2.7.0/distutils/command/build_clib.py b/lib-python/2.7/distutils/command/build_clib.py rename from lib-python/2.7.0/distutils/command/build_clib.py rename to lib-python/2.7/distutils/command/build_clib.py --- a/lib-python/2.7.0/distutils/command/build_clib.py +++ b/lib-python/2.7/distutils/command/build_clib.py @@ -4,7 +4,7 @@ that is included in the module distribution and needed by an extension module.""" -__revision__ = "$Id: build_clib.py 84610 2010-09-07 22:18:34Z eric.araujo $" +__revision__ = "$Id$" # XXX this module has *lots* of code ripped-off quite transparently from diff --git a/lib-python/2.7.0/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py rename from lib-python/2.7.0/distutils/command/build_ext.py rename to lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7.0/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -6,7 +6,7 @@ # This module should be kept compatible with Python 2.1. -__revision__ = "$Id: build_ext.py 84683 2010-09-10 20:03:17Z antoine.pitrou $" +__revision__ = "$Id$" import sys, os, string, re from types import * diff --git a/lib-python/2.7.0/distutils/command/build_py.py b/lib-python/2.7/distutils/command/build_py.py rename from lib-python/2.7.0/distutils/command/build_py.py rename to lib-python/2.7/distutils/command/build_py.py --- a/lib-python/2.7.0/distutils/command/build_py.py +++ b/lib-python/2.7/distutils/command/build_py.py @@ -2,7 +2,7 @@ Implements the Distutils 'build_py' command.""" -__revision__ = "$Id: build_py.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import os import sys diff --git a/lib-python/2.7.0/distutils/command/build_scripts.py b/lib-python/2.7/distutils/command/build_scripts.py rename from lib-python/2.7.0/distutils/command/build_scripts.py rename to lib-python/2.7/distutils/command/build_scripts.py --- a/lib-python/2.7.0/distutils/command/build_scripts.py +++ b/lib-python/2.7/distutils/command/build_scripts.py @@ -2,7 +2,7 @@ Implements the Distutils 'build_scripts' command.""" -__revision__ = "$Id: build_scripts.py 77704 2010-01-23 09:23:15Z tarek.ziade $" +__revision__ = "$Id$" import os, re from stat import ST_MODE diff --git a/lib-python/2.7.0/distutils/command/check.py b/lib-python/2.7/distutils/command/check.py rename from lib-python/2.7.0/distutils/command/check.py rename to lib-python/2.7/distutils/command/check.py --- a/lib-python/2.7.0/distutils/command/check.py +++ b/lib-python/2.7/distutils/command/check.py @@ -2,7 +2,7 @@ Implements the Distutils 'check' command. """ -__revision__ = "$Id: check.py 75266 2009-10-05 22:32:48Z andrew.kuchling $" +__revision__ = "$Id$" from distutils.core import Command from distutils.errors import DistutilsSetupError diff --git a/lib-python/2.7.0/distutils/command/clean.py b/lib-python/2.7/distutils/command/clean.py rename from lib-python/2.7.0/distutils/command/clean.py rename to lib-python/2.7/distutils/command/clean.py --- a/lib-python/2.7.0/distutils/command/clean.py +++ b/lib-python/2.7/distutils/command/clean.py @@ -4,7 +4,7 @@ # contributed by Bastian Kleineidam , added 2000-03-18 -__revision__ = "$Id: clean.py 70886 2009-03-31 20:50:59Z tarek.ziade $" +__revision__ = "$Id$" import os from distutils.core import Command diff --git a/lib-python/2.7.0/distutils/command/command_template b/lib-python/2.7/distutils/command/command_template rename from lib-python/2.7.0/distutils/command/command_template rename to lib-python/2.7/distutils/command/command_template diff --git a/lib-python/2.7.0/distutils/command/config.py b/lib-python/2.7/distutils/command/config.py rename from lib-python/2.7.0/distutils/command/config.py rename to lib-python/2.7/distutils/command/config.py --- a/lib-python/2.7.0/distutils/command/config.py +++ b/lib-python/2.7/distutils/command/config.py @@ -9,7 +9,7 @@ this header file lives". """ -__revision__ = "$Id: config.py 77704 2010-01-23 09:23:15Z tarek.ziade $" +__revision__ = "$Id$" import os import re diff --git a/lib-python/2.7.0/distutils/command/install.py b/lib-python/2.7/distutils/command/install.py rename from lib-python/2.7.0/distutils/command/install.py rename to lib-python/2.7/distutils/command/install.py --- a/lib-python/2.7.0/distutils/command/install.py +++ b/lib-python/2.7/distutils/command/install.py @@ -6,7 +6,7 @@ # This module should be kept compatible with Python 2.1. -__revision__ = "$Id: install.py 80804 2010-05-05 19:09:31Z ronald.oussoren $" +__revision__ = "$Id$" import sys, os, string from types import * diff --git a/lib-python/2.7.0/distutils/command/install_data.py b/lib-python/2.7/distutils/command/install_data.py rename from lib-python/2.7.0/distutils/command/install_data.py rename to lib-python/2.7/distutils/command/install_data.py --- a/lib-python/2.7.0/distutils/command/install_data.py +++ b/lib-python/2.7/distutils/command/install_data.py @@ -5,7 +5,7 @@ # contributed by Bastian Kleineidam -__revision__ = "$Id: install_data.py 76849 2009-12-15 06:29:19Z tarek.ziade $" +__revision__ = "$Id$" import os from distutils.core import Command diff --git a/lib-python/2.7.0/distutils/command/install_egg_info.py b/lib-python/2.7/distutils/command/install_egg_info.py rename from lib-python/2.7.0/distutils/command/install_egg_info.py rename to lib-python/2.7/distutils/command/install_egg_info.py diff --git a/lib-python/2.7.0/distutils/command/install_headers.py b/lib-python/2.7/distutils/command/install_headers.py rename from lib-python/2.7.0/distutils/command/install_headers.py rename to lib-python/2.7/distutils/command/install_headers.py --- a/lib-python/2.7.0/distutils/command/install_headers.py +++ b/lib-python/2.7/distutils/command/install_headers.py @@ -3,7 +3,7 @@ Implements the Distutils 'install_headers' command, to install C/C++ header files to the Python include directory.""" -__revision__ = "$Id: install_headers.py 70891 2009-03-31 20:55:21Z tarek.ziade $" +__revision__ = "$Id$" from distutils.core import Command diff --git a/lib-python/2.7.0/distutils/command/install_lib.py b/lib-python/2.7/distutils/command/install_lib.py rename from lib-python/2.7.0/distutils/command/install_lib.py rename to lib-python/2.7/distutils/command/install_lib.py --- a/lib-python/2.7.0/distutils/command/install_lib.py +++ b/lib-python/2.7/distutils/command/install_lib.py @@ -3,7 +3,7 @@ Implements the Distutils 'install_lib' command (install all Python modules).""" -__revision__ = "$Id: install_lib.py 75671 2009-10-24 15:51:30Z tarek.ziade $" +__revision__ = "$Id$" import os import sys diff --git a/lib-python/2.7.0/distutils/command/install_scripts.py b/lib-python/2.7/distutils/command/install_scripts.py rename from lib-python/2.7.0/distutils/command/install_scripts.py rename to lib-python/2.7/distutils/command/install_scripts.py --- a/lib-python/2.7.0/distutils/command/install_scripts.py +++ b/lib-python/2.7/distutils/command/install_scripts.py @@ -5,7 +5,7 @@ # contributed by Bastian Kleineidam -__revision__ = "$Id: install_scripts.py 68943 2009-01-25 22:09:10Z tarek.ziade $" +__revision__ = "$Id$" import os from distutils.core import Command diff --git a/lib-python/2.7.0/distutils/command/register.py b/lib-python/2.7/distutils/command/register.py rename from lib-python/2.7.0/distutils/command/register.py rename to lib-python/2.7/distutils/command/register.py --- a/lib-python/2.7.0/distutils/command/register.py +++ b/lib-python/2.7/distutils/command/register.py @@ -5,7 +5,7 @@ # created 2002/10/21, Richard Jones -__revision__ = "$Id: register.py 77717 2010-01-24 00:33:32Z tarek.ziade $" +__revision__ = "$Id$" import urllib2 import getpass diff --git a/lib-python/2.7.0/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py rename from lib-python/2.7.0/distutils/command/sdist.py rename to lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7.0/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -2,7 +2,7 @@ Implements the Distutils 'sdist' command (create a source distribution).""" -__revision__ = "$Id: sdist.py 84713 2010-09-11 15:31:13Z eric.araujo $" +__revision__ = "$Id$" import os import string diff --git a/lib-python/2.7.0/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py rename from lib-python/2.7.0/distutils/command/upload.py rename to lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7.0/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -79,7 +79,11 @@ # Fill in the data - send all the meta-data in case we need to # register a new release - content = open(filename,'rb').read() + f = open(filename,'rb') + try: + content = f.read() + finally: + f.close() meta = self.distribution.metadata data = { # action diff --git a/lib-python/2.7.0/distutils/command/wininst-6.0.exe b/lib-python/2.7/distutils/command/wininst-6.0.exe rename from lib-python/2.7.0/distutils/command/wininst-6.0.exe rename to lib-python/2.7/distutils/command/wininst-6.0.exe diff --git a/lib-python/2.7.0/distutils/command/wininst-7.1.exe b/lib-python/2.7/distutils/command/wininst-7.1.exe rename from lib-python/2.7.0/distutils/command/wininst-7.1.exe rename to lib-python/2.7/distutils/command/wininst-7.1.exe diff --git a/lib-python/2.7.0/distutils/command/wininst-8.0.exe b/lib-python/2.7/distutils/command/wininst-8.0.exe rename from lib-python/2.7.0/distutils/command/wininst-8.0.exe rename to lib-python/2.7/distutils/command/wininst-8.0.exe diff --git a/lib-python/2.7.0/distutils/command/wininst-9.0-amd64.exe b/lib-python/2.7/distutils/command/wininst-9.0-amd64.exe rename from lib-python/2.7.0/distutils/command/wininst-9.0-amd64.exe rename to lib-python/2.7/distutils/command/wininst-9.0-amd64.exe diff --git a/lib-python/2.7.0/distutils/command/wininst-9.0.exe b/lib-python/2.7/distutils/command/wininst-9.0.exe rename from lib-python/2.7.0/distutils/command/wininst-9.0.exe rename to lib-python/2.7/distutils/command/wininst-9.0.exe diff --git a/lib-python/2.7.0/distutils/config.py b/lib-python/2.7/distutils/config.py rename from lib-python/2.7.0/distutils/config.py rename to lib-python/2.7/distutils/config.py diff --git a/lib-python/2.7.0/distutils/core.py b/lib-python/2.7/distutils/core.py rename from lib-python/2.7.0/distutils/core.py rename to lib-python/2.7/distutils/core.py --- a/lib-python/2.7.0/distutils/core.py +++ b/lib-python/2.7/distutils/core.py @@ -6,7 +6,7 @@ really defined in distutils.dist and distutils.cmd. """ -__revision__ = "$Id: core.py 77704 2010-01-23 09:23:15Z tarek.ziade $" +__revision__ = "$Id$" import sys import os @@ -216,7 +216,11 @@ sys.argv[0] = script_name if script_args is not None: sys.argv[1:] = script_args - exec open(script_name, 'r').read() in g, l + f = open(script_name) + try: + exec f.read() in g, l + finally: + f.close() finally: sys.argv = save_argv _setup_stop_after = None diff --git a/lib-python/2.7.0/distutils/cygwinccompiler.py b/lib-python/2.7/distutils/cygwinccompiler.py rename from lib-python/2.7.0/distutils/cygwinccompiler.py rename to lib-python/2.7/distutils/cygwinccompiler.py --- a/lib-python/2.7.0/distutils/cygwinccompiler.py +++ b/lib-python/2.7/distutils/cygwinccompiler.py @@ -47,7 +47,7 @@ # This module should be kept compatible with Python 2.1. -__revision__ = "$Id: cygwinccompiler.py 78666 2010-03-05 00:16:02Z tarek.ziade $" +__revision__ = "$Id$" import os,sys,copy from distutils.ccompiler import gen_preprocess_options, gen_lib_options @@ -382,8 +382,10 @@ # It would probably better to read single lines to search. # But we do this only once, and it is fast enough f = open(fn) - s = f.read() - f.close() + try: + s = f.read() + finally: + f.close() except IOError, exc: # if we can't read this file, we cannot say it is wrong diff --git a/lib-python/2.7.0/distutils/debug.py b/lib-python/2.7/distutils/debug.py rename from lib-python/2.7.0/distutils/debug.py rename to lib-python/2.7/distutils/debug.py --- a/lib-python/2.7.0/distutils/debug.py +++ b/lib-python/2.7/distutils/debug.py @@ -1,6 +1,6 @@ import os -__revision__ = "$Id: debug.py 68943 2009-01-25 22:09:10Z tarek.ziade $" +__revision__ = "$Id$" # If DISTUTILS_DEBUG is anything other than the empty string, we run in # debug mode. diff --git a/lib-python/2.7.0/distutils/dep_util.py b/lib-python/2.7/distutils/dep_util.py rename from lib-python/2.7.0/distutils/dep_util.py rename to lib-python/2.7/distutils/dep_util.py --- a/lib-python/2.7.0/distutils/dep_util.py +++ b/lib-python/2.7/distutils/dep_util.py @@ -4,7 +4,7 @@ and groups of files; also, function based entirely on such timestamp dependency analysis.""" -__revision__ = "$Id: dep_util.py 76746 2009-12-10 15:29:03Z tarek.ziade $" +__revision__ = "$Id$" import os from distutils.errors import DistutilsFileError diff --git a/lib-python/2.7.0/distutils/dir_util.py b/lib-python/2.7/distutils/dir_util.py rename from lib-python/2.7.0/distutils/dir_util.py rename to lib-python/2.7/distutils/dir_util.py --- a/lib-python/2.7.0/distutils/dir_util.py +++ b/lib-python/2.7/distutils/dir_util.py @@ -2,9 +2,10 @@ Utility functions for manipulating directories and directory trees.""" -__revision__ = "$Id: dir_util.py 84862 2010-09-17 16:40:01Z senthil.kumaran $" +__revision__ = "$Id$" import os +import errno from distutils.errors import DistutilsFileError, DistutilsInternalError from distutils import log @@ -69,10 +70,11 @@ if not dry_run: try: os.mkdir(head, mode) - created_dirs.append(head) except OSError, exc: - raise DistutilsFileError, \ - "could not create '%s': %s" % (head, exc[-1]) + if not (exc.errno == errno.EEXIST and os.path.isdir(head)): + raise DistutilsFileError( + "could not create '%s': %s" % (head, exc.args[-1])) + created_dirs.append(head) _path_created[abs_head] = 1 return created_dirs diff --git a/lib-python/2.7.0/distutils/dist.py b/lib-python/2.7/distutils/dist.py rename from lib-python/2.7.0/distutils/dist.py rename to lib-python/2.7/distutils/dist.py --- a/lib-python/2.7.0/distutils/dist.py +++ b/lib-python/2.7/distutils/dist.py @@ -4,7 +4,7 @@ being built/installed/distributed. """ -__revision__ = "$Id: dist.py 77717 2010-01-24 00:33:32Z tarek.ziade $" +__revision__ = "$Id$" import sys, os, re from email import message_from_file @@ -1101,9 +1101,11 @@ def write_pkg_info(self, base_dir): """Write the PKG-INFO file into the release tree. """ - pkg_info = open( os.path.join(base_dir, 'PKG-INFO'), 'w') - self.write_pkg_file(pkg_info) - pkg_info.close() + pkg_info = open(os.path.join(base_dir, 'PKG-INFO'), 'w') + try: + self.write_pkg_file(pkg_info) + finally: + pkg_info.close() def write_pkg_file(self, file): """Write the PKG-INFO format data to a file object. diff --git a/lib-python/2.7.0/distutils/emxccompiler.py b/lib-python/2.7/distutils/emxccompiler.py rename from lib-python/2.7.0/distutils/emxccompiler.py rename to lib-python/2.7/distutils/emxccompiler.py --- a/lib-python/2.7.0/distutils/emxccompiler.py +++ b/lib-python/2.7/distutils/emxccompiler.py @@ -19,7 +19,7 @@ # # * EMX gcc 2.81/EMX 0.9d fix03 -__revision__ = "$Id: emxccompiler.py 78666 2010-03-05 00:16:02Z tarek.ziade $" +__revision__ = "$Id$" import os,sys,copy from distutils.ccompiler import gen_preprocess_options, gen_lib_options @@ -272,8 +272,10 @@ # It would probably better to read single lines to search. # But we do this only once, and it is fast enough f = open(fn) - s = f.read() - f.close() + try: + s = f.read() + finally: + f.close() except IOError, exc: # if we can't read this file, we cannot say it is wrong @@ -300,8 +302,10 @@ gcc_exe = find_executable('gcc') if gcc_exe: out = os.popen(gcc_exe + ' -dumpversion','r') - out_string = out.read() - out.close() + try: + out_string = out.read() + finally: + out.close() result = re.search('(\d+\.\d+\.\d+)',out_string) if result: gcc_version = StrictVersion(result.group(1)) diff --git a/lib-python/2.7.0/distutils/errors.py b/lib-python/2.7/distutils/errors.py rename from lib-python/2.7.0/distutils/errors.py rename to lib-python/2.7/distutils/errors.py --- a/lib-python/2.7.0/distutils/errors.py +++ b/lib-python/2.7/distutils/errors.py @@ -8,7 +8,7 @@ This module is safe to use in "from ... import *" mode; it only exports symbols whose names start with "Distutils" and end with "Error".""" -__revision__ = "$Id: errors.py 75901 2009-10-28 06:45:18Z tarek.ziade $" +__revision__ = "$Id$" class DistutilsError(Exception): """The root of all Distutils evil.""" diff --git a/lib-python/2.7.0/distutils/extension.py b/lib-python/2.7/distutils/extension.py rename from lib-python/2.7.0/distutils/extension.py rename to lib-python/2.7/distutils/extension.py --- a/lib-python/2.7.0/distutils/extension.py +++ b/lib-python/2.7/distutils/extension.py @@ -3,7 +3,7 @@ Provides the Extension class, used to describe C/C++ extension modules in setup scripts.""" -__revision__ = "$Id: extension.py 78666 2010-03-05 00:16:02Z tarek.ziade $" +__revision__ = "$Id$" import os, string, sys from types import * @@ -150,87 +150,96 @@ file = TextFile(filename, strip_comments=1, skip_blanks=1, join_lines=1, lstrip_ws=1, rstrip_ws=1) - extensions = [] + try: + extensions = [] - while 1: - line = file.readline() - if line is None: # eof - break - if _variable_rx.match(line): # VAR=VALUE, handled in first pass - continue - - if line[0] == line[-1] == "*": - file.warn("'%s' lines not handled yet" % line) - continue - - #print "original line: " + line - line = expand_makefile_vars(line, vars) - words = split_quoted(line) - #print "expanded line: " + line - - # NB. this parses a slightly different syntax than the old - # makesetup script: here, there must be exactly one extension per - # line, and it must be the first word of the line. I have no idea - # why the old syntax supported multiple extensions per line, as - # they all wind up being the same. - - module = words[0] - ext = Extension(module, []) - append_next_word = None - - for word in words[1:]: - if append_next_word is not None: - append_next_word.append(word) - append_next_word = None + while 1: + line = file.readline() + if line is None: # eof + break + if _variable_rx.match(line): # VAR=VALUE, handled in first pass continue - suffix = os.path.splitext(word)[1] - switch = word[0:2] ; value = word[2:] + if line[0] == line[-1] == "*": + file.warn("'%s' lines not handled yet" % line) + continue - if suffix in (".c", ".cc", ".cpp", ".cxx", ".c++", ".m", ".mm"): - # hmm, should we do something about C vs. C++ sources? - # or leave it up to the CCompiler implementation to - # worry about? - ext.sources.append(word) - elif switch == "-I": - ext.include_dirs.append(value) - elif switch == "-D": - equals = string.find(value, "=") - if equals == -1: # bare "-DFOO" -- no value - ext.define_macros.append((value, None)) - else: # "-DFOO=blah" - ext.define_macros.append((value[0:equals], - value[equals+2:])) - elif switch == "-U": - ext.undef_macros.append(value) - elif switch == "-C": # only here 'cause makesetup has it! - ext.extra_compile_args.append(word) - elif switch == "-l": - ext.libraries.append(value) - elif switch == "-L": - ext.library_dirs.append(value) - elif switch == "-R": - ext.runtime_library_dirs.append(value) - elif word == "-rpath": - append_next_word = ext.runtime_library_dirs - elif word == "-Xlinker": - append_next_word = ext.extra_link_args - elif word == "-Xcompiler": - append_next_word = ext.extra_compile_args - elif switch == "-u": - ext.extra_link_args.append(word) - if not value: + #print "original line: " + line + line = expand_makefile_vars(line, vars) + words = split_quoted(line) + #print "expanded line: " + line + + # NB. this parses a slightly different syntax than the old + # makesetup script: here, there must be exactly one extension per + # line, and it must be the first word of the line. I have no idea + # why the old syntax supported multiple extensions per line, as + # they all wind up being the same. + + module = words[0] + ext = Extension(module, []) + append_next_word = None + + for word in words[1:]: + if append_next_word is not None: + append_next_word.append(word) + append_next_word = None + continue + + suffix = os.path.splitext(word)[1] + switch = word[0:2] ; value = word[2:] + + if suffix in (".c", ".cc", ".cpp", ".cxx", ".c++", ".m", ".mm"): + # hmm, should we do something about C vs. C++ sources? + # or leave it up to the CCompiler implementation to + # worry about? + ext.sources.append(word) + elif switch == "-I": + ext.include_dirs.append(value) + elif switch == "-D": + equals = string.find(value, "=") + if equals == -1: # bare "-DFOO" -- no value + ext.define_macros.append((value, None)) + else: # "-DFOO=blah" + ext.define_macros.append((value[0:equals], + value[equals+2:])) + elif switch == "-U": + ext.undef_macros.append(value) + elif switch == "-C": # only here 'cause makesetup has it! + ext.extra_compile_args.append(word) + elif switch == "-l": + ext.libraries.append(value) + elif switch == "-L": + ext.library_dirs.append(value) + elif switch == "-R": + ext.runtime_library_dirs.append(value) + elif word == "-rpath": + append_next_word = ext.runtime_library_dirs + elif word == "-Xlinker": append_next_word = ext.extra_link_args - elif suffix in (".a", ".so", ".sl", ".o", ".dylib"): - # NB. a really faithful emulation of makesetup would - # append a .o file to extra_objects only if it - # had a slash in it; otherwise, it would s/.o/.c/ - # and append it to sources. Hmmmm. - ext.extra_objects.append(word) - else: - file.warn("unrecognized argument '%s'" % word) + elif word == "-Xcompiler": + append_next_word = ext.extra_compile_args + elif switch == "-u": + ext.extra_link_args.append(word) + if not value: + append_next_word = ext.extra_link_args + elif word == "-Xcompiler": + append_next_word = ext.extra_compile_args + elif switch == "-u": + ext.extra_link_args.append(word) + if not value: + append_next_word = ext.extra_link_args + elif suffix in (".a", ".so", ".sl", ".o", ".dylib"): + # NB. a really faithful emulation of makesetup would + # append a .o file to extra_objects only if it + # had a slash in it; otherwise, it would s/.o/.c/ + # and append it to sources. Hmmmm. + ext.extra_objects.append(word) + else: + file.warn("unrecognized argument '%s'" % word) - extensions.append(ext) + extensions.append(ext) + finally: + file.close() #print "module:", module #print "source files:", source_files diff --git a/lib-python/2.7.0/distutils/fancy_getopt.py b/lib-python/2.7/distutils/fancy_getopt.py rename from lib-python/2.7.0/distutils/fancy_getopt.py rename to lib-python/2.7/distutils/fancy_getopt.py --- a/lib-python/2.7.0/distutils/fancy_getopt.py +++ b/lib-python/2.7/distutils/fancy_getopt.py @@ -8,7 +8,7 @@ * options set attributes of a passed-in object """ -__revision__ = "$Id: fancy_getopt.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import sys import string diff --git a/lib-python/2.7.0/distutils/file_util.py b/lib-python/2.7/distutils/file_util.py rename from lib-python/2.7.0/distutils/file_util.py rename to lib-python/2.7/distutils/file_util.py --- a/lib-python/2.7.0/distutils/file_util.py +++ b/lib-python/2.7/distutils/file_util.py @@ -3,7 +3,7 @@ Utility functions for operating on single files. """ -__revision__ = "$Id: file_util.py 80804 2010-05-05 19:09:31Z ronald.oussoren $" +__revision__ = "$Id$" import os from distutils.errors import DistutilsFileError @@ -224,6 +224,8 @@ sequence of strings without line terminators) to it. """ f = open(filename, "w") - for line in contents: - f.write(line + "\n") - f.close() + try: + for line in contents: + f.write(line + "\n") + finally: + f.close() diff --git a/lib-python/2.7.0/distutils/filelist.py b/lib-python/2.7/distutils/filelist.py rename from lib-python/2.7.0/distutils/filelist.py rename to lib-python/2.7/distutils/filelist.py --- a/lib-python/2.7.0/distutils/filelist.py +++ b/lib-python/2.7/distutils/filelist.py @@ -4,7 +4,7 @@ and building lists of files. """ -__revision__ = "$Id: filelist.py 75196 2009-10-03 00:07:35Z tarek.ziade $" +__revision__ = "$Id$" import os, re import fnmatch diff --git a/lib-python/2.7.0/distutils/log.py b/lib-python/2.7/distutils/log.py rename from lib-python/2.7.0/distutils/log.py rename to lib-python/2.7/distutils/log.py diff --git a/lib-python/2.7.0/distutils/msvc9compiler.py b/lib-python/2.7/distutils/msvc9compiler.py rename from lib-python/2.7.0/distutils/msvc9compiler.py rename to lib-python/2.7/distutils/msvc9compiler.py --- a/lib-python/2.7.0/distutils/msvc9compiler.py +++ b/lib-python/2.7/distutils/msvc9compiler.py @@ -12,7 +12,7 @@ # finding DevStudio (through the registry) # ported to VS2005 and VS 2008 by Christian Heimes -__revision__ = "$Id: msvc9compiler.py 82130 2010-06-21 15:27:46Z benjamin.peterson $" +__revision__ = "$Id$" import os import subprocess @@ -273,23 +273,27 @@ popen = subprocess.Popen('"%s" %s & set' % (vcvarsall, arch), stdout=subprocess.PIPE, stderr=subprocess.PIPE) + try: + stdout, stderr = popen.communicate() + if popen.wait() != 0: + raise DistutilsPlatformError(stderr.decode("mbcs")) - stdout, stderr = popen.communicate() - if popen.wait() != 0: - raise DistutilsPlatformError(stderr.decode("mbcs")) + stdout = stdout.decode("mbcs") + for line in stdout.split("\n"): + line = Reg.convert_mbcs(line) + if '=' not in line: + continue + line = line.strip() + key, value = line.split('=', 1) + key = key.lower() + if key in interesting: + if value.endswith(os.pathsep): + value = value[:-1] + result[key] = removeDuplicates(value) - stdout = stdout.decode("mbcs") - for line in stdout.split("\n"): - line = Reg.convert_mbcs(line) - if '=' not in line: - continue - line = line.strip() - key, value = line.split('=', 1) - key = key.lower() - if key in interesting: - if value.endswith(os.pathsep): - value = value[:-1] - result[key] = removeDuplicates(value) + finally: + popen.stdout.close() + popen.stderr.close() if len(result) != len(interesting): raise ValueError(str(list(result.keys()))) diff --git a/lib-python/2.7.0/distutils/msvccompiler.py b/lib-python/2.7/distutils/msvccompiler.py rename from lib-python/2.7.0/distutils/msvccompiler.py rename to lib-python/2.7/distutils/msvccompiler.py --- a/lib-python/2.7.0/distutils/msvccompiler.py +++ b/lib-python/2.7/distutils/msvccompiler.py @@ -8,7 +8,7 @@ # hacked by Robin Becker and Thomas Heller to do a better job of # finding DevStudio (through the registry) -__revision__ = "$Id: msvccompiler.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import sys import os diff --git a/lib-python/2.7.0/distutils/spawn.py b/lib-python/2.7/distutils/spawn.py rename from lib-python/2.7.0/distutils/spawn.py rename to lib-python/2.7/distutils/spawn.py --- a/lib-python/2.7.0/distutils/spawn.py +++ b/lib-python/2.7/distutils/spawn.py @@ -6,7 +6,7 @@ executable name. """ -__revision__ = "$Id: spawn.py 73147 2009-06-02 15:58:43Z tarek.ziade $" +__revision__ = "$Id$" import sys import os diff --git a/lib-python/2.7.0/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py rename from lib-python/2.7.0/distutils/sysconfig.py rename to lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7.0/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -9,7 +9,7 @@ Email: """ -__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $" +__revision__ = "$Id$" import os import re @@ -453,32 +453,6 @@ _config_vars = g -def _init_mac(): - """Initialize the module as appropriate for Macintosh systems""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - import MacOS - if not hasattr(MacOS, 'runtimemodel'): - g['SO'] = '.ppc.slb' - else: - g['SO'] = '.%s.slb' % MacOS.runtimemodel - - # XXX are these used anywhere? - g['install_lib'] = os.path.join(EXEC_PREFIX, "Lib") - g['install_platlib'] = os.path.join(EXEC_PREFIX, "Mac", "Lib") - - # These are used by the extension module build - g['srcdir'] = ':' - global _config_vars - _config_vars = g - - def _init_os2(): """Initialize the module as appropriate for OS/2""" g = {} diff --git a/lib-python/2.7.0/distutils/tests/Setup.sample b/lib-python/2.7/distutils/tests/Setup.sample rename from lib-python/2.7.0/distutils/tests/Setup.sample rename to lib-python/2.7/distutils/tests/Setup.sample diff --git a/lib-python/2.7.0/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py rename from lib-python/2.7.0/distutils/tests/__init__.py rename to lib-python/2.7/distutils/tests/__init__.py diff --git a/lib-python/2.7.0/distutils/tests/setuptools_build_ext.py b/lib-python/2.7/distutils/tests/setuptools_build_ext.py rename from lib-python/2.7.0/distutils/tests/setuptools_build_ext.py rename to lib-python/2.7/distutils/tests/setuptools_build_ext.py diff --git a/lib-python/2.7.0/distutils/tests/setuptools_extension.py b/lib-python/2.7/distutils/tests/setuptools_extension.py rename from lib-python/2.7.0/distutils/tests/setuptools_extension.py rename to lib-python/2.7/distutils/tests/setuptools_extension.py diff --git a/lib-python/2.7.0/distutils/tests/support.py b/lib-python/2.7/distutils/tests/support.py rename from lib-python/2.7.0/distutils/tests/support.py rename to lib-python/2.7/distutils/tests/support.py diff --git a/lib-python/2.7.0/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py rename from lib-python/2.7.0/distutils/tests/test_archive_util.py rename to lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7.0/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -1,5 +1,5 @@ """Tests for distutils.archive_util.""" -__revision__ = "$Id: test_archive_util.py 75659 2009-10-24 13:29:44Z tarek.ziade $" +__revision__ = "$Id$" import unittest import os @@ -129,7 +129,7 @@ self.assertTrue(os.path.exists(tarball2)) # let's compare both tarballs - self.assertEquals(self._tarinfo(tarball), self._tarinfo(tarball2)) + self.assertEqual(self._tarinfo(tarball), self._tarinfo(tarball2)) # trying an uncompressed one base_name = os.path.join(tmpdir2, 'archive') @@ -169,7 +169,7 @@ os.chdir(old_dir) tarball = base_name + '.tar.Z' self.assertTrue(os.path.exists(tarball)) - self.assertEquals(len(w.warnings), 1) + self.assertEqual(len(w.warnings), 1) # same test with dry_run os.remove(tarball) @@ -183,7 +183,7 @@ finally: os.chdir(old_dir) self.assertTrue(not os.path.exists(tarball)) - self.assertEquals(len(w.warnings), 1) + self.assertEqual(len(w.warnings), 1) @unittest.skipUnless(zlib, "Requires zlib") @unittest.skipUnless(ZIP_SUPPORT, 'Need zip support to run') @@ -201,9 +201,9 @@ tarball = base_name + '.zip' def test_check_archive_formats(self): - self.assertEquals(check_archive_formats(['gztar', 'xxx', 'zip']), - 'xxx') - self.assertEquals(check_archive_formats(['gztar', 'zip']), None) + self.assertEqual(check_archive_formats(['gztar', 'xxx', 'zip']), + 'xxx') + self.assertEqual(check_archive_formats(['gztar', 'zip']), None) def test_make_archive(self): tmpdir = self.mkdtemp() @@ -258,8 +258,8 @@ archive = tarfile.open(archive_name) try: for member in archive.getmembers(): - self.assertEquals(member.uid, 0) - self.assertEquals(member.gid, 0) + self.assertEqual(member.uid, 0) + self.assertEqual(member.gid, 0) finally: archive.close() @@ -273,7 +273,7 @@ make_archive('xxx', 'xxx', root_dir=self.mkdtemp()) except: pass - self.assertEquals(os.getcwd(), current_dir) + self.assertEqual(os.getcwd(), current_dir) finally: del ARCHIVE_FORMATS['xxx'] diff --git a/lib-python/2.7.0/distutils/tests/test_bdist.py b/lib-python/2.7/distutils/tests/test_bdist.py rename from lib-python/2.7.0/distutils/tests/test_bdist.py rename to lib-python/2.7/distutils/tests/test_bdist.py --- a/lib-python/2.7.0/distutils/tests/test_bdist.py +++ b/lib-python/2.7/distutils/tests/test_bdist.py @@ -25,7 +25,7 @@ cmd = bdist(dist) cmd.formats = ['msi'] cmd.ensure_finalized() - self.assertEquals(cmd.formats, ['msi']) + self.assertEqual(cmd.formats, ['msi']) # what format bdist offers ? # XXX an explicit list in bdist is @@ -36,7 +36,7 @@ formats.sort() founded = cmd.format_command.keys() founded.sort() - self.assertEquals(founded, formats) + self.assertEqual(founded, formats) def test_suite(): return unittest.makeSuite(BuildTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_bdist_dumb.py b/lib-python/2.7/distutils/tests/test_bdist_dumb.py rename from lib-python/2.7.0/distutils/tests/test_bdist_dumb.py rename to lib-python/2.7/distutils/tests/test_bdist_dumb.py --- a/lib-python/2.7.0/distutils/tests/test_bdist_dumb.py +++ b/lib-python/2.7/distutils/tests/test_bdist_dumb.py @@ -78,7 +78,7 @@ base = base.replace(':', '-') wanted = ['%s.zip' % base] - self.assertEquals(dist_created, wanted) + self.assertEqual(dist_created, wanted) # now let's check what we have in the zip file # XXX to be done @@ -87,16 +87,16 @@ pkg_dir, dist = self.create_dist() os.chdir(pkg_dir) cmd = bdist_dumb(dist) - self.assertEquals(cmd.bdist_dir, None) + self.assertEqual(cmd.bdist_dir, None) cmd.finalize_options() # bdist_dir is initialized to bdist_base/dumb if not set base = cmd.get_finalized_command('bdist').bdist_base - self.assertEquals(cmd.bdist_dir, os.path.join(base, 'dumb')) + self.assertEqual(cmd.bdist_dir, os.path.join(base, 'dumb')) # the format is set to a default value depending on the os.name default = cmd.default_format[os.name] - self.assertEquals(cmd.format, default) + self.assertEqual(cmd.format, default) def test_suite(): return unittest.makeSuite(BuildDumbTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py rename from lib-python/2.7.0/distutils/tests/test_bdist_msi.py rename to lib-python/2.7/distutils/tests/test_bdist_msi.py diff --git a/lib-python/2.7.0/distutils/tests/test_bdist_rpm.py b/lib-python/2.7/distutils/tests/test_bdist_rpm.py rename from lib-python/2.7.0/distutils/tests/test_bdist_rpm.py rename to lib-python/2.7/distutils/tests/test_bdist_rpm.py diff --git a/lib-python/2.7.0/distutils/tests/test_bdist_wininst.py b/lib-python/2.7/distutils/tests/test_bdist_wininst.py rename from lib-python/2.7.0/distutils/tests/test_bdist_wininst.py rename to lib-python/2.7/distutils/tests/test_bdist_wininst.py diff --git a/lib-python/2.7.0/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py rename from lib-python/2.7.0/distutils/tests/test_build.py rename to lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7.0/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -17,11 +17,11 @@ cmd.finalize_options() # if not specified, plat_name gets the current platform - self.assertEquals(cmd.plat_name, get_platform()) + self.assertEqual(cmd.plat_name, get_platform()) # build_purelib is build + lib wanted = os.path.join(cmd.build_base, 'lib') - self.assertEquals(cmd.build_purelib, wanted) + self.assertEqual(cmd.build_purelib, wanted) # build_platlib is 'build/lib.platform-x.x[-pydebug]' # examples: @@ -31,21 +31,21 @@ self.assertTrue(cmd.build_platlib.endswith('-pydebug')) plat_spec += '-pydebug' wanted = os.path.join(cmd.build_base, 'lib' + plat_spec) - self.assertEquals(cmd.build_platlib, wanted) + self.assertEqual(cmd.build_platlib, wanted) # by default, build_lib = build_purelib - self.assertEquals(cmd.build_lib, cmd.build_purelib) + self.assertEqual(cmd.build_lib, cmd.build_purelib) # build_temp is build/temp. wanted = os.path.join(cmd.build_base, 'temp' + plat_spec) - self.assertEquals(cmd.build_temp, wanted) + self.assertEqual(cmd.build_temp, wanted) # build_scripts is build/scripts-x.x wanted = os.path.join(cmd.build_base, 'scripts-' + sys.version[0:3]) - self.assertEquals(cmd.build_scripts, wanted) + self.assertEqual(cmd.build_scripts, wanted) # executable is os.path.normpath(sys.executable) - self.assertEquals(cmd.executable, os.path.normpath(sys.executable)) + self.assertEqual(cmd.executable, os.path.normpath(sys.executable)) def test_suite(): return unittest.makeSuite(BuildTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py rename from lib-python/2.7.0/distutils/tests/test_build_clib.py rename to lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7.0/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -55,14 +55,14 @@ self.assertRaises(DistutilsSetupError, cmd.get_source_files) cmd.libraries = [('name', {'sources': ['a', 'b']})] - self.assertEquals(cmd.get_source_files(), ['a', 'b']) + self.assertEqual(cmd.get_source_files(), ['a', 'b']) cmd.libraries = [('name', {'sources': ('a', 'b')})] - self.assertEquals(cmd.get_source_files(), ['a', 'b']) + self.assertEqual(cmd.get_source_files(), ['a', 'b']) cmd.libraries = [('name', {'sources': ('a', 'b')}), ('name2', {'sources': ['c', 'd']})] - self.assertEquals(cmd.get_source_files(), ['a', 'b', 'c', 'd']) + self.assertEqual(cmd.get_source_files(), ['a', 'b', 'c', 'd']) def test_build_libraries(self): @@ -91,11 +91,11 @@ cmd.include_dirs = 'one-dir' cmd.finalize_options() - self.assertEquals(cmd.include_dirs, ['one-dir']) + self.assertEqual(cmd.include_dirs, ['one-dir']) cmd.include_dirs = None cmd.finalize_options() - self.assertEquals(cmd.include_dirs, []) + self.assertEqual(cmd.include_dirs, []) cmd.distribution.libraries = 'WONTWORK' self.assertRaises(DistutilsSetupError, cmd.finalize_options) diff --git a/lib-python/2.7.0/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py rename from lib-python/2.7.0/distutils/tests/test_build_ext.py rename to lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7.0/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -103,15 +103,15 @@ import xx for attr in ('error', 'foo', 'new', 'roj'): - self.assert_(hasattr(xx, attr)) + self.assertTrue(hasattr(xx, attr)) - self.assertEquals(xx.foo(2, 5), 7) - self.assertEquals(xx.foo(13,15), 28) - self.assertEquals(xx.new().demo(), None) + self.assertEqual(xx.foo(2, 5), 7) + self.assertEqual(xx.foo(13,15), 28) + self.assertEqual(xx.new().demo(), None) doc = 'This is a template module just for instruction.' - self.assertEquals(xx.__doc__, doc) - self.assert_(isinstance(xx.Null(), xx.Null)) - self.assert_(isinstance(xx.Str(), xx.Str)) + self.assertEqual(xx.__doc__, doc) + self.assertTrue(isinstance(xx.Null(), xx.Null)) + self.assertTrue(isinstance(xx.Str(), xx.Str)) def test_solaris_enable_shared(self): dist = Distribution({'name': 'xx'}) @@ -132,7 +132,7 @@ _config_vars['Py_ENABLE_SHARED'] = old_var # make sure we get some library dirs under solaris - self.assert_(len(cmd.library_dirs) > 0) + self.assertTrue(len(cmd.library_dirs) > 0) def test_finalize_options(self): # Make sure Python's include directories (for Python.h, pyconfig.h, @@ -144,31 +144,31 @@ from distutils import sysconfig py_include = sysconfig.get_python_inc() - self.assert_(py_include in cmd.include_dirs) + self.assertTrue(py_include in cmd.include_dirs) plat_py_include = sysconfig.get_python_inc(plat_specific=1) - self.assert_(plat_py_include in cmd.include_dirs) + self.assertTrue(plat_py_include in cmd.include_dirs) # make sure cmd.libraries is turned into a list # if it's a string cmd = build_ext(dist) cmd.libraries = 'my_lib' cmd.finalize_options() - self.assertEquals(cmd.libraries, ['my_lib']) + self.assertEqual(cmd.libraries, ['my_lib']) # make sure cmd.library_dirs is turned into a list # if it's a string cmd = build_ext(dist) cmd.library_dirs = 'my_lib_dir' cmd.finalize_options() - self.assert_('my_lib_dir' in cmd.library_dirs) + self.assertTrue('my_lib_dir' in cmd.library_dirs) # make sure rpath is turned into a list # if it's a list of os.pathsep's paths cmd = build_ext(dist) cmd.rpath = os.pathsep.join(['one', 'two']) cmd.finalize_options() - self.assertEquals(cmd.rpath, ['one', 'two']) + self.assertEqual(cmd.rpath, ['one', 'two']) # XXX more tests to perform for win32 @@ -177,25 +177,25 @@ cmd = build_ext(dist) cmd.define = 'one,two' cmd.finalize_options() - self.assertEquals(cmd.define, [('one', '1'), ('two', '1')]) + self.assertEqual(cmd.define, [('one', '1'), ('two', '1')]) # make sure undef is turned into a list of # strings if they are ','-separated strings cmd = build_ext(dist) cmd.undef = 'one,two' cmd.finalize_options() - self.assertEquals(cmd.undef, ['one', 'two']) + self.assertEqual(cmd.undef, ['one', 'two']) # make sure swig_opts is turned into a list cmd = build_ext(dist) cmd.swig_opts = None cmd.finalize_options() - self.assertEquals(cmd.swig_opts, []) + self.assertEqual(cmd.swig_opts, []) cmd = build_ext(dist) cmd.swig_opts = '1 2' cmd.finalize_options() - self.assertEquals(cmd.swig_opts, ['1', '2']) + self.assertEqual(cmd.swig_opts, ['1', '2']) def test_check_extensions_list(self): dist = Distribution() @@ -226,13 +226,13 @@ 'some': 'bar'})] cmd.check_extensions_list(exts) ext = exts[0] - self.assert_(isinstance(ext, Extension)) + self.assertTrue(isinstance(ext, Extension)) # check_extensions_list adds in ext the values passed # when they are in ('include_dirs', 'library_dirs', 'libraries' # 'extra_objects', 'extra_compile_args', 'extra_link_args') - self.assertEquals(ext.libraries, 'foo') - self.assert_(not hasattr(ext, 'some')) + self.assertEqual(ext.libraries, 'foo') + self.assertTrue(not hasattr(ext, 'some')) # 'macros' element of build info dict must be 1- or 2-tuple exts = [('foo.bar', {'sources': [''], 'libraries': 'foo', @@ -241,15 +241,15 @@ exts[0][1]['macros'] = [('1', '2'), ('3',)] cmd.check_extensions_list(exts) - self.assertEquals(exts[0].undef_macros, ['3']) - self.assertEquals(exts[0].define_macros, [('1', '2')]) + self.assertEqual(exts[0].undef_macros, ['3']) + self.assertEqual(exts[0].define_macros, [('1', '2')]) def test_get_source_files(self): modules = [Extension('foo', ['xxx'])] dist = Distribution({'name': 'xx', 'ext_modules': modules}) cmd = build_ext(dist) cmd.ensure_finalized() - self.assertEquals(cmd.get_source_files(), ['xxx']) + self.assertEqual(cmd.get_source_files(), ['xxx']) def test_compiler_option(self): # cmd.compiler is an option and @@ -260,7 +260,7 @@ cmd.compiler = 'unix' cmd.ensure_finalized() cmd.run() - self.assertEquals(cmd.compiler, 'unix') + self.assertEqual(cmd.compiler, 'unix') def test_get_outputs(self): tmp_dir = self.mkdtemp() @@ -272,7 +272,7 @@ cmd = build_ext(dist) self._fixup_command(cmd) cmd.ensure_finalized() - self.assertEquals(len(cmd.get_outputs()), 1) + self.assertEqual(len(cmd.get_outputs()), 1) if os.name == "nt": cmd.debug = sys.executable.endswith("_d.exe") @@ -291,20 +291,20 @@ so_file = cmd.get_outputs()[0] finally: os.chdir(old_wd) - self.assert_(os.path.exists(so_file)) - self.assertEquals(os.path.splitext(so_file)[-1], - sysconfig.get_config_var('SO')) + self.assertTrue(os.path.exists(so_file)) + self.assertEqual(os.path.splitext(so_file)[-1], + sysconfig.get_config_var('SO')) so_dir = os.path.dirname(so_file) - self.assertEquals(so_dir, other_tmp_dir) + self.assertEqual(so_dir, other_tmp_dir) cmd.compiler = None cmd.inplace = 0 cmd.run() so_file = cmd.get_outputs()[0] - self.assert_(os.path.exists(so_file)) - self.assertEquals(os.path.splitext(so_file)[-1], - sysconfig.get_config_var('SO')) + self.assertTrue(os.path.exists(so_file)) + self.assertEqual(os.path.splitext(so_file)[-1], + sysconfig.get_config_var('SO')) so_dir = os.path.dirname(so_file) - self.assertEquals(so_dir, cmd.build_lib) + self.assertEqual(so_dir, cmd.build_lib) # inplace = 0, cmd.package = 'bar' build_py = cmd.get_finalized_command('build_py') @@ -312,7 +312,7 @@ path = cmd.get_ext_fullpath('foo') # checking that the last directory is the build_dir path = os.path.split(path)[0] - self.assertEquals(path, cmd.build_lib) + self.assertEqual(path, cmd.build_lib) # inplace = 1, cmd.package = 'bar' cmd.inplace = 1 @@ -326,7 +326,7 @@ # checking that the last directory is bar path = os.path.split(path)[0] lastdir = os.path.split(path)[-1] - self.assertEquals(lastdir, 'bar') + self.assertEqual(lastdir, 'bar') def test_ext_fullpath(self): ext = sysconfig.get_config_vars()['SO'] @@ -338,14 +338,14 @@ curdir = os.getcwd() wanted = os.path.join(curdir, 'src', 'lxml', 'etree' + ext) path = cmd.get_ext_fullpath('lxml.etree') - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) # building lxml.etree not inplace cmd.inplace = 0 cmd.build_lib = os.path.join(curdir, 'tmpdir') wanted = os.path.join(curdir, 'tmpdir', 'lxml', 'etree' + ext) path = cmd.get_ext_fullpath('lxml.etree') - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) # building twisted.runner.portmap not inplace build_py = cmd.get_finalized_command('build_py') @@ -354,13 +354,13 @@ path = cmd.get_ext_fullpath('twisted.runner.portmap') wanted = os.path.join(curdir, 'tmpdir', 'twisted', 'runner', 'portmap' + ext) - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) # building twisted.runner.portmap inplace cmd.inplace = 1 path = cmd.get_ext_fullpath('twisted.runner.portmap') wanted = os.path.join(curdir, 'twisted', 'runner', 'portmap' + ext) - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) def test_build_ext_inplace(self): etree_c = os.path.join(self.tmp_dir, 'lxml.etree.c') @@ -375,7 +375,7 @@ ext = sysconfig.get_config_var("SO") wanted = os.path.join(curdir, 'src', 'lxml', 'etree' + ext) path = cmd.get_ext_fullpath('lxml.etree') - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) def test_setuptools_compat(self): import distutils.core, distutils.extension, distutils.command.build_ext @@ -400,7 +400,7 @@ ext = sysconfig.get_config_var("SO") wanted = os.path.join(curdir, 'src', 'lxml', 'etree' + ext) path = cmd.get_ext_fullpath('lxml.etree') - self.assertEquals(wanted, path) + self.assertEqual(wanted, path) finally: # restoring Distutils' Extension class otherwise its broken distutils.extension.Extension = saved_ext @@ -415,7 +415,7 @@ ext_name = os.path.join('UpdateManager', 'fdsend') ext_path = cmd.get_ext_fullpath(ext_name) wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) - self.assertEquals(ext_path, wanted) + self.assertEqual(ext_path, wanted) def test_build_ext_path_cross_platform(self): if sys.platform != 'win32': @@ -428,7 +428,7 @@ ext_name = 'UpdateManager/fdsend' ext_path = cmd.get_ext_fullpath(ext_name) wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) - self.assertEquals(ext_path, wanted) + self.assertEqual(ext_path, wanted) def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py rename from lib-python/2.7.0/distutils/tests/test_build_py.py rename to lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7.0/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -19,11 +19,15 @@ def _setup_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") - f.write("# Pretend this is a package.") - f.close() + try: + f.write("# Pretend this is a package.") + finally: + f.close() f = open(os.path.join(sources, "README.txt"), "w") - f.write("Info about this package") - f.close() + try: + f.write("Info about this package") + finally: + f.close() destination = self.mkdtemp() diff --git a/lib-python/2.7.0/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py rename from lib-python/2.7.0/distutils/tests/test_build_scripts.py rename to lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7.0/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -71,8 +71,10 @@ def write_script(self, dir, name, text): f = open(os.path.join(dir, name), "w") - f.write(text) - f.close() + try: + f.write(text) + finally: + f.close() def test_version_int(self): source = self.mkdtemp() diff --git a/lib-python/2.7.0/distutils/tests/test_ccompiler.py b/lib-python/2.7/distutils/tests/test_ccompiler.py rename from lib-python/2.7.0/distutils/tests/test_ccompiler.py rename to lib-python/2.7/distutils/tests/test_ccompiler.py --- a/lib-python/2.7.0/distutils/tests/test_ccompiler.py +++ b/lib-python/2.7/distutils/tests/test_ccompiler.py @@ -32,7 +32,7 @@ opts = gen_lib_options(compiler, libdirs, runlibdirs, libs) wanted = ['-Llib1', '-Llib2', '-cool', '-Rrunlib1', 'found', '-lname2'] - self.assertEquals(opts, wanted) + self.assertEqual(opts, wanted) def test_debug_print(self): @@ -43,14 +43,14 @@ with captured_stdout() as stdout: compiler.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), '') + self.assertEqual(stdout.read(), '') debug.DEBUG = True try: with captured_stdout() as stdout: compiler.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), 'xxx\n') + self.assertEqual(stdout.read(), 'xxx\n') finally: debug.DEBUG = False @@ -72,7 +72,7 @@ comp = compiler() customize_compiler(comp) - self.assertEquals(comp.exes['archiver'], 'my_ar -arflags') + self.assertEqual(comp.exes['archiver'], 'my_ar -arflags') def test_suite(): return unittest.makeSuite(CCompilerTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py rename from lib-python/2.7.0/distutils/tests/test_check.py rename to lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7.0/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -26,7 +26,7 @@ # by default, check is checking the metadata # should have some warnings cmd = self._run() - self.assertEquals(cmd._warnings, 2) + self.assertEqual(cmd._warnings, 2) # now let's add the required fields # and run it again, to make sure we don't get @@ -35,7 +35,7 @@ 'author_email': 'xxx', 'name': 'xxx', 'version': 'xxx'} cmd = self._run(metadata) - self.assertEquals(cmd._warnings, 0) + self.assertEqual(cmd._warnings, 0) # now with the strict mode, we should # get an error if there are missing metadata @@ -43,7 +43,7 @@ # and of course, no error when all metadata are present cmd = self._run(metadata, strict=1) - self.assertEquals(cmd._warnings, 0) + self.assertEqual(cmd._warnings, 0) def test_check_document(self): if not HAS_DOCUTILS: # won't test without docutils @@ -54,12 +54,12 @@ # let's see if it detects broken rest broken_rest = 'title\n===\n\ntest' msgs = cmd._check_rst_data(broken_rest) - self.assertEquals(len(msgs), 1) + self.assertEqual(len(msgs), 1) # and non-broken rest rest = 'title\n=====\n\ntest' msgs = cmd._check_rst_data(rest) - self.assertEquals(len(msgs), 0) + self.assertEqual(len(msgs), 0) def test_check_restructuredtext(self): if not HAS_DOCUTILS: # won't test without docutils @@ -69,7 +69,7 @@ pkg_info, dist = self.create_dist(long_description=broken_rest) cmd = check(dist) cmd.check_restructuredtext() - self.assertEquals(cmd._warnings, 1) + self.assertEqual(cmd._warnings, 1) # let's see if we have an error with strict=1 metadata = {'url': 'xxx', 'author': 'xxx', @@ -82,7 +82,7 @@ # and non-broken rest metadata['long_description'] = 'title\n=====\n\ntest' cmd = self._run(metadata, strict=1, restructuredtext=1) - self.assertEquals(cmd._warnings, 0) + self.assertEqual(cmd._warnings, 0) def test_check_all(self): diff --git a/lib-python/2.7.0/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py rename from lib-python/2.7.0/distutils/tests/test_clean.py rename to lib-python/2.7/distutils/tests/test_clean.py diff --git a/lib-python/2.7.0/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py rename from lib-python/2.7.0/distutils/tests/test_cmd.py rename to lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7.0/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -44,7 +44,7 @@ # making sure execute gets called properly def _execute(func, args, exec_msg, level): - self.assertEquals(exec_msg, 'generating out from in') + self.assertEqual(exec_msg, 'generating out from in') cmd.force = True cmd.execute = _execute cmd.make_file(infiles='in', outfile='out', func='func', args=()) @@ -63,7 +63,7 @@ wanted = ["command options for 'MyCmd':", ' option1 = 1', ' option2 = 1'] - self.assertEquals(msgs, wanted) + self.assertEqual(msgs, wanted) def test_ensure_string(self): cmd = self.cmd @@ -81,7 +81,7 @@ cmd = self.cmd cmd.option1 = 'ok,dok' cmd.ensure_string_list('option1') - self.assertEquals(cmd.option1, ['ok', 'dok']) + self.assertEqual(cmd.option1, ['ok', 'dok']) cmd.option2 = ['xxx', 'www'] cmd.ensure_string_list('option2') @@ -109,14 +109,14 @@ with captured_stdout() as stdout: cmd.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), '') + self.assertEqual(stdout.read(), '') debug.DEBUG = True try: with captured_stdout() as stdout: cmd.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), 'xxx\n') + self.assertEqual(stdout.read(), 'xxx\n') finally: debug.DEBUG = False diff --git a/lib-python/2.7.0/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py rename from lib-python/2.7.0/distutils/tests/test_config.py rename to lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7.0/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -90,7 +90,7 @@ waited = [('password', 'secret'), ('realm', 'pypi'), ('repository', 'http://pypi.python.org/pypi'), ('server', 'server1'), ('username', 'me')] - self.assertEquals(config, waited) + self.assertEqual(config, waited) # old format self.write_file(self.rc, PYPIRC_OLD) @@ -100,7 +100,7 @@ waited = [('password', 'secret'), ('realm', 'pypi'), ('repository', 'http://pypi.python.org/pypi'), ('server', 'server-login'), ('username', 'tarek')] - self.assertEquals(config, waited) + self.assertEqual(config, waited) def test_server_empty_registration(self): cmd = self._cmd(self.dist) @@ -108,8 +108,12 @@ self.assertTrue(not os.path.exists(rc)) cmd._store_pypirc('tarek', 'xxx') self.assertTrue(os.path.exists(rc)) - content = open(rc).read() - self.assertEquals(content, WANTED) + f = open(rc) + try: + content = f.read() + self.assertEqual(content, WANTED) + finally: + f.close() def test_suite(): return unittest.makeSuite(PyPIRCCommandTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py rename from lib-python/2.7.0/distutils/tests/test_config_cmd.py rename to lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7.0/distutils/tests/test_config_cmd.py +++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -34,7 +34,7 @@ f.close() dump_file(this_file, 'I am the header') - self.assertEquals(len(self._logs), numlines+1) + self.assertEqual(len(self._logs), numlines+1) def test_search_cpp(self): if sys.platform == 'win32': @@ -44,10 +44,10 @@ # simple pattern searches match = cmd.search_cpp(pattern='xxx', body='// xxx') - self.assertEquals(match, 0) + self.assertEqual(match, 0) match = cmd.search_cpp(pattern='_configtest', body='// xxx') - self.assertEquals(match, 1) + self.assertEqual(match, 1) def test_finalize_options(self): # finalize_options does a bit of transformation @@ -59,9 +59,9 @@ cmd.library_dirs = 'three%sfour' % os.pathsep cmd.ensure_finalized() - self.assertEquals(cmd.include_dirs, ['one', 'two']) - self.assertEquals(cmd.libraries, ['one']) - self.assertEquals(cmd.library_dirs, ['three', 'four']) + self.assertEqual(cmd.include_dirs, ['one', 'two']) + self.assertEqual(cmd.libraries, ['one']) + self.assertEqual(cmd.library_dirs, ['three', 'four']) def test_clean(self): # _clean removes files diff --git a/lib-python/2.7.0/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py rename from lib-python/2.7.0/distutils/tests/test_core.py rename to lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7.0/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -52,7 +52,11 @@ shutil.rmtree(path) def write_setup(self, text, path=test.test_support.TESTFN): - open(path, "w").write(text) + f = open(path, "w") + try: + f.write(text) + finally: + f.close() return path def test_run_setup_provides_file(self): @@ -85,7 +89,7 @@ with captured_stdout() as stdout: distutils.core.setup(name='bar') stdout.seek(0) - self.assertEquals(stdout.read(), 'bar\n') + self.assertEqual(stdout.read(), 'bar\n') distutils.core.DEBUG = True try: @@ -95,7 +99,7 @@ distutils.core.DEBUG = False stdout.seek(0) wanted = "options (after parsing config files):\n" - self.assertEquals(stdout.readlines()[0], wanted) + self.assertEqual(stdout.readlines()[0], wanted) def test_suite(): return unittest.makeSuite(CoreTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py rename from lib-python/2.7.0/distutils/tests/test_dep_util.py rename to lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7.0/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -42,8 +42,8 @@ self.write_file(two) self.write_file(four) - self.assertEquals(newer_pairwise([one, two], [three, four]), - ([one],[three])) + self.assertEqual(newer_pairwise([one, two], [three, four]), + ([one],[three])) def test_newer_group(self): tmpdir = self.mkdtemp() diff --git a/lib-python/2.7.0/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py rename from lib-python/2.7.0/distutils/tests/test_dir_util.py rename to lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7.0/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -37,18 +37,18 @@ mkpath(self.target, verbose=0) wanted = [] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) remove_tree(self.root_target, verbose=0) mkpath(self.target, verbose=1) wanted = ['creating %s' % self.root_target, 'creating %s' % self.target] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) self._logs = [] remove_tree(self.root_target, verbose=1) wanted = ["removing '%s' (and everything under it)" % self.root_target] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) @unittest.skipIf(sys.platform.startswith('win'), "This test is only appropriate for POSIX-like systems.") @@ -66,12 +66,12 @@ def test_create_tree_verbosity(self): create_tree(self.root_target, ['one', 'two', 'three'], verbose=0) - self.assertEquals(self._logs, []) + self.assertEqual(self._logs, []) remove_tree(self.root_target, verbose=0) wanted = ['creating %s' % self.root_target] create_tree(self.root_target, ['one', 'two', 'three'], verbose=1) - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) remove_tree(self.root_target, verbose=0) @@ -81,30 +81,32 @@ mkpath(self.target, verbose=0) copy_tree(self.target, self.target2, verbose=0) - self.assertEquals(self._logs, []) + self.assertEqual(self._logs, []) remove_tree(self.root_target, verbose=0) mkpath(self.target, verbose=0) a_file = os.path.join(self.target, 'ok.txt') f = open(a_file, 'w') - f.write('some content') - f.close() + try: + f.write('some content') + finally: + f.close() wanted = ['copying %s -> %s' % (a_file, self.target2)] copy_tree(self.target, self.target2, verbose=1) - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) remove_tree(self.root_target, verbose=0) remove_tree(self.target2, verbose=0) def test_ensure_relative(self): if os.sep == '/': - self.assertEquals(ensure_relative('/home/foo'), 'home/foo') - self.assertEquals(ensure_relative('some/path'), 'some/path') + self.assertEqual(ensure_relative('/home/foo'), 'home/foo') + self.assertEqual(ensure_relative('some/path'), 'some/path') else: # \\ - self.assertEquals(ensure_relative('c:\\home\\foo'), 'c:home\\foo') - self.assertEquals(ensure_relative('home\\foo'), 'home\\foo') + self.assertEqual(ensure_relative('c:\\home\\foo'), 'c:home\\foo') + self.assertEqual(ensure_relative('home\\foo'), 'home\\foo') def test_suite(): return unittest.makeSuite(DirUtilTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py rename from lib-python/2.7.0/distutils/tests/test_dist.py rename to lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7.0/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -70,13 +70,13 @@ with captured_stdout() as stdout: self.create_distribution(files) stdout.seek(0) - self.assertEquals(stdout.read(), '') + self.assertEqual(stdout.read(), '') distutils.dist.DEBUG = True try: with captured_stdout() as stdout: self.create_distribution(files) stdout.seek(0) - self.assertEquals(stdout.read(), '') + self.assertEqual(stdout.read(), '') finally: distutils.dist.DEBUG = False @@ -102,29 +102,29 @@ def test_command_packages_configfile(self): sys.argv.append("build") + self.addCleanup(os.unlink, TESTFN) f = open(TESTFN, "w") try: print >>f, "[global]" print >>f, "command_packages = foo.bar, splat" + finally: f.close() - d = self.create_distribution([TESTFN]) - self.assertEqual(d.get_command_packages(), - ["distutils.command", "foo.bar", "splat"]) - # ensure command line overrides config: - sys.argv[1:] = ["--command-packages", "spork", "build"] - d = self.create_distribution([TESTFN]) - self.assertEqual(d.get_command_packages(), - ["distutils.command", "spork"]) + d = self.create_distribution([TESTFN]) + self.assertEqual(d.get_command_packages(), + ["distutils.command", "foo.bar", "splat"]) - # Setting --command-packages to '' should cause the default to - # be used even if a config file specified something else: - sys.argv[1:] = ["--command-packages", "", "build"] - d = self.create_distribution([TESTFN]) - self.assertEqual(d.get_command_packages(), ["distutils.command"]) + # ensure command line overrides config: + sys.argv[1:] = ["--command-packages", "spork", "build"] + d = self.create_distribution([TESTFN]) + self.assertEqual(d.get_command_packages(), + ["distutils.command", "spork"]) - finally: - os.unlink(TESTFN) + # Setting --command-packages to '' should cause the default to + # be used even if a config file specified something else: + sys.argv[1:] = ["--command-packages", "", "build"] + d = self.create_distribution([TESTFN]) + self.assertEqual(d.get_command_packages(), ["distutils.command"]) def test_write_pkg_file(self): # Check DistributionMetadata handling of Unicode fields @@ -175,7 +175,7 @@ finally: warnings.warn = old_warn - self.assertEquals(len(warns), 0) + self.assertEqual(len(warns), 0) def test_finalize_options(self): @@ -186,20 +186,20 @@ dist.finalize_options() # finalize_option splits platforms and keywords - self.assertEquals(dist.metadata.platforms, ['one', 'two']) - self.assertEquals(dist.metadata.keywords, ['one', 'two']) + self.assertEqual(dist.metadata.platforms, ['one', 'two']) + self.assertEqual(dist.metadata.keywords, ['one', 'two']) def test_get_command_packages(self): dist = Distribution() - self.assertEquals(dist.command_packages, None) + self.assertEqual(dist.command_packages, None) cmds = dist.get_command_packages() - self.assertEquals(cmds, ['distutils.command']) - self.assertEquals(dist.command_packages, - ['distutils.command']) + self.assertEqual(cmds, ['distutils.command']) + self.assertEqual(dist.command_packages, + ['distutils.command']) dist.command_packages = 'one,two' cmds = dist.get_command_packages() - self.assertEquals(cmds, ['distutils.command', 'one', 'two']) + self.assertEqual(cmds, ['distutils.command', 'one', 'two']) def test_announce(self): @@ -236,7 +236,7 @@ os.path.expanduser = old_expander # make sure --no-user-cfg disables the user cfg file - self.assertEquals(len(all_files)-1, len(files)) + self.assertEqual(len(all_files)-1, len(files)) class MetadataTestCase(support.TempdirManager, support.EnvironGuard, @@ -341,8 +341,10 @@ temp_dir = self.mkdtemp() user_filename = os.path.join(temp_dir, user_filename) f = open(user_filename, 'w') - f.write('.') - f.close() + try: + f.write('.') + finally: + f.close() try: dist = Distribution() @@ -366,8 +368,8 @@ def test_fix_help_options(self): help_tuples = [('a', 'b', 'c', 'd'), (1, 2, 3, 4)] fancy_options = fix_help_options(help_tuples) - self.assertEquals(fancy_options[0], ('a', 'b', 'c')) - self.assertEquals(fancy_options[1], (1, 2, 3)) + self.assertEqual(fancy_options[0], ('a', 'b', 'c')) + self.assertEqual(fancy_options[1], (1, 2, 3)) def test_show_help(self): # smoke test, just makes sure some help is displayed @@ -415,14 +417,14 @@ PKG_INFO.seek(0) metadata.read_pkg_file(PKG_INFO) - self.assertEquals(metadata.name, "package") - self.assertEquals(metadata.version, "1.0") - self.assertEquals(metadata.description, "xxx") - self.assertEquals(metadata.download_url, 'http://example.com') - self.assertEquals(metadata.keywords, ['one', 'two']) - self.assertEquals(metadata.platforms, ['UNKNOWN']) - self.assertEquals(metadata.obsoletes, None) - self.assertEquals(metadata.requires, ['foo']) + self.assertEqual(metadata.name, "package") + self.assertEqual(metadata.version, "1.0") + self.assertEqual(metadata.description, "xxx") + self.assertEqual(metadata.download_url, 'http://example.com') + self.assertEqual(metadata.keywords, ['one', 'two']) + self.assertEqual(metadata.platforms, ['UNKNOWN']) + self.assertEqual(metadata.obsoletes, None) + self.assertEqual(metadata.requires, ['foo']) def test_suite(): suite = unittest.TestSuite() diff --git a/lib-python/2.7.0/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py rename from lib-python/2.7.0/distutils/tests/test_file_util.py rename to lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7.0/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -31,19 +31,21 @@ def test_move_file_verbosity(self): f = open(self.source, 'w') - f.write('some content') - f.close() + try: + f.write('some content') + finally: + f.close() move_file(self.source, self.target, verbose=0) wanted = [] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) # back to original state move_file(self.target, self.source, verbose=0) move_file(self.source, self.target, verbose=1) wanted = ['moving %s -> %s' % (self.source, self.target)] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) # back to original state move_file(self.target, self.source, verbose=0) @@ -53,7 +55,7 @@ os.mkdir(self.target_dir) move_file(self.source, self.target_dir, verbose=1) wanted = ['moving %s -> %s' % (self.source, self.target_dir)] - self.assertEquals(self._logs, wanted) + self.assertEqual(self._logs, wanted) def test_write_file(self): lines = ['a', 'b', 'c'] @@ -61,7 +63,7 @@ foo = os.path.join(dir, 'foo') write_file(foo, lines) content = [line.strip() for line in open(foo).readlines()] - self.assertEquals(content, lines) + self.assertEqual(content, lines) def test_copy_file(self): src_dir = self.mkdtemp() diff --git a/lib-python/2.7.0/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py rename from lib-python/2.7.0/distutils/tests/test_filelist.py rename to lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7.0/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -24,15 +24,15 @@ def test_glob_to_re(self): # simple cases - self.assertEquals(glob_to_re('foo*'), 'foo[^/]*\\Z(?ms)') - self.assertEquals(glob_to_re('foo?'), 'foo[^/]\\Z(?ms)') - self.assertEquals(glob_to_re('foo??'), 'foo[^/][^/]\\Z(?ms)') + self.assertEqual(glob_to_re('foo*'), 'foo[^/]*\\Z(?ms)') + self.assertEqual(glob_to_re('foo?'), 'foo[^/]\\Z(?ms)') + self.assertEqual(glob_to_re('foo??'), 'foo[^/][^/]\\Z(?ms)') # special cases - self.assertEquals(glob_to_re(r'foo\\*'), r'foo\\\\[^/]*\Z(?ms)') - self.assertEquals(glob_to_re(r'foo\\\*'), r'foo\\\\\\[^/]*\Z(?ms)') - self.assertEquals(glob_to_re('foo????'), r'foo[^/][^/][^/][^/]\Z(?ms)') - self.assertEquals(glob_to_re(r'foo\\??'), r'foo\\\\[^/][^/]\Z(?ms)') + self.assertEqual(glob_to_re(r'foo\\*'), r'foo\\\\[^/]*\Z(?ms)') + self.assertEqual(glob_to_re(r'foo\\\*'), r'foo\\\\\\[^/]*\Z(?ms)') + self.assertEqual(glob_to_re('foo????'), r'foo[^/][^/][^/][^/]\Z(?ms)') + self.assertEqual(glob_to_re(r'foo\\??'), r'foo\\\\[^/][^/]\Z(?ms)') def test_process_template_line(self): # testing all MANIFEST.in template patterns @@ -60,21 +60,21 @@ join('global', 'two.txt'), join('f', 'o', 'f.oo'), join('dir', 'graft-one'), join('dir', 'dir2', 'graft2')] - self.assertEquals(file_list.files, wanted) + self.assertEqual(file_list.files, wanted) def test_debug_print(self): file_list = FileList() with captured_stdout() as stdout: file_list.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), '') + self.assertEqual(stdout.read(), '') debug.DEBUG = True try: with captured_stdout() as stdout: file_list.debug_print('xxx') stdout.seek(0) - self.assertEquals(stdout.read(), 'xxx\n') + self.assertEqual(stdout.read(), 'xxx\n') finally: debug.DEBUG = False diff --git a/lib-python/2.7.0/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py rename from lib-python/2.7.0/distutils/tests/test_install.py rename to lib-python/2.7/distutils/tests/test_install.py diff --git a/lib-python/2.7.0/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py rename from lib-python/2.7.0/distutils/tests/test_install_data.py rename to lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7.0/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -27,14 +27,14 @@ self.write_file(two, 'xxx') cmd.data_files = [one, (inst2, [two])] - self.assertEquals(cmd.get_inputs(), [one, (inst2, [two])]) + self.assertEqual(cmd.get_inputs(), [one, (inst2, [two])]) # let's run the command cmd.ensure_finalized() cmd.run() # let's check the result - self.assertEquals(len(cmd.get_outputs()), 2) + self.assertEqual(len(cmd.get_outputs()), 2) rtwo = os.path.split(two)[-1] self.assertTrue(os.path.exists(os.path.join(inst2, rtwo))) rone = os.path.split(one)[-1] @@ -47,7 +47,7 @@ cmd.run() # let's check the result - self.assertEquals(len(cmd.get_outputs()), 2) + self.assertEqual(len(cmd.get_outputs()), 2) self.assertTrue(os.path.exists(os.path.join(inst2, rtwo))) self.assertTrue(os.path.exists(os.path.join(inst, rone))) cmd.outfiles = [] @@ -65,7 +65,7 @@ cmd.run() # let's check the result - self.assertEquals(len(cmd.get_outputs()), 4) + self.assertEqual(len(cmd.get_outputs()), 4) self.assertTrue(os.path.exists(os.path.join(inst2, rtwo))) self.assertTrue(os.path.exists(os.path.join(inst, rone))) diff --git a/lib-python/2.7.0/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py rename from lib-python/2.7.0/distutils/tests/test_install_headers.py rename to lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7.0/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -23,7 +23,7 @@ pkg_dir, dist = self.create_dist(headers=headers) cmd = install_headers(dist) - self.assertEquals(cmd.get_inputs(), headers) + self.assertEqual(cmd.get_inputs(), headers) # let's run the command cmd.install_dir = os.path.join(pkg_dir, 'inst') @@ -31,7 +31,7 @@ cmd.run() # let's check the results - self.assertEquals(len(cmd.get_outputs()), 2) + self.assertEqual(len(cmd.get_outputs()), 2) def test_suite(): return unittest.makeSuite(InstallHeadersTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py rename from lib-python/2.7.0/distutils/tests/test_install_lib.py rename to lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7.0/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -18,8 +18,8 @@ cmd = install_lib(dist) cmd.finalize_options() - self.assertEquals(cmd.compile, 1) - self.assertEquals(cmd.optimize, 0) + self.assertEqual(cmd.compile, 1) + self.assertEqual(cmd.optimize, 0) # optimize must be 0, 1, or 2 cmd.optimize = 'foo' @@ -29,7 +29,7 @@ cmd.optimize = '2' cmd.finalize_options() - self.assertEquals(cmd.optimize, 2) + self.assertEqual(cmd.optimize, 2) def _setup_byte_compile(self): pkg_dir, dist = self.create_dist() @@ -81,7 +81,7 @@ cmd.distribution.script_name = 'setup.py' # get_input should return 2 elements - self.assertEquals(len(cmd.get_inputs()), 2) + self.assertEqual(len(cmd.get_inputs()), 2) def test_dont_write_bytecode(self): # makes sure byte_compile is not used diff --git a/lib-python/2.7.0/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py rename from lib-python/2.7.0/distutils/tests/test_install_scripts.py rename to lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7.0/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -42,8 +42,10 @@ def write_script(name, text): expected.append(name) f = open(os.path.join(source, name), "w") - f.write(text) - f.close() + try: + f.write(text) + finally: + f.close() write_script("script1.py", ("#! /usr/bin/env python2.3\n" "# bogus script w/ Python sh-bang\n" diff --git a/lib-python/2.7.0/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py rename from lib-python/2.7.0/distutils/tests/test_msvc9compiler.py rename to lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7.0/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -103,7 +103,7 @@ import _winreg HKCU = _winreg.HKEY_CURRENT_USER keys = Reg.read_keys(HKCU, 'xxxx') - self.assertEquals(keys, None) + self.assertEqual(keys, None) keys = Reg.read_keys(HKCU, r'Control Panel') self.assertTrue('Desktop' in keys) @@ -113,20 +113,24 @@ tempdir = self.mkdtemp() manifest = os.path.join(tempdir, 'manifest') f = open(manifest, 'w') - f.write(_MANIFEST) - f.close() + try: + f.write(_MANIFEST) + finally: + f.close() compiler = MSVCCompiler() compiler._remove_visual_c_ref(manifest) # see what we got f = open(manifest) - # removing trailing spaces - content = '\n'.join([line.rstrip() for line in f.readlines()]) - f.close() + try: + # removing trailing spaces + content = '\n'.join([line.rstrip() for line in f.readlines()]) + finally: + f.close() # makes sure the manifest was properly cleaned - self.assertEquals(content, _CLEANED_MANIFEST) + self.assertEqual(content, _CLEANED_MANIFEST) def test_suite(): diff --git a/lib-python/2.7.0/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py rename from lib-python/2.7.0/distutils/tests/test_register.py rename to lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7.0/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -119,8 +119,12 @@ self.assertTrue(os.path.exists(self.rc)) # with the content similar to WANTED_PYPIRC - content = open(self.rc).read() - self.assertEquals(content, WANTED_PYPIRC) + f = open(self.rc) + try: + content = f.read() + self.assertEqual(content, WANTED_PYPIRC) + finally: + f.close() # now let's make sure the .pypirc file generated # really works : we shouldn't be asked anything @@ -137,7 +141,7 @@ self.assertTrue(self.conn.reqs, 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) - self.assertEquals(req2['Content-length'], req1['Content-length']) + self.assertEqual(req2['Content-length'], req1['Content-length']) self.assertTrue('xxx' in self.conn.reqs[1].data) def test_password_not_in_file(self): @@ -150,7 +154,7 @@ # dist.password should be set # therefore used afterwards by other commands - self.assertEquals(cmd.distribution.password, 'password') + self.assertEqual(cmd.distribution.password, 'password') def test_registering(self): # this test runs choice 2 @@ -167,7 +171,7 @@ self.assertTrue(self.conn.reqs, 1) req = self.conn.reqs[0] headers = dict(req.headers) - self.assertEquals(headers['Content-length'], '608') + self.assertEqual(headers['Content-length'], '608') self.assertTrue('tarek' in req.data) def test_password_reset(self): @@ -185,7 +189,7 @@ self.assertTrue(self.conn.reqs, 1) req = self.conn.reqs[0] headers = dict(req.headers) - self.assertEquals(headers['Content-length'], '290') + self.assertEqual(headers['Content-length'], '290') self.assertTrue('tarek' in req.data) def test_strict(self): @@ -248,7 +252,7 @@ with check_warnings() as w: warnings.simplefilter("always") cmd.check_metadata() - self.assertEquals(len(w.warnings), 1) + self.assertEqual(len(w.warnings), 1) def test_suite(): return unittest.makeSuite(RegisterTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py rename from lib-python/2.7.0/distutils/tests/test_sdist.py rename to lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7.0/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -127,7 +127,7 @@ # now let's check what we have dist_folder = join(self.tmp_dir, 'dist') files = os.listdir(dist_folder) - self.assertEquals(files, ['fake-1.0.zip']) + self.assertEqual(files, ['fake-1.0.zip']) zip_file = zipfile.ZipFile(join(dist_folder, 'fake-1.0.zip')) try: @@ -136,7 +136,7 @@ zip_file.close() # making sure everything has been pruned correctly - self.assertEquals(len(content), 4) + self.assertEqual(len(content), 4) @unittest.skipUnless(zlib, "requires zlib") def test_make_distribution(self): @@ -158,8 +158,7 @@ dist_folder = join(self.tmp_dir, 'dist') result = os.listdir(dist_folder) result.sort() - self.assertEquals(result, - ['fake-1.0.tar', 'fake-1.0.tar.gz'] ) + self.assertEqual(result, ['fake-1.0.tar', 'fake-1.0.tar.gz'] ) os.remove(join(dist_folder, 'fake-1.0.tar')) os.remove(join(dist_folder, 'fake-1.0.tar.gz')) @@ -172,8 +171,7 @@ result = os.listdir(dist_folder) result.sort() - self.assertEquals(result, - ['fake-1.0.tar', 'fake-1.0.tar.gz']) + self.assertEqual(result, ['fake-1.0.tar', 'fake-1.0.tar.gz']) @unittest.skipUnless(zlib, "requires zlib") def test_add_defaults(self): @@ -222,7 +220,7 @@ # now let's check what we have dist_folder = join(self.tmp_dir, 'dist') files = os.listdir(dist_folder) - self.assertEquals(files, ['fake-1.0.zip']) + self.assertEqual(files, ['fake-1.0.zip']) zip_file = zipfile.ZipFile(join(dist_folder, 'fake-1.0.zip')) try: @@ -231,11 +229,15 @@ zip_file.close() # making sure everything was added - self.assertEquals(len(content), 11) + self.assertEqual(len(content), 11) # checking the MANIFEST - manifest = open(join(self.tmp_dir, 'MANIFEST')).read() - self.assertEquals(manifest, MANIFEST % {'sep': os.sep}) + f = open(join(self.tmp_dir, 'MANIFEST')) + try: + manifest = f.read() + self.assertEqual(manifest, MANIFEST % {'sep': os.sep}) + finally: + f.close() @unittest.skipUnless(zlib, "requires zlib") def test_metadata_check_option(self): @@ -247,7 +249,7 @@ cmd.ensure_finalized() cmd.run() warnings = self.get_logs(WARN) - self.assertEquals(len(warnings), 2) + self.assertEqual(len(warnings), 2) # trying with a complete set of metadata self.clear_logs() @@ -256,7 +258,7 @@ cmd.metadata_check = 0 cmd.run() warnings = self.get_logs(WARN) - self.assertEquals(len(warnings), 0) + self.assertEqual(len(warnings), 0) def test_check_metadata_deprecated(self): # makes sure make_metadata is deprecated @@ -264,7 +266,7 @@ with check_warnings() as w: warnings.simplefilter("always") cmd.check_metadata() - self.assertEquals(len(w.warnings), 1) + self.assertEqual(len(w.warnings), 1) def test_show_formats(self): with captured_stdout() as stdout: @@ -274,7 +276,7 @@ num_formats = len(ARCHIVE_FORMATS.keys()) output = [line for line in stdout.getvalue().split('\n') if line.strip().startswith('--formats=')] - self.assertEquals(len(output), num_formats) + self.assertEqual(len(output), num_formats) def test_finalize_options(self): @@ -282,9 +284,9 @@ cmd.finalize_options() # default options set by finalize - self.assertEquals(cmd.manifest, 'MANIFEST') - self.assertEquals(cmd.template, 'MANIFEST.in') - self.assertEquals(cmd.dist_dir, 'dist') + self.assertEqual(cmd.manifest, 'MANIFEST') + self.assertEqual(cmd.template, 'MANIFEST.in') + self.assertEqual(cmd.dist_dir, 'dist') # formats has to be a string splitable on (' ', ',') or # a stringlist @@ -321,8 +323,8 @@ archive = tarfile.open(archive_name) try: for member in archive.getmembers(): - self.assertEquals(member.uid, 0) - self.assertEquals(member.gid, 0) + self.assertEqual(member.uid, 0) + self.assertEqual(member.gid, 0) finally: archive.close() @@ -343,7 +345,7 @@ # rights (see #7408) try: for member in archive.getmembers(): - self.assertEquals(member.uid, os.getuid()) + self.assertEqual(member.uid, os.getuid()) finally: archive.close() @@ -365,7 +367,7 @@ finally: f.close() - self.assertEquals(len(manifest), 5) + self.assertEqual(len(manifest), 5) # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') @@ -385,7 +387,7 @@ f.close() # do we have the new file in MANIFEST ? - self.assertEquals(len(manifest2), 6) + self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) def test_manifest_marker(self): diff --git a/lib-python/2.7.0/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py rename from lib-python/2.7.0/distutils/tests/test_spawn.py rename to lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7.0/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -20,7 +20,7 @@ (['nochange', 'nospace'], ['nochange', 'nospace'])): res = _nt_quote_args(args) - self.assertEquals(res, wanted) + self.assertEqual(res, wanted) @unittest.skipUnless(os.name in ('nt', 'posix'), diff --git a/lib-python/2.7.0/distutils/tests/test_sysconfig.py b/lib-python/2.7/distutils/tests/test_sysconfig.py rename from lib-python/2.7.0/distutils/tests/test_sysconfig.py rename to lib-python/2.7/distutils/tests/test_sysconfig.py --- a/lib-python/2.7.0/distutils/tests/test_sysconfig.py +++ b/lib-python/2.7/distutils/tests/test_sysconfig.py @@ -36,7 +36,7 @@ sysconfig.get_python_lib(prefix=TESTFN)) _sysconfig = __import__('sysconfig') res = sysconfig.get_python_lib(True, True) - self.assertEquals(_sysconfig.get_path('platstdlib'), res) + self.assertEqual(_sysconfig.get_path('platstdlib'), res) def test_get_python_inc(self): inc_dir = sysconfig.get_python_inc() @@ -50,22 +50,26 @@ def test_parse_makefile_base(self): self.makefile = test.test_support.TESTFN fd = open(self.makefile, 'w') - fd.write(r"CONFIG_ARGS= '--arg1=optarg1' 'ENV=LIB'" '\n') - fd.write('VAR=$OTHER\nOTHER=foo') - fd.close() + try: + fd.write(r"CONFIG_ARGS= '--arg1=optarg1' 'ENV=LIB'" '\n') + fd.write('VAR=$OTHER\nOTHER=foo') + finally: + fd.close() d = sysconfig.parse_makefile(self.makefile) - self.assertEquals(d, {'CONFIG_ARGS': "'--arg1=optarg1' 'ENV=LIB'", - 'OTHER': 'foo'}) + self.assertEqual(d, {'CONFIG_ARGS': "'--arg1=optarg1' 'ENV=LIB'", + 'OTHER': 'foo'}) def test_parse_makefile_literal_dollar(self): self.makefile = test.test_support.TESTFN fd = open(self.makefile, 'w') - fd.write(r"CONFIG_ARGS= '--arg1=optarg1' 'ENV=\$$LIB'" '\n') - fd.write('VAR=$OTHER\nOTHER=foo') - fd.close() + try: + fd.write(r"CONFIG_ARGS= '--arg1=optarg1' 'ENV=\$$LIB'" '\n') + fd.write('VAR=$OTHER\nOTHER=foo') + finally: + fd.close() d = sysconfig.parse_makefile(self.makefile) - self.assertEquals(d, {'CONFIG_ARGS': r"'--arg1=optarg1' 'ENV=\$LIB'", - 'OTHER': 'foo'}) + self.assertEqual(d, {'CONFIG_ARGS': r"'--arg1=optarg1' 'ENV=\$LIB'", + 'OTHER': 'foo'}) def test_suite(): diff --git a/lib-python/2.7.0/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py rename from lib-python/2.7.0/distutils/tests/test_text_file.py rename to lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7.0/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -48,7 +48,7 @@ def test_input(count, description, file, expected_result): result = file.readlines() - self.assertEquals(result, expected_result) + self.assertEqual(result, expected_result) tmpdir = self.mkdtemp() filename = os.path.join(tmpdir, "test.txt") @@ -58,28 +58,46 @@ finally: out_file.close() - in_file = TextFile (filename, strip_comments=0, skip_blanks=0, - lstrip_ws=0, rstrip_ws=0) - test_input (1, "no processing", in_file, result1) + in_file = TextFile(filename, strip_comments=0, skip_blanks=0, + lstrip_ws=0, rstrip_ws=0) + try: + test_input(1, "no processing", in_file, result1) + finally: + in_file.close() - in_file = TextFile (filename, strip_comments=1, skip_blanks=0, - lstrip_ws=0, rstrip_ws=0) - test_input (2, "strip comments", in_file, result2) + in_file = TextFile(filename, strip_comments=1, skip_blanks=0, + lstrip_ws=0, rstrip_ws=0) + try: + test_input(2, "strip comments", in_file, result2) + finally: + in_file.close() - in_file = TextFile (filename, strip_comments=0, skip_blanks=1, - lstrip_ws=0, rstrip_ws=0) - test_input (3, "strip blanks", in_file, result3) + in_file = TextFile(filename, strip_comments=0, skip_blanks=1, + lstrip_ws=0, rstrip_ws=0) + try: + test_input(3, "strip blanks", in_file, result3) + finally: + in_file.close() - in_file = TextFile (filename) - test_input (4, "default processing", in_file, result4) + in_file = TextFile(filename) + try: + test_input(4, "default processing", in_file, result4) + finally: + in_file.close() - in_file = TextFile (filename, strip_comments=1, skip_blanks=1, - join_lines=1, rstrip_ws=1) - test_input (5, "join lines without collapsing", in_file, result5) + in_file = TextFile(filename, strip_comments=1, skip_blanks=1, + join_lines=1, rstrip_ws=1) + try: + test_input(5, "join lines without collapsing", in_file, result5) + finally: + in_file.close() - in_file = TextFile (filename, strip_comments=1, skip_blanks=1, - join_lines=1, rstrip_ws=1, collapse_join=1) - test_input (6, "join lines with collapsing", in_file, result6) + in_file = TextFile(filename, strip_comments=1, skip_blanks=1, + join_lines=1, rstrip_ws=1, collapse_join=1) + try: + test_input(6, "join lines with collapsing", in_file, result6) + finally: + in_file.close() def test_suite(): return unittest.makeSuite(TextFileTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py rename from lib-python/2.7.0/distutils/tests/test_unixccompiler.py rename to lib-python/2.7/distutils/tests/test_unixccompiler.py diff --git a/lib-python/2.7.0/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py rename from lib-python/2.7.0/distutils/tests/test_upload.py rename to lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7.0/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -80,7 +80,7 @@ for attr, waited in (('username', 'me'), ('password', 'secret'), ('realm', 'pypi'), ('repository', 'http://pypi.python.org/pypi')): - self.assertEquals(getattr(cmd, attr), waited) + self.assertEqual(getattr(cmd, attr), waited) def test_saved_password(self): # file with no password @@ -90,14 +90,14 @@ dist = Distribution() cmd = upload(dist) cmd.finalize_options() - self.assertEquals(cmd.password, None) + self.assertEqual(cmd.password, None) # make sure we get it as well, if another command # initialized it at the dist level dist.password = 'xxx' cmd = upload(dist) cmd.finalize_options() - self.assertEquals(cmd.password, 'xxx') + self.assertEqual(cmd.password, 'xxx') def test_upload(self): tmp = self.mkdtemp() @@ -116,11 +116,11 @@ # what did we send ? self.assertIn('dédé', self.last_open.req.data) headers = dict(self.last_open.req.headers) - self.assertEquals(headers['Content-length'], '2085') + self.assertEqual(headers['Content-length'], '2085') self.assertTrue(headers['Content-type'].startswith('multipart/form-data')) - self.assertEquals(self.last_open.req.get_method(), 'POST') - self.assertEquals(self.last_open.req.get_full_url(), - 'http://pypi.python.org/pypi') + self.assertEqual(self.last_open.req.get_method(), 'POST') + self.assertEqual(self.last_open.req.get_full_url(), + 'http://pypi.python.org/pypi') self.assertTrue('xxx' in self.last_open.req.data) auth = self.last_open.req.headers['Authorization'] self.assertFalse('\n' in auth) diff --git a/lib-python/2.7.0/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py rename from lib-python/2.7.0/distutils/tests/test_util.py rename to lib-python/2.7/distutils/tests/test_util.py diff --git a/lib-python/2.7.0/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py rename from lib-python/2.7.0/distutils/tests/test_version.py rename to lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7.0/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -7,12 +7,12 @@ def test_prerelease(self): version = StrictVersion('1.2.3a1') - self.assertEquals(version.version, (1, 2, 3)) - self.assertEquals(version.prerelease, ('a', 1)) - self.assertEquals(str(version), '1.2.3a1') + self.assertEqual(version.version, (1, 2, 3)) + self.assertEqual(version.prerelease, ('a', 1)) + self.assertEqual(str(version), '1.2.3a1') version = StrictVersion('1.2.0') - self.assertEquals(str(version), '1.2') + self.assertEqual(str(version), '1.2') def test_cmp_strict(self): versions = (('1.5.1', '1.5.2b2', -1), @@ -41,9 +41,9 @@ raise AssertionError(("cmp(%s, %s) " "shouldn't raise ValueError") % (v1, v2)) - self.assertEquals(res, wanted, - 'cmp(%s, %s) should be %s, got %s' % - (v1, v2, wanted, res)) + self.assertEqual(res, wanted, + 'cmp(%s, %s) should be %s, got %s' % + (v1, v2, wanted, res)) def test_cmp(self): @@ -59,9 +59,9 @@ for v1, v2, wanted in versions: res = LooseVersion(v1).__cmp__(LooseVersion(v2)) - self.assertEquals(res, wanted, - 'cmp(%s, %s) should be %s, got %s' % - (v1, v2, wanted, res)) + self.assertEqual(res, wanted, + 'cmp(%s, %s) should be %s, got %s' % + (v1, v2, wanted, res)) def test_suite(): return unittest.makeSuite(VersionTestCase) diff --git a/lib-python/2.7.0/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py rename from lib-python/2.7.0/distutils/tests/test_versionpredicate.py rename to lib-python/2.7/distutils/tests/test_versionpredicate.py diff --git a/lib-python/2.7.0/distutils/text_file.py b/lib-python/2.7/distutils/text_file.py rename from lib-python/2.7.0/distutils/text_file.py rename to lib-python/2.7/distutils/text_file.py --- a/lib-python/2.7.0/distutils/text_file.py +++ b/lib-python/2.7/distutils/text_file.py @@ -4,7 +4,7 @@ that (optionally) takes care of stripping comments, ignoring blank lines, and joining lines with backslashes.""" -__revision__ = "$Id: text_file.py 76956 2009-12-21 01:22:46Z tarek.ziade $" +__revision__ = "$Id$" import sys diff --git a/lib-python/2.7.0/distutils/unixccompiler.py b/lib-python/2.7/distutils/unixccompiler.py rename from lib-python/2.7.0/distutils/unixccompiler.py rename to lib-python/2.7/distutils/unixccompiler.py --- a/lib-python/2.7.0/distutils/unixccompiler.py +++ b/lib-python/2.7/distutils/unixccompiler.py @@ -13,7 +13,7 @@ * link shared library handled by 'cc -shared' """ -__revision__ = "$Id: unixccompiler.py 82272 2010-06-27 12:36:16Z ronald.oussoren $" +__revision__ = "$Id$" import os, sys, re from types import StringType, NoneType diff --git a/lib-python/2.7.0/distutils/util.py b/lib-python/2.7/distutils/util.py rename from lib-python/2.7.0/distutils/util.py rename to lib-python/2.7/distutils/util.py --- a/lib-python/2.7.0/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -4,7 +4,7 @@ one of the other *util.py modules. """ -__revision__ = "$Id: util.py 82791 2010-07-11 08:52:52Z ronald.oussoren $" +__revision__ = "$Id$" import sys, os, string, re from distutils.errors import DistutilsPlatformError @@ -116,13 +116,15 @@ # behaviour. pass else: - m = re.search( - r'ProductUserVisibleVersion\s*' + - r'(.*?)', f.read()) - f.close() - if m is not None: - macrelease = '.'.join(m.group(1).split('.')[:2]) - # else: fall back to the default behaviour + try: + m = re.search( + r'ProductUserVisibleVersion\s*' + + r'(.*?)', f.read()) + if m is not None: + macrelease = '.'.join(m.group(1).split('.')[:2]) + # else: fall back to the default behaviour + finally: + f.close() if not macver: macver = macrelease diff --git a/lib-python/2.7.0/distutils/version.py b/lib-python/2.7/distutils/version.py rename from lib-python/2.7.0/distutils/version.py rename to lib-python/2.7/distutils/version.py --- a/lib-python/2.7.0/distutils/version.py +++ b/lib-python/2.7/distutils/version.py @@ -4,7 +4,7 @@ # Implements multiple version numbering conventions for the # Python Module Distribution Utilities. # -# $Id: version.py 70642 2009-03-28 00:48:48Z georg.brandl $ +# $Id$ # """Provides classes to represent module version numbers (one class for diff --git a/lib-python/2.7.0/distutils/versionpredicate.py b/lib-python/2.7/distutils/versionpredicate.py rename from lib-python/2.7.0/distutils/versionpredicate.py rename to lib-python/2.7/distutils/versionpredicate.py diff --git a/lib-python/2.7.0/doctest.py b/lib-python/2.7/doctest.py rename from lib-python/2.7.0/doctest.py rename to lib-python/2.7/doctest.py diff --git a/lib-python/2.7.0/dumbdbm.py b/lib-python/2.7/dumbdbm.py rename from lib-python/2.7.0/dumbdbm.py rename to lib-python/2.7/dumbdbm.py diff --git a/lib-python/2.7.0/dummy_thread.py b/lib-python/2.7/dummy_thread.py rename from lib-python/2.7.0/dummy_thread.py rename to lib-python/2.7/dummy_thread.py diff --git a/lib-python/2.7.0/dummy_threading.py b/lib-python/2.7/dummy_threading.py rename from lib-python/2.7.0/dummy_threading.py rename to lib-python/2.7/dummy_threading.py diff --git a/lib-python/2.7.0/email/__init__.py b/lib-python/2.7/email/__init__.py rename from lib-python/2.7.0/email/__init__.py rename to lib-python/2.7/email/__init__.py diff --git a/lib-python/2.7.0/email/_parseaddr.py b/lib-python/2.7/email/_parseaddr.py rename from lib-python/2.7.0/email/_parseaddr.py rename to lib-python/2.7/email/_parseaddr.py diff --git a/lib-python/2.7.0/email/base64mime.py b/lib-python/2.7/email/base64mime.py rename from lib-python/2.7.0/email/base64mime.py rename to lib-python/2.7/email/base64mime.py diff --git a/lib-python/2.7.0/email/charset.py b/lib-python/2.7/email/charset.py rename from lib-python/2.7.0/email/charset.py rename to lib-python/2.7/email/charset.py diff --git a/lib-python/2.7.0/email/encoders.py b/lib-python/2.7/email/encoders.py rename from lib-python/2.7.0/email/encoders.py rename to lib-python/2.7/email/encoders.py diff --git a/lib-python/2.7.0/email/errors.py b/lib-python/2.7/email/errors.py rename from lib-python/2.7.0/email/errors.py rename to lib-python/2.7/email/errors.py diff --git a/lib-python/2.7.0/email/feedparser.py b/lib-python/2.7/email/feedparser.py rename from lib-python/2.7.0/email/feedparser.py rename to lib-python/2.7/email/feedparser.py diff --git a/lib-python/2.7.0/email/generator.py b/lib-python/2.7/email/generator.py rename from lib-python/2.7.0/email/generator.py rename to lib-python/2.7/email/generator.py diff --git a/lib-python/2.7.0/email/header.py b/lib-python/2.7/email/header.py rename from lib-python/2.7.0/email/header.py rename to lib-python/2.7/email/header.py diff --git a/lib-python/2.7.0/email/iterators.py b/lib-python/2.7/email/iterators.py rename from lib-python/2.7.0/email/iterators.py rename to lib-python/2.7/email/iterators.py diff --git a/lib-python/2.7.0/email/message.py b/lib-python/2.7/email/message.py rename from lib-python/2.7.0/email/message.py rename to lib-python/2.7/email/message.py diff --git a/lib-python/2.7.0/email/mime/__init__.py b/lib-python/2.7/email/mime/__init__.py rename from lib-python/2.7.0/email/mime/__init__.py rename to lib-python/2.7/email/mime/__init__.py diff --git a/lib-python/2.7.0/email/mime/application.py b/lib-python/2.7/email/mime/application.py rename from lib-python/2.7.0/email/mime/application.py rename to lib-python/2.7/email/mime/application.py diff --git a/lib-python/2.7.0/email/mime/audio.py b/lib-python/2.7/email/mime/audio.py rename from lib-python/2.7.0/email/mime/audio.py rename to lib-python/2.7/email/mime/audio.py diff --git a/lib-python/2.7.0/email/mime/base.py b/lib-python/2.7/email/mime/base.py rename from lib-python/2.7.0/email/mime/base.py rename to lib-python/2.7/email/mime/base.py diff --git a/lib-python/2.7.0/email/mime/image.py b/lib-python/2.7/email/mime/image.py rename from lib-python/2.7.0/email/mime/image.py rename to lib-python/2.7/email/mime/image.py diff --git a/lib-python/2.7.0/email/mime/message.py b/lib-python/2.7/email/mime/message.py rename from lib-python/2.7.0/email/mime/message.py rename to lib-python/2.7/email/mime/message.py diff --git a/lib-python/2.7.0/email/mime/multipart.py b/lib-python/2.7/email/mime/multipart.py rename from lib-python/2.7.0/email/mime/multipart.py rename to lib-python/2.7/email/mime/multipart.py diff --git a/lib-python/2.7.0/email/mime/nonmultipart.py b/lib-python/2.7/email/mime/nonmultipart.py rename from lib-python/2.7.0/email/mime/nonmultipart.py rename to lib-python/2.7/email/mime/nonmultipart.py diff --git a/lib-python/2.7.0/email/mime/text.py b/lib-python/2.7/email/mime/text.py rename from lib-python/2.7.0/email/mime/text.py rename to lib-python/2.7/email/mime/text.py diff --git a/lib-python/2.7.0/email/parser.py b/lib-python/2.7/email/parser.py rename from lib-python/2.7.0/email/parser.py rename to lib-python/2.7/email/parser.py diff --git a/lib-python/2.7.0/email/quoprimime.py b/lib-python/2.7/email/quoprimime.py rename from lib-python/2.7.0/email/quoprimime.py rename to lib-python/2.7/email/quoprimime.py diff --git a/lib-python/2.7.0/email/test/__init__.py b/lib-python/2.7/email/test/__init__.py rename from lib-python/2.7.0/email/test/__init__.py rename to lib-python/2.7/email/test/__init__.py diff --git a/lib-python/2.7.0/email/test/data/PyBanner048.gif b/lib-python/2.7/email/test/data/PyBanner048.gif rename from lib-python/2.7.0/email/test/data/PyBanner048.gif rename to lib-python/2.7/email/test/data/PyBanner048.gif diff --git a/lib-python/2.7.0/email/test/data/audiotest.au b/lib-python/2.7/email/test/data/audiotest.au rename from lib-python/2.7.0/email/test/data/audiotest.au rename to lib-python/2.7/email/test/data/audiotest.au diff --git a/lib-python/2.7.0/email/test/data/msg_01.txt b/lib-python/2.7/email/test/data/msg_01.txt rename from lib-python/2.7.0/email/test/data/msg_01.txt rename to lib-python/2.7/email/test/data/msg_01.txt diff --git a/lib-python/2.7.0/email/test/data/msg_02.txt b/lib-python/2.7/email/test/data/msg_02.txt rename from lib-python/2.7.0/email/test/data/msg_02.txt rename to lib-python/2.7/email/test/data/msg_02.txt diff --git a/lib-python/2.7.0/email/test/data/msg_03.txt b/lib-python/2.7/email/test/data/msg_03.txt rename from lib-python/2.7.0/email/test/data/msg_03.txt rename to lib-python/2.7/email/test/data/msg_03.txt diff --git a/lib-python/2.7.0/email/test/data/msg_04.txt b/lib-python/2.7/email/test/data/msg_04.txt rename from lib-python/2.7.0/email/test/data/msg_04.txt rename to lib-python/2.7/email/test/data/msg_04.txt diff --git a/lib-python/2.7.0/email/test/data/msg_05.txt b/lib-python/2.7/email/test/data/msg_05.txt rename from lib-python/2.7.0/email/test/data/msg_05.txt rename to lib-python/2.7/email/test/data/msg_05.txt diff --git a/lib-python/2.7.0/email/test/data/msg_06.txt b/lib-python/2.7/email/test/data/msg_06.txt rename from lib-python/2.7.0/email/test/data/msg_06.txt rename to lib-python/2.7/email/test/data/msg_06.txt diff --git a/lib-python/2.7.0/email/test/data/msg_07.txt b/lib-python/2.7/email/test/data/msg_07.txt rename from lib-python/2.7.0/email/test/data/msg_07.txt rename to lib-python/2.7/email/test/data/msg_07.txt diff --git a/lib-python/2.7.0/email/test/data/msg_08.txt b/lib-python/2.7/email/test/data/msg_08.txt rename from lib-python/2.7.0/email/test/data/msg_08.txt rename to lib-python/2.7/email/test/data/msg_08.txt diff --git a/lib-python/2.7.0/email/test/data/msg_09.txt b/lib-python/2.7/email/test/data/msg_09.txt rename from lib-python/2.7.0/email/test/data/msg_09.txt rename to lib-python/2.7/email/test/data/msg_09.txt diff --git a/lib-python/2.7.0/email/test/data/msg_10.txt b/lib-python/2.7/email/test/data/msg_10.txt rename from lib-python/2.7.0/email/test/data/msg_10.txt rename to lib-python/2.7/email/test/data/msg_10.txt diff --git a/lib-python/2.7.0/email/test/data/msg_11.txt b/lib-python/2.7/email/test/data/msg_11.txt rename from lib-python/2.7.0/email/test/data/msg_11.txt rename to lib-python/2.7/email/test/data/msg_11.txt diff --git a/lib-python/2.7.0/email/test/data/msg_12.txt b/lib-python/2.7/email/test/data/msg_12.txt rename from lib-python/2.7.0/email/test/data/msg_12.txt rename to lib-python/2.7/email/test/data/msg_12.txt diff --git a/lib-python/2.7.0/email/test/data/msg_12a.txt b/lib-python/2.7/email/test/data/msg_12a.txt rename from lib-python/2.7.0/email/test/data/msg_12a.txt rename to lib-python/2.7/email/test/data/msg_12a.txt diff --git a/lib-python/2.7.0/email/test/data/msg_13.txt b/lib-python/2.7/email/test/data/msg_13.txt rename from lib-python/2.7.0/email/test/data/msg_13.txt rename to lib-python/2.7/email/test/data/msg_13.txt diff --git a/lib-python/2.7.0/email/test/data/msg_14.txt b/lib-python/2.7/email/test/data/msg_14.txt rename from lib-python/2.7.0/email/test/data/msg_14.txt rename to lib-python/2.7/email/test/data/msg_14.txt diff --git a/lib-python/2.7.0/email/test/data/msg_15.txt b/lib-python/2.7/email/test/data/msg_15.txt rename from lib-python/2.7.0/email/test/data/msg_15.txt rename to lib-python/2.7/email/test/data/msg_15.txt diff --git a/lib-python/2.7.0/email/test/data/msg_16.txt b/lib-python/2.7/email/test/data/msg_16.txt rename from lib-python/2.7.0/email/test/data/msg_16.txt rename to lib-python/2.7/email/test/data/msg_16.txt diff --git a/lib-python/2.7.0/email/test/data/msg_17.txt b/lib-python/2.7/email/test/data/msg_17.txt rename from lib-python/2.7.0/email/test/data/msg_17.txt rename to lib-python/2.7/email/test/data/msg_17.txt diff --git a/lib-python/2.7.0/email/test/data/msg_18.txt b/lib-python/2.7/email/test/data/msg_18.txt rename from lib-python/2.7.0/email/test/data/msg_18.txt rename to lib-python/2.7/email/test/data/msg_18.txt diff --git a/lib-python/2.7.0/email/test/data/msg_19.txt b/lib-python/2.7/email/test/data/msg_19.txt rename from lib-python/2.7.0/email/test/data/msg_19.txt rename to lib-python/2.7/email/test/data/msg_19.txt diff --git a/lib-python/2.7.0/email/test/data/msg_20.txt b/lib-python/2.7/email/test/data/msg_20.txt rename from lib-python/2.7.0/email/test/data/msg_20.txt rename to lib-python/2.7/email/test/data/msg_20.txt diff --git a/lib-python/2.7.0/email/test/data/msg_21.txt b/lib-python/2.7/email/test/data/msg_21.txt rename from lib-python/2.7.0/email/test/data/msg_21.txt rename to lib-python/2.7/email/test/data/msg_21.txt diff --git a/lib-python/2.7.0/email/test/data/msg_22.txt b/lib-python/2.7/email/test/data/msg_22.txt rename from lib-python/2.7.0/email/test/data/msg_22.txt rename to lib-python/2.7/email/test/data/msg_22.txt diff --git a/lib-python/2.7.0/email/test/data/msg_23.txt b/lib-python/2.7/email/test/data/msg_23.txt rename from lib-python/2.7.0/email/test/data/msg_23.txt rename to lib-python/2.7/email/test/data/msg_23.txt diff --git a/lib-python/2.7.0/email/test/data/msg_24.txt b/lib-python/2.7/email/test/data/msg_24.txt rename from lib-python/2.7.0/email/test/data/msg_24.txt rename to lib-python/2.7/email/test/data/msg_24.txt diff --git a/lib-python/2.7.0/email/test/data/msg_25.txt b/lib-python/2.7/email/test/data/msg_25.txt rename from lib-python/2.7.0/email/test/data/msg_25.txt rename to lib-python/2.7/email/test/data/msg_25.txt diff --git a/lib-python/2.7.0/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt rename from lib-python/2.7.0/email/test/data/msg_26.txt rename to lib-python/2.7/email/test/data/msg_26.txt diff --git a/lib-python/2.7.0/email/test/data/msg_27.txt b/lib-python/2.7/email/test/data/msg_27.txt rename from lib-python/2.7.0/email/test/data/msg_27.txt rename to lib-python/2.7/email/test/data/msg_27.txt diff --git a/lib-python/2.7.0/email/test/data/msg_28.txt b/lib-python/2.7/email/test/data/msg_28.txt rename from lib-python/2.7.0/email/test/data/msg_28.txt rename to lib-python/2.7/email/test/data/msg_28.txt diff --git a/lib-python/2.7.0/email/test/data/msg_29.txt b/lib-python/2.7/email/test/data/msg_29.txt rename from lib-python/2.7.0/email/test/data/msg_29.txt rename to lib-python/2.7/email/test/data/msg_29.txt diff --git a/lib-python/2.7.0/email/test/data/msg_30.txt b/lib-python/2.7/email/test/data/msg_30.txt rename from lib-python/2.7.0/email/test/data/msg_30.txt rename to lib-python/2.7/email/test/data/msg_30.txt diff --git a/lib-python/2.7.0/email/test/data/msg_31.txt b/lib-python/2.7/email/test/data/msg_31.txt rename from lib-python/2.7.0/email/test/data/msg_31.txt rename to lib-python/2.7/email/test/data/msg_31.txt diff --git a/lib-python/2.7.0/email/test/data/msg_32.txt b/lib-python/2.7/email/test/data/msg_32.txt rename from lib-python/2.7.0/email/test/data/msg_32.txt rename to lib-python/2.7/email/test/data/msg_32.txt diff --git a/lib-python/2.7.0/email/test/data/msg_33.txt b/lib-python/2.7/email/test/data/msg_33.txt rename from lib-python/2.7.0/email/test/data/msg_33.txt rename to lib-python/2.7/email/test/data/msg_33.txt diff --git a/lib-python/2.7.0/email/test/data/msg_34.txt b/lib-python/2.7/email/test/data/msg_34.txt rename from lib-python/2.7.0/email/test/data/msg_34.txt rename to lib-python/2.7/email/test/data/msg_34.txt diff --git a/lib-python/2.7.0/email/test/data/msg_35.txt b/lib-python/2.7/email/test/data/msg_35.txt rename from lib-python/2.7.0/email/test/data/msg_35.txt rename to lib-python/2.7/email/test/data/msg_35.txt diff --git a/lib-python/2.7.0/email/test/data/msg_36.txt b/lib-python/2.7/email/test/data/msg_36.txt rename from lib-python/2.7.0/email/test/data/msg_36.txt rename to lib-python/2.7/email/test/data/msg_36.txt diff --git a/lib-python/2.7.0/email/test/data/msg_37.txt b/lib-python/2.7/email/test/data/msg_37.txt rename from lib-python/2.7.0/email/test/data/msg_37.txt rename to lib-python/2.7/email/test/data/msg_37.txt diff --git a/lib-python/2.7.0/email/test/data/msg_38.txt b/lib-python/2.7/email/test/data/msg_38.txt rename from lib-python/2.7.0/email/test/data/msg_38.txt rename to lib-python/2.7/email/test/data/msg_38.txt diff --git a/lib-python/2.7.0/email/test/data/msg_39.txt b/lib-python/2.7/email/test/data/msg_39.txt rename from lib-python/2.7.0/email/test/data/msg_39.txt rename to lib-python/2.7/email/test/data/msg_39.txt diff --git a/lib-python/2.7.0/email/test/data/msg_40.txt b/lib-python/2.7/email/test/data/msg_40.txt rename from lib-python/2.7.0/email/test/data/msg_40.txt rename to lib-python/2.7/email/test/data/msg_40.txt diff --git a/lib-python/2.7.0/email/test/data/msg_41.txt b/lib-python/2.7/email/test/data/msg_41.txt rename from lib-python/2.7.0/email/test/data/msg_41.txt rename to lib-python/2.7/email/test/data/msg_41.txt diff --git a/lib-python/2.7.0/email/test/data/msg_42.txt b/lib-python/2.7/email/test/data/msg_42.txt rename from lib-python/2.7.0/email/test/data/msg_42.txt rename to lib-python/2.7/email/test/data/msg_42.txt diff --git a/lib-python/2.7.0/email/test/data/msg_43.txt b/lib-python/2.7/email/test/data/msg_43.txt rename from lib-python/2.7.0/email/test/data/msg_43.txt rename to lib-python/2.7/email/test/data/msg_43.txt diff --git a/lib-python/2.7.0/email/test/data/msg_44.txt b/lib-python/2.7/email/test/data/msg_44.txt rename from lib-python/2.7.0/email/test/data/msg_44.txt rename to lib-python/2.7/email/test/data/msg_44.txt diff --git a/lib-python/2.7.0/email/test/data/msg_45.txt b/lib-python/2.7/email/test/data/msg_45.txt rename from lib-python/2.7.0/email/test/data/msg_45.txt rename to lib-python/2.7/email/test/data/msg_45.txt diff --git a/lib-python/2.7.0/email/test/data/msg_46.txt b/lib-python/2.7/email/test/data/msg_46.txt rename from lib-python/2.7.0/email/test/data/msg_46.txt rename to lib-python/2.7/email/test/data/msg_46.txt diff --git a/lib-python/2.7.0/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py rename from lib-python/2.7.0/email/test/test_email.py rename to lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7.0/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -40,13 +40,13 @@ SPACE = ' ' - + def openfile(filename, mode='r'): path = os.path.join(os.path.dirname(landmark), 'data', filename) return open(path, mode) - + # Base test class class TestEmailBase(unittest.TestCase): def ndiffAssertEqual(self, first, second): @@ -68,7 +68,7 @@ return msg - + # Test various aspects of the Message class's API class TestMessageAPI(TestEmailBase): def test_get_all(self): @@ -543,7 +543,7 @@ self.assertEqual('us-ascii', msg.get_content_charset()) - + # Test the email.Encoders module class TestEncoders(unittest.TestCase): def test_encode_empty_payload(self): @@ -572,7 +572,7 @@ msg = email.MIMEText.MIMEText('\xca\xb8', _charset='euc-jp') eq(msg['content-transfer-encoding'], '7bit') - + # Test long header wrapping class TestLongHeaders(TestEmailBase): def test_split_long_continuation(self): @@ -893,7 +893,7 @@ """) - + # Test mangling of "From " lines in the body of a message class TestFromMangling(unittest.TestCase): def setUp(self): @@ -927,7 +927,7 @@ """) - + # Test the basic MIMEAudio class class TestMIMEAudio(unittest.TestCase): def setUp(self): @@ -976,7 +976,7 @@ header='foobar') is missing) - + # Test the basic MIMEImage class class TestMIMEImage(unittest.TestCase): def setUp(self): @@ -1019,7 +1019,7 @@ header='foobar') is missing) - + # Test the basic MIMEText class class TestMIMEText(unittest.TestCase): def setUp(self): @@ -1071,7 +1071,7 @@ self.assertRaises(UnicodeEncodeError, MIMEText, teststr) - + # Test complicated multipart/* messages class TestMultipart(TestEmailBase): def setUp(self): @@ -1447,10 +1447,10 @@ YXNkZg== --===============0012394164==--""") - self.assertEquals(m.get_payload(0).get_payload(), 'YXNkZg==') - - - + self.assertEqual(m.get_payload(0).get_payload(), 'YXNkZg==') + + + # Test some badly formatted messages class TestNonConformant(TestEmailBase): def test_parse_missing_minor_type(self): @@ -1565,7 +1565,7 @@ - + # Test RFC 2047 header encoding and decoding class TestRFC2047(unittest.TestCase): def test_rfc2047_multiline(self): @@ -1627,7 +1627,7 @@ self.assertEqual(decode_header(s), [(b'andr\xe9=zz', 'iso-8659-1')]) - + # Test the MIMEMessage class class TestMIMEMessage(TestEmailBase): def setUp(self): @@ -1940,7 +1940,7 @@ msg = MIMEMultipart() self.assertTrue(msg.is_multipart()) - + # A general test of parser->model->generator idempotency. IOW, read a message # in, parse it into a message object tree, then without touching the tree, # regenerate the plain text. The original text and the transformed text @@ -1964,7 +1964,7 @@ eq(text, s.getvalue()) def test_parse_text_message(self): - eq = self.assertEquals + eq = self.assertEqual msg, text = self._msgobj('msg_01.txt') eq(msg.get_content_type(), 'text/plain') eq(msg.get_content_maintype(), 'text') @@ -1976,7 +1976,7 @@ self._idempotent(msg, text) def test_parse_untyped_message(self): - eq = self.assertEquals + eq = self.assertEqual msg, text = self._msgobj('msg_03.txt') eq(msg.get_content_type(), 'text/plain') eq(msg.get_params(), None) @@ -2048,7 +2048,7 @@ self._idempotent(msg, text) def test_content_type(self): - eq = self.assertEquals + eq = self.assertEqual unless = self.assertTrue # Get a message object and reset the seek pointer for other tests msg, text = self._msgobj('msg_05.txt') @@ -2080,7 +2080,7 @@ eq(msg4.get_payload(), 'Yadda yadda yadda\n') def test_parser(self): - eq = self.assertEquals + eq = self.assertEqual unless = self.assertTrue msg, text = self._msgobj('msg_06.txt') # Check some of the outer headers @@ -2097,7 +2097,7 @@ eq(msg1.get_payload(), '\n') - + # Test various other bits of the package's functionality class TestMiscellaneous(TestEmailBase): def test_message_from_string(self): @@ -2452,7 +2452,7 @@ """) - + # Test the iterator/generators class TestIterators(TestEmailBase): def test_body_line_iterator(self): @@ -2545,7 +2545,7 @@ self.assertTrue(''.join([il for il, n in imt]) == ''.join(om)) - + class TestParsers(TestEmailBase): def test_header_parser(self): eq = self.assertEqual @@ -2708,7 +2708,7 @@ msg = email.message_from_string(m) self.assertTrue(msg.get_payload(0).get_payload().endswith('\r\n')) - + class TestBase64(unittest.TestCase): def test_len(self): eq = self.assertEqual @@ -2780,7 +2780,7 @@ =?iso-8859-1?b?eHh4eCB4eHh4IHh4eHgg?=""") - + class TestQuopri(unittest.TestCase): def setUp(self): self.hlit = [chr(x) for x in range(ord('a'), ord('z')+1)] + \ @@ -2890,7 +2890,7 @@ two line""") - + # Test the Charset class class TestCharset(unittest.TestCase): def tearDown(self): @@ -2951,7 +2951,7 @@ charset = Charset('utf8') self.assertEqual(str(charset), 'utf-8') - + # Test multilingual MIME headers. class TestHeader(TestEmailBase): def test_simple(self): @@ -3114,7 +3114,7 @@ raises(Errors.HeaderParseError, decode_header, s) - + # Test RFC 2231 header parameters (en/de)coding class TestRFC2231(TestEmailBase): def test_get_param(self): @@ -3426,7 +3426,7 @@ eq(s, 'My Document For You') - + # Tests to ensure that signed parts of an email are completely preserved, as # required by RFC1847 section 2.1. Note that these are incomplete, because the # email package does not currently always preserve the body. See issue 1670765. @@ -3462,7 +3462,7 @@ self._signed_parts_eq(original, result) - + def _testclasses(): mod = sys.modules[__name__] return [getattr(mod, name) for name in dir(mod) if name.startswith('Test')] @@ -3480,6 +3480,6 @@ run_unittest(testclass) - + if __name__ == '__main__': unittest.main(defaultTest='suite') diff --git a/lib-python/2.7.0/email/test/test_email_codecs.py b/lib-python/2.7/email/test/test_email_codecs.py rename from lib-python/2.7.0/email/test/test_email_codecs.py rename to lib-python/2.7/email/test/test_email_codecs.py diff --git a/lib-python/2.7.0/email/test/test_email_codecs_renamed.py b/lib-python/2.7/email/test/test_email_codecs_renamed.py rename from lib-python/2.7.0/email/test/test_email_codecs_renamed.py rename to lib-python/2.7/email/test/test_email_codecs_renamed.py diff --git a/lib-python/2.7.0/email/test/test_email_renamed.py b/lib-python/2.7/email/test/test_email_renamed.py rename from lib-python/2.7.0/email/test/test_email_renamed.py rename to lib-python/2.7/email/test/test_email_renamed.py --- a/lib-python/2.7.0/email/test/test_email_renamed.py +++ b/lib-python/2.7/email/test/test_email_renamed.py @@ -41,13 +41,13 @@ SPACE = ' ' - + def openfile(filename, mode='r'): path = os.path.join(os.path.dirname(landmark), 'data', filename) return open(path, mode) - + # Base test class class TestEmailBase(unittest.TestCase): def ndiffAssertEqual(self, first, second): @@ -69,7 +69,7 @@ return msg - + # Test various aspects of the Message class's API class TestMessageAPI(TestEmailBase): def test_get_all(self): @@ -504,7 +504,7 @@ self.assertEqual(msg.get_payload(decode=True), x) - + # Test the email.encoders module class TestEncoders(unittest.TestCase): def test_encode_empty_payload(self): @@ -531,7 +531,7 @@ eq(msg['content-transfer-encoding'], 'quoted-printable') - + # Test long header wrapping class TestLongHeaders(TestEmailBase): def test_split_long_continuation(self): @@ -852,7 +852,7 @@ """) - + # Test mangling of "From " lines in the body of a message class TestFromMangling(unittest.TestCase): def setUp(self): @@ -886,7 +886,7 @@ """) - + # Test the basic MIMEAudio class class TestMIMEAudio(unittest.TestCase): def setUp(self): @@ -935,7 +935,7 @@ header='foobar') is missing) - + # Test the basic MIMEImage class class TestMIMEImage(unittest.TestCase): def setUp(self): @@ -978,7 +978,7 @@ header='foobar') is missing) - + # Test the basic MIMEApplication class class TestMIMEApplication(unittest.TestCase): def test_headers(self): @@ -995,7 +995,7 @@ eq(msg.get_payload(decode=True), bytes) - + # Test the basic MIMEText class class TestMIMEText(unittest.TestCase): def setUp(self): @@ -1022,7 +1022,7 @@ eq(msg['content-type'], 'text/plain; charset="us-ascii"') - + # Test complicated multipart/* messages class TestMultipart(TestEmailBase): def setUp(self): @@ -1398,10 +1398,10 @@ YXNkZg== --===============0012394164==--""") - self.assertEquals(m.get_payload(0).get_payload(), 'YXNkZg==') - - - + self.assertEqual(m.get_payload(0).get_payload(), 'YXNkZg==') + + + # Test some badly formatted messages class TestNonConformant(TestEmailBase): def test_parse_missing_minor_type(self): @@ -1515,7 +1515,7 @@ eq(msg.defects[0].line, ' Line 1\n') - + # Test RFC 2047 header encoding and decoding class TestRFC2047(unittest.TestCase): def test_rfc2047_multiline(self): @@ -1562,7 +1562,7 @@ ('sbord', None)]) - + # Test the MIMEMessage class class TestMIMEMessage(TestEmailBase): def setUp(self): @@ -1872,7 +1872,7 @@ eq(msg.get_payload(1), text2) - + # A general test of parser->model->generator idempotency. IOW, read a message # in, parse it into a message object tree, then without touching the tree, # regenerate the plain text. The original text and the transformed text @@ -1896,7 +1896,7 @@ eq(text, s.getvalue()) def test_parse_text_message(self): - eq = self.assertEquals + eq = self.assertEqual msg, text = self._msgobj('msg_01.txt') eq(msg.get_content_type(), 'text/plain') eq(msg.get_content_maintype(), 'text') @@ -1908,7 +1908,7 @@ self._idempotent(msg, text) def test_parse_untyped_message(self): - eq = self.assertEquals + eq = self.assertEqual msg, text = self._msgobj('msg_03.txt') eq(msg.get_content_type(), 'text/plain') eq(msg.get_params(), None) @@ -1980,7 +1980,7 @@ self._idempotent(msg, text) def test_content_type(self): - eq = self.assertEquals + eq = self.assertEqual unless = self.assertTrue # Get a message object and reset the seek pointer for other tests msg, text = self._msgobj('msg_05.txt') @@ -2012,7 +2012,7 @@ eq(msg4.get_payload(), 'Yadda yadda yadda\n') def test_parser(self): - eq = self.assertEquals + eq = self.assertEqual unless = self.assertTrue msg, text = self._msgobj('msg_06.txt') # Check some of the outer headers @@ -2029,7 +2029,7 @@ eq(msg1.get_payload(), '\n') - + # Test various other bits of the package's functionality class TestMiscellaneous(TestEmailBase): def test_message_from_string(self): @@ -2354,7 +2354,7 @@ """) - + # Test the iterator/generators class TestIterators(TestEmailBase): def test_body_line_iterator(self): @@ -2414,7 +2414,7 @@ """) - + class TestParsers(TestEmailBase): def test_header_parser(self): eq = self.assertEqual @@ -2559,7 +2559,7 @@ eq(msg.get_payload(), 'body') - + class TestBase64(unittest.TestCase): def test_len(self): eq = self.assertEqual @@ -2631,7 +2631,7 @@ =?iso-8859-1?b?eHh4eCB4eHh4IHh4eHgg?=""") - + class TestQuopri(unittest.TestCase): def setUp(self): self.hlit = [chr(x) for x in range(ord('a'), ord('z')+1)] + \ @@ -2741,7 +2741,7 @@ two line""") - + # Test the Charset class class TestCharset(unittest.TestCase): def tearDown(self): @@ -2799,7 +2799,7 @@ self.assertRaises(errors.CharsetError, Charset, 'asc\xffii') - + # Test multilingual MIME headers. class TestHeader(TestEmailBase): def test_simple(self): @@ -2962,7 +2962,7 @@ raises(errors.HeaderParseError, decode_header, s) - + # Test RFC 2231 header parameters (en/de)coding class TestRFC2231(TestEmailBase): def test_get_param(self): @@ -3274,7 +3274,7 @@ eq(s, 'My Document For You') - + def _testclasses(): mod = sys.modules[__name__] return [getattr(mod, name) for name in dir(mod) if name.startswith('Test')] @@ -3292,6 +3292,6 @@ run_unittest(testclass) - + if __name__ == '__main__': unittest.main(defaultTest='suite') diff --git a/lib-python/2.7.0/email/test/test_email_torture.py b/lib-python/2.7/email/test/test_email_torture.py rename from lib-python/2.7.0/email/test/test_email_torture.py rename to lib-python/2.7/email/test/test_email_torture.py diff --git a/lib-python/2.7.0/email/utils.py b/lib-python/2.7/email/utils.py rename from lib-python/2.7.0/email/utils.py rename to lib-python/2.7/email/utils.py diff --git a/lib-python/2.7.0/encodings/__init__.py b/lib-python/2.7/encodings/__init__.py rename from lib-python/2.7.0/encodings/__init__.py rename to lib-python/2.7/encodings/__init__.py diff --git a/lib-python/2.7.0/encodings/aliases.py b/lib-python/2.7/encodings/aliases.py rename from lib-python/2.7.0/encodings/aliases.py rename to lib-python/2.7/encodings/aliases.py diff --git a/lib-python/2.7.0/encodings/ascii.py b/lib-python/2.7/encodings/ascii.py rename from lib-python/2.7.0/encodings/ascii.py rename to lib-python/2.7/encodings/ascii.py diff --git a/lib-python/2.7.0/encodings/base64_codec.py b/lib-python/2.7/encodings/base64_codec.py rename from lib-python/2.7.0/encodings/base64_codec.py rename to lib-python/2.7/encodings/base64_codec.py diff --git a/lib-python/2.7.0/encodings/big5.py b/lib-python/2.7/encodings/big5.py rename from lib-python/2.7.0/encodings/big5.py rename to lib-python/2.7/encodings/big5.py diff --git a/lib-python/2.7.0/encodings/big5hkscs.py b/lib-python/2.7/encodings/big5hkscs.py rename from lib-python/2.7.0/encodings/big5hkscs.py rename to lib-python/2.7/encodings/big5hkscs.py diff --git a/lib-python/2.7.0/encodings/bz2_codec.py b/lib-python/2.7/encodings/bz2_codec.py rename from lib-python/2.7.0/encodings/bz2_codec.py rename to lib-python/2.7/encodings/bz2_codec.py diff --git a/lib-python/2.7.0/encodings/charmap.py b/lib-python/2.7/encodings/charmap.py rename from lib-python/2.7.0/encodings/charmap.py rename to lib-python/2.7/encodings/charmap.py diff --git a/lib-python/2.7.0/encodings/cp037.py b/lib-python/2.7/encodings/cp037.py rename from lib-python/2.7.0/encodings/cp037.py rename to lib-python/2.7/encodings/cp037.py diff --git a/lib-python/2.7.0/encodings/cp1006.py b/lib-python/2.7/encodings/cp1006.py rename from lib-python/2.7.0/encodings/cp1006.py rename to lib-python/2.7/encodings/cp1006.py diff --git a/lib-python/2.7.0/encodings/cp1026.py b/lib-python/2.7/encodings/cp1026.py rename from lib-python/2.7.0/encodings/cp1026.py rename to lib-python/2.7/encodings/cp1026.py diff --git a/lib-python/2.7.0/encodings/cp1140.py b/lib-python/2.7/encodings/cp1140.py rename from lib-python/2.7.0/encodings/cp1140.py rename to lib-python/2.7/encodings/cp1140.py diff --git a/lib-python/2.7.0/encodings/cp1250.py b/lib-python/2.7/encodings/cp1250.py rename from lib-python/2.7.0/encodings/cp1250.py rename to lib-python/2.7/encodings/cp1250.py diff --git a/lib-python/2.7.0/encodings/cp1251.py b/lib-python/2.7/encodings/cp1251.py rename from lib-python/2.7.0/encodings/cp1251.py rename to lib-python/2.7/encodings/cp1251.py diff --git a/lib-python/2.7.0/encodings/cp1252.py b/lib-python/2.7/encodings/cp1252.py rename from lib-python/2.7.0/encodings/cp1252.py rename to lib-python/2.7/encodings/cp1252.py diff --git a/lib-python/2.7.0/encodings/cp1253.py b/lib-python/2.7/encodings/cp1253.py rename from lib-python/2.7.0/encodings/cp1253.py rename to lib-python/2.7/encodings/cp1253.py diff --git a/lib-python/2.7.0/encodings/cp1254.py b/lib-python/2.7/encodings/cp1254.py rename from lib-python/2.7.0/encodings/cp1254.py rename to lib-python/2.7/encodings/cp1254.py diff --git a/lib-python/2.7.0/encodings/cp1255.py b/lib-python/2.7/encodings/cp1255.py rename from lib-python/2.7.0/encodings/cp1255.py rename to lib-python/2.7/encodings/cp1255.py diff --git a/lib-python/2.7.0/encodings/cp1256.py b/lib-python/2.7/encodings/cp1256.py rename from lib-python/2.7.0/encodings/cp1256.py rename to lib-python/2.7/encodings/cp1256.py diff --git a/lib-python/2.7.0/encodings/cp1257.py b/lib-python/2.7/encodings/cp1257.py rename from lib-python/2.7.0/encodings/cp1257.py rename to lib-python/2.7/encodings/cp1257.py diff --git a/lib-python/2.7.0/encodings/cp1258.py b/lib-python/2.7/encodings/cp1258.py rename from lib-python/2.7.0/encodings/cp1258.py rename to lib-python/2.7/encodings/cp1258.py diff --git a/lib-python/2.7.0/encodings/cp424.py b/lib-python/2.7/encodings/cp424.py rename from lib-python/2.7.0/encodings/cp424.py rename to lib-python/2.7/encodings/cp424.py diff --git a/lib-python/2.7.0/encodings/cp437.py b/lib-python/2.7/encodings/cp437.py rename from lib-python/2.7.0/encodings/cp437.py rename to lib-python/2.7/encodings/cp437.py diff --git a/lib-python/2.7.0/encodings/cp500.py b/lib-python/2.7/encodings/cp500.py rename from lib-python/2.7.0/encodings/cp500.py rename to lib-python/2.7/encodings/cp500.py diff --git a/lib-python/2.7.0/encodings/cp720.py b/lib-python/2.7/encodings/cp720.py rename from lib-python/2.7.0/encodings/cp720.py rename to lib-python/2.7/encodings/cp720.py diff --git a/lib-python/2.7.0/encodings/cp737.py b/lib-python/2.7/encodings/cp737.py rename from lib-python/2.7.0/encodings/cp737.py rename to lib-python/2.7/encodings/cp737.py diff --git a/lib-python/2.7.0/encodings/cp775.py b/lib-python/2.7/encodings/cp775.py rename from lib-python/2.7.0/encodings/cp775.py rename to lib-python/2.7/encodings/cp775.py diff --git a/lib-python/2.7.0/encodings/cp850.py b/lib-python/2.7/encodings/cp850.py rename from lib-python/2.7.0/encodings/cp850.py rename to lib-python/2.7/encodings/cp850.py diff --git a/lib-python/2.7.0/encodings/cp852.py b/lib-python/2.7/encodings/cp852.py rename from lib-python/2.7.0/encodings/cp852.py rename to lib-python/2.7/encodings/cp852.py diff --git a/lib-python/2.7.0/encodings/cp855.py b/lib-python/2.7/encodings/cp855.py rename from lib-python/2.7.0/encodings/cp855.py rename to lib-python/2.7/encodings/cp855.py diff --git a/lib-python/2.7.0/encodings/cp856.py b/lib-python/2.7/encodings/cp856.py rename from lib-python/2.7.0/encodings/cp856.py rename to lib-python/2.7/encodings/cp856.py diff --git a/lib-python/2.7.0/encodings/cp857.py b/lib-python/2.7/encodings/cp857.py rename from lib-python/2.7.0/encodings/cp857.py rename to lib-python/2.7/encodings/cp857.py diff --git a/lib-python/2.7.0/encodings/cp858.py b/lib-python/2.7/encodings/cp858.py rename from lib-python/2.7.0/encodings/cp858.py rename to lib-python/2.7/encodings/cp858.py diff --git a/lib-python/2.7.0/encodings/cp860.py b/lib-python/2.7/encodings/cp860.py rename from lib-python/2.7.0/encodings/cp860.py rename to lib-python/2.7/encodings/cp860.py diff --git a/lib-python/2.7.0/encodings/cp861.py b/lib-python/2.7/encodings/cp861.py rename from lib-python/2.7.0/encodings/cp861.py rename to lib-python/2.7/encodings/cp861.py diff --git a/lib-python/2.7.0/encodings/cp862.py b/lib-python/2.7/encodings/cp862.py rename from lib-python/2.7.0/encodings/cp862.py rename to lib-python/2.7/encodings/cp862.py diff --git a/lib-python/2.7.0/encodings/cp863.py b/lib-python/2.7/encodings/cp863.py rename from lib-python/2.7.0/encodings/cp863.py rename to lib-python/2.7/encodings/cp863.py diff --git a/lib-python/2.7.0/encodings/cp864.py b/lib-python/2.7/encodings/cp864.py rename from lib-python/2.7.0/encodings/cp864.py rename to lib-python/2.7/encodings/cp864.py diff --git a/lib-python/2.7.0/encodings/cp865.py b/lib-python/2.7/encodings/cp865.py rename from lib-python/2.7.0/encodings/cp865.py rename to lib-python/2.7/encodings/cp865.py diff --git a/lib-python/2.7.0/encodings/cp866.py b/lib-python/2.7/encodings/cp866.py rename from lib-python/2.7.0/encodings/cp866.py rename to lib-python/2.7/encodings/cp866.py diff --git a/lib-python/2.7.0/encodings/cp869.py b/lib-python/2.7/encodings/cp869.py rename from lib-python/2.7.0/encodings/cp869.py rename to lib-python/2.7/encodings/cp869.py diff --git a/lib-python/2.7.0/encodings/cp874.py b/lib-python/2.7/encodings/cp874.py rename from lib-python/2.7.0/encodings/cp874.py rename to lib-python/2.7/encodings/cp874.py diff --git a/lib-python/2.7.0/encodings/cp875.py b/lib-python/2.7/encodings/cp875.py rename from lib-python/2.7.0/encodings/cp875.py rename to lib-python/2.7/encodings/cp875.py diff --git a/lib-python/2.7.0/encodings/cp932.py b/lib-python/2.7/encodings/cp932.py rename from lib-python/2.7.0/encodings/cp932.py rename to lib-python/2.7/encodings/cp932.py diff --git a/lib-python/2.7.0/encodings/cp949.py b/lib-python/2.7/encodings/cp949.py rename from lib-python/2.7.0/encodings/cp949.py rename to lib-python/2.7/encodings/cp949.py diff --git a/lib-python/2.7.0/encodings/cp950.py b/lib-python/2.7/encodings/cp950.py rename from lib-python/2.7.0/encodings/cp950.py rename to lib-python/2.7/encodings/cp950.py diff --git a/lib-python/2.7.0/encodings/euc_jis_2004.py b/lib-python/2.7/encodings/euc_jis_2004.py rename from lib-python/2.7.0/encodings/euc_jis_2004.py rename to lib-python/2.7/encodings/euc_jis_2004.py diff --git a/lib-python/2.7.0/encodings/euc_jisx0213.py b/lib-python/2.7/encodings/euc_jisx0213.py rename from lib-python/2.7.0/encodings/euc_jisx0213.py rename to lib-python/2.7/encodings/euc_jisx0213.py diff --git a/lib-python/2.7.0/encodings/euc_jp.py b/lib-python/2.7/encodings/euc_jp.py rename from lib-python/2.7.0/encodings/euc_jp.py rename to lib-python/2.7/encodings/euc_jp.py diff --git a/lib-python/2.7.0/encodings/euc_kr.py b/lib-python/2.7/encodings/euc_kr.py rename from lib-python/2.7.0/encodings/euc_kr.py rename to lib-python/2.7/encodings/euc_kr.py diff --git a/lib-python/2.7.0/encodings/gb18030.py b/lib-python/2.7/encodings/gb18030.py rename from lib-python/2.7.0/encodings/gb18030.py rename to lib-python/2.7/encodings/gb18030.py diff --git a/lib-python/2.7.0/encodings/gb2312.py b/lib-python/2.7/encodings/gb2312.py rename from lib-python/2.7.0/encodings/gb2312.py rename to lib-python/2.7/encodings/gb2312.py diff --git a/lib-python/2.7.0/encodings/gbk.py b/lib-python/2.7/encodings/gbk.py rename from lib-python/2.7.0/encodings/gbk.py rename to lib-python/2.7/encodings/gbk.py diff --git a/lib-python/2.7.0/encodings/hex_codec.py b/lib-python/2.7/encodings/hex_codec.py rename from lib-python/2.7.0/encodings/hex_codec.py rename to lib-python/2.7/encodings/hex_codec.py diff --git a/lib-python/2.7.0/encodings/hp_roman8.py b/lib-python/2.7/encodings/hp_roman8.py rename from lib-python/2.7.0/encodings/hp_roman8.py rename to lib-python/2.7/encodings/hp_roman8.py diff --git a/lib-python/2.7.0/encodings/hz.py b/lib-python/2.7/encodings/hz.py rename from lib-python/2.7.0/encodings/hz.py rename to lib-python/2.7/encodings/hz.py diff --git a/lib-python/2.7.0/encodings/idna.py b/lib-python/2.7/encodings/idna.py rename from lib-python/2.7.0/encodings/idna.py rename to lib-python/2.7/encodings/idna.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp.py b/lib-python/2.7/encodings/iso2022_jp.py rename from lib-python/2.7.0/encodings/iso2022_jp.py rename to lib-python/2.7/encodings/iso2022_jp.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp_1.py b/lib-python/2.7/encodings/iso2022_jp_1.py rename from lib-python/2.7.0/encodings/iso2022_jp_1.py rename to lib-python/2.7/encodings/iso2022_jp_1.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp_2.py b/lib-python/2.7/encodings/iso2022_jp_2.py rename from lib-python/2.7.0/encodings/iso2022_jp_2.py rename to lib-python/2.7/encodings/iso2022_jp_2.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp_2004.py b/lib-python/2.7/encodings/iso2022_jp_2004.py rename from lib-python/2.7.0/encodings/iso2022_jp_2004.py rename to lib-python/2.7/encodings/iso2022_jp_2004.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp_3.py b/lib-python/2.7/encodings/iso2022_jp_3.py rename from lib-python/2.7.0/encodings/iso2022_jp_3.py rename to lib-python/2.7/encodings/iso2022_jp_3.py diff --git a/lib-python/2.7.0/encodings/iso2022_jp_ext.py b/lib-python/2.7/encodings/iso2022_jp_ext.py rename from lib-python/2.7.0/encodings/iso2022_jp_ext.py rename to lib-python/2.7/encodings/iso2022_jp_ext.py diff --git a/lib-python/2.7.0/encodings/iso2022_kr.py b/lib-python/2.7/encodings/iso2022_kr.py rename from lib-python/2.7.0/encodings/iso2022_kr.py rename to lib-python/2.7/encodings/iso2022_kr.py diff --git a/lib-python/2.7.0/encodings/iso8859_1.py b/lib-python/2.7/encodings/iso8859_1.py rename from lib-python/2.7.0/encodings/iso8859_1.py rename to lib-python/2.7/encodings/iso8859_1.py diff --git a/lib-python/2.7.0/encodings/iso8859_10.py b/lib-python/2.7/encodings/iso8859_10.py rename from lib-python/2.7.0/encodings/iso8859_10.py rename to lib-python/2.7/encodings/iso8859_10.py diff --git a/lib-python/2.7.0/encodings/iso8859_11.py b/lib-python/2.7/encodings/iso8859_11.py rename from lib-python/2.7.0/encodings/iso8859_11.py rename to lib-python/2.7/encodings/iso8859_11.py diff --git a/lib-python/2.7.0/encodings/iso8859_13.py b/lib-python/2.7/encodings/iso8859_13.py rename from lib-python/2.7.0/encodings/iso8859_13.py rename to lib-python/2.7/encodings/iso8859_13.py diff --git a/lib-python/2.7.0/encodings/iso8859_14.py b/lib-python/2.7/encodings/iso8859_14.py rename from lib-python/2.7.0/encodings/iso8859_14.py rename to lib-python/2.7/encodings/iso8859_14.py diff --git a/lib-python/2.7.0/encodings/iso8859_15.py b/lib-python/2.7/encodings/iso8859_15.py rename from lib-python/2.7.0/encodings/iso8859_15.py rename to lib-python/2.7/encodings/iso8859_15.py diff --git a/lib-python/2.7.0/encodings/iso8859_16.py b/lib-python/2.7/encodings/iso8859_16.py rename from lib-python/2.7.0/encodings/iso8859_16.py rename to lib-python/2.7/encodings/iso8859_16.py diff --git a/lib-python/2.7.0/encodings/iso8859_2.py b/lib-python/2.7/encodings/iso8859_2.py rename from lib-python/2.7.0/encodings/iso8859_2.py rename to lib-python/2.7/encodings/iso8859_2.py diff --git a/lib-python/2.7.0/encodings/iso8859_3.py b/lib-python/2.7/encodings/iso8859_3.py rename from lib-python/2.7.0/encodings/iso8859_3.py rename to lib-python/2.7/encodings/iso8859_3.py diff --git a/lib-python/2.7.0/encodings/iso8859_4.py b/lib-python/2.7/encodings/iso8859_4.py rename from lib-python/2.7.0/encodings/iso8859_4.py rename to lib-python/2.7/encodings/iso8859_4.py diff --git a/lib-python/2.7.0/encodings/iso8859_5.py b/lib-python/2.7/encodings/iso8859_5.py rename from lib-python/2.7.0/encodings/iso8859_5.py rename to lib-python/2.7/encodings/iso8859_5.py diff --git a/lib-python/2.7.0/encodings/iso8859_6.py b/lib-python/2.7/encodings/iso8859_6.py rename from lib-python/2.7.0/encodings/iso8859_6.py rename to lib-python/2.7/encodings/iso8859_6.py diff --git a/lib-python/2.7.0/encodings/iso8859_7.py b/lib-python/2.7/encodings/iso8859_7.py rename from lib-python/2.7.0/encodings/iso8859_7.py rename to lib-python/2.7/encodings/iso8859_7.py diff --git a/lib-python/2.7.0/encodings/iso8859_8.py b/lib-python/2.7/encodings/iso8859_8.py rename from lib-python/2.7.0/encodings/iso8859_8.py rename to lib-python/2.7/encodings/iso8859_8.py diff --git a/lib-python/2.7.0/encodings/iso8859_9.py b/lib-python/2.7/encodings/iso8859_9.py rename from lib-python/2.7.0/encodings/iso8859_9.py rename to lib-python/2.7/encodings/iso8859_9.py diff --git a/lib-python/2.7.0/encodings/johab.py b/lib-python/2.7/encodings/johab.py rename from lib-python/2.7.0/encodings/johab.py rename to lib-python/2.7/encodings/johab.py diff --git a/lib-python/2.7.0/encodings/koi8_r.py b/lib-python/2.7/encodings/koi8_r.py rename from lib-python/2.7.0/encodings/koi8_r.py rename to lib-python/2.7/encodings/koi8_r.py diff --git a/lib-python/2.7.0/encodings/koi8_u.py b/lib-python/2.7/encodings/koi8_u.py rename from lib-python/2.7.0/encodings/koi8_u.py rename to lib-python/2.7/encodings/koi8_u.py diff --git a/lib-python/2.7.0/encodings/latin_1.py b/lib-python/2.7/encodings/latin_1.py rename from lib-python/2.7.0/encodings/latin_1.py rename to lib-python/2.7/encodings/latin_1.py diff --git a/lib-python/2.7.0/encodings/mac_arabic.py b/lib-python/2.7/encodings/mac_arabic.py rename from lib-python/2.7.0/encodings/mac_arabic.py rename to lib-python/2.7/encodings/mac_arabic.py diff --git a/lib-python/2.7.0/encodings/mac_centeuro.py b/lib-python/2.7/encodings/mac_centeuro.py rename from lib-python/2.7.0/encodings/mac_centeuro.py rename to lib-python/2.7/encodings/mac_centeuro.py diff --git a/lib-python/2.7.0/encodings/mac_croatian.py b/lib-python/2.7/encodings/mac_croatian.py rename from lib-python/2.7.0/encodings/mac_croatian.py rename to lib-python/2.7/encodings/mac_croatian.py diff --git a/lib-python/2.7.0/encodings/mac_cyrillic.py b/lib-python/2.7/encodings/mac_cyrillic.py rename from lib-python/2.7.0/encodings/mac_cyrillic.py rename to lib-python/2.7/encodings/mac_cyrillic.py diff --git a/lib-python/2.7.0/encodings/mac_farsi.py b/lib-python/2.7/encodings/mac_farsi.py rename from lib-python/2.7.0/encodings/mac_farsi.py rename to lib-python/2.7/encodings/mac_farsi.py diff --git a/lib-python/2.7.0/encodings/mac_greek.py b/lib-python/2.7/encodings/mac_greek.py rename from lib-python/2.7.0/encodings/mac_greek.py rename to lib-python/2.7/encodings/mac_greek.py diff --git a/lib-python/2.7.0/encodings/mac_iceland.py b/lib-python/2.7/encodings/mac_iceland.py rename from lib-python/2.7.0/encodings/mac_iceland.py rename to lib-python/2.7/encodings/mac_iceland.py diff --git a/lib-python/2.7.0/encodings/mac_latin2.py b/lib-python/2.7/encodings/mac_latin2.py rename from lib-python/2.7.0/encodings/mac_latin2.py rename to lib-python/2.7/encodings/mac_latin2.py diff --git a/lib-python/2.7.0/encodings/mac_roman.py b/lib-python/2.7/encodings/mac_roman.py rename from lib-python/2.7.0/encodings/mac_roman.py rename to lib-python/2.7/encodings/mac_roman.py diff --git a/lib-python/2.7.0/encodings/mac_romanian.py b/lib-python/2.7/encodings/mac_romanian.py rename from lib-python/2.7.0/encodings/mac_romanian.py rename to lib-python/2.7/encodings/mac_romanian.py diff --git a/lib-python/2.7.0/encodings/mac_turkish.py b/lib-python/2.7/encodings/mac_turkish.py rename from lib-python/2.7.0/encodings/mac_turkish.py rename to lib-python/2.7/encodings/mac_turkish.py diff --git a/lib-python/2.7.0/encodings/mbcs.py b/lib-python/2.7/encodings/mbcs.py rename from lib-python/2.7.0/encodings/mbcs.py rename to lib-python/2.7/encodings/mbcs.py diff --git a/lib-python/2.7.0/encodings/palmos.py b/lib-python/2.7/encodings/palmos.py rename from lib-python/2.7.0/encodings/palmos.py rename to lib-python/2.7/encodings/palmos.py diff --git a/lib-python/2.7.0/encodings/ptcp154.py b/lib-python/2.7/encodings/ptcp154.py rename from lib-python/2.7.0/encodings/ptcp154.py rename to lib-python/2.7/encodings/ptcp154.py diff --git a/lib-python/2.7.0/encodings/punycode.py b/lib-python/2.7/encodings/punycode.py rename from lib-python/2.7.0/encodings/punycode.py rename to lib-python/2.7/encodings/punycode.py diff --git a/lib-python/2.7.0/encodings/quopri_codec.py b/lib-python/2.7/encodings/quopri_codec.py rename from lib-python/2.7.0/encodings/quopri_codec.py rename to lib-python/2.7/encodings/quopri_codec.py diff --git a/lib-python/2.7.0/encodings/raw_unicode_escape.py b/lib-python/2.7/encodings/raw_unicode_escape.py rename from lib-python/2.7.0/encodings/raw_unicode_escape.py rename to lib-python/2.7/encodings/raw_unicode_escape.py diff --git a/lib-python/2.7.0/encodings/rot_13.py b/lib-python/2.7/encodings/rot_13.py rename from lib-python/2.7.0/encodings/rot_13.py rename to lib-python/2.7/encodings/rot_13.py diff --git a/lib-python/2.7.0/encodings/shift_jis.py b/lib-python/2.7/encodings/shift_jis.py rename from lib-python/2.7.0/encodings/shift_jis.py rename to lib-python/2.7/encodings/shift_jis.py diff --git a/lib-python/2.7.0/encodings/shift_jis_2004.py b/lib-python/2.7/encodings/shift_jis_2004.py rename from lib-python/2.7.0/encodings/shift_jis_2004.py rename to lib-python/2.7/encodings/shift_jis_2004.py diff --git a/lib-python/2.7.0/encodings/shift_jisx0213.py b/lib-python/2.7/encodings/shift_jisx0213.py rename from lib-python/2.7.0/encodings/shift_jisx0213.py rename to lib-python/2.7/encodings/shift_jisx0213.py diff --git a/lib-python/2.7.0/encodings/string_escape.py b/lib-python/2.7/encodings/string_escape.py rename from lib-python/2.7.0/encodings/string_escape.py rename to lib-python/2.7/encodings/string_escape.py diff --git a/lib-python/2.7.0/encodings/tis_620.py b/lib-python/2.7/encodings/tis_620.py rename from lib-python/2.7.0/encodings/tis_620.py rename to lib-python/2.7/encodings/tis_620.py diff --git a/lib-python/2.7.0/encodings/undefined.py b/lib-python/2.7/encodings/undefined.py rename from lib-python/2.7.0/encodings/undefined.py rename to lib-python/2.7/encodings/undefined.py diff --git a/lib-python/2.7.0/encodings/unicode_escape.py b/lib-python/2.7/encodings/unicode_escape.py rename from lib-python/2.7.0/encodings/unicode_escape.py rename to lib-python/2.7/encodings/unicode_escape.py diff --git a/lib-python/2.7.0/encodings/unicode_internal.py b/lib-python/2.7/encodings/unicode_internal.py rename from lib-python/2.7.0/encodings/unicode_internal.py rename to lib-python/2.7/encodings/unicode_internal.py diff --git a/lib-python/2.7.0/encodings/utf_16.py b/lib-python/2.7/encodings/utf_16.py rename from lib-python/2.7.0/encodings/utf_16.py rename to lib-python/2.7/encodings/utf_16.py diff --git a/lib-python/2.7.0/encodings/utf_16_be.py b/lib-python/2.7/encodings/utf_16_be.py rename from lib-python/2.7.0/encodings/utf_16_be.py rename to lib-python/2.7/encodings/utf_16_be.py diff --git a/lib-python/2.7.0/encodings/utf_16_le.py b/lib-python/2.7/encodings/utf_16_le.py rename from lib-python/2.7.0/encodings/utf_16_le.py rename to lib-python/2.7/encodings/utf_16_le.py diff --git a/lib-python/2.7.0/encodings/utf_32.py b/lib-python/2.7/encodings/utf_32.py rename from lib-python/2.7.0/encodings/utf_32.py rename to lib-python/2.7/encodings/utf_32.py diff --git a/lib-python/2.7.0/encodings/utf_32_be.py b/lib-python/2.7/encodings/utf_32_be.py rename from lib-python/2.7.0/encodings/utf_32_be.py rename to lib-python/2.7/encodings/utf_32_be.py diff --git a/lib-python/2.7.0/encodings/utf_32_le.py b/lib-python/2.7/encodings/utf_32_le.py rename from lib-python/2.7.0/encodings/utf_32_le.py rename to lib-python/2.7/encodings/utf_32_le.py diff --git a/lib-python/2.7.0/encodings/utf_7.py b/lib-python/2.7/encodings/utf_7.py rename from lib-python/2.7.0/encodings/utf_7.py rename to lib-python/2.7/encodings/utf_7.py diff --git a/lib-python/2.7.0/encodings/utf_8.py b/lib-python/2.7/encodings/utf_8.py rename from lib-python/2.7.0/encodings/utf_8.py rename to lib-python/2.7/encodings/utf_8.py diff --git a/lib-python/2.7.0/encodings/utf_8_sig.py b/lib-python/2.7/encodings/utf_8_sig.py rename from lib-python/2.7.0/encodings/utf_8_sig.py rename to lib-python/2.7/encodings/utf_8_sig.py diff --git a/lib-python/2.7.0/encodings/uu_codec.py b/lib-python/2.7/encodings/uu_codec.py rename from lib-python/2.7.0/encodings/uu_codec.py rename to lib-python/2.7/encodings/uu_codec.py diff --git a/lib-python/2.7.0/encodings/zlib_codec.py b/lib-python/2.7/encodings/zlib_codec.py rename from lib-python/2.7.0/encodings/zlib_codec.py rename to lib-python/2.7/encodings/zlib_codec.py diff --git a/lib-python/2.7.0/filecmp.py b/lib-python/2.7/filecmp.py rename from lib-python/2.7.0/filecmp.py rename to lib-python/2.7/filecmp.py diff --git a/lib-python/2.7.0/fileinput.py b/lib-python/2.7/fileinput.py rename from lib-python/2.7.0/fileinput.py rename to lib-python/2.7/fileinput.py diff --git a/lib-python/2.7.0/fnmatch.py b/lib-python/2.7/fnmatch.py rename from lib-python/2.7.0/fnmatch.py rename to lib-python/2.7/fnmatch.py diff --git a/lib-python/2.7.0/formatter.py b/lib-python/2.7/formatter.py rename from lib-python/2.7.0/formatter.py rename to lib-python/2.7/formatter.py diff --git a/lib-python/2.7.0/fpformat.py b/lib-python/2.7/fpformat.py rename from lib-python/2.7.0/fpformat.py rename to lib-python/2.7/fpformat.py diff --git a/lib-python/2.7.0/fractions.py b/lib-python/2.7/fractions.py rename from lib-python/2.7.0/fractions.py rename to lib-python/2.7/fractions.py diff --git a/lib-python/2.7.0/ftplib.py b/lib-python/2.7/ftplib.py rename from lib-python/2.7.0/ftplib.py rename to lib-python/2.7/ftplib.py diff --git a/lib-python/2.7.0/functools.py b/lib-python/2.7/functools.py rename from lib-python/2.7.0/functools.py rename to lib-python/2.7/functools.py diff --git a/lib-python/2.7.0/genericpath.py b/lib-python/2.7/genericpath.py rename from lib-python/2.7.0/genericpath.py rename to lib-python/2.7/genericpath.py diff --git a/lib-python/2.7.0/getopt.py b/lib-python/2.7/getopt.py rename from lib-python/2.7.0/getopt.py rename to lib-python/2.7/getopt.py diff --git a/lib-python/2.7.0/getpass.py b/lib-python/2.7/getpass.py rename from lib-python/2.7.0/getpass.py rename to lib-python/2.7/getpass.py diff --git a/lib-python/2.7.0/gettext.py b/lib-python/2.7/gettext.py rename from lib-python/2.7.0/gettext.py rename to lib-python/2.7/gettext.py diff --git a/lib-python/2.7.0/glob.py b/lib-python/2.7/glob.py rename from lib-python/2.7.0/glob.py rename to lib-python/2.7/glob.py diff --git a/lib-python/2.7.0/gzip.py b/lib-python/2.7/gzip.py rename from lib-python/2.7.0/gzip.py rename to lib-python/2.7/gzip.py diff --git a/lib-python/2.7.0/hashlib.py b/lib-python/2.7/hashlib.py rename from lib-python/2.7.0/hashlib.py rename to lib-python/2.7/hashlib.py --- a/lib-python/2.7.0/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -1,4 +1,4 @@ -# $Id: hashlib.py 78528 2010-03-01 02:01:47Z gregory.p.smith $ +# $Id$ # # Copyright (C) 2005 Gregory P. Smith (greg at krypto.org) # Licensed to PSF under a Contributor Agreement. diff --git a/lib-python/2.7.0/heapq.py b/lib-python/2.7/heapq.py rename from lib-python/2.7.0/heapq.py rename to lib-python/2.7/heapq.py diff --git a/lib-python/2.7.0/hmac.py b/lib-python/2.7/hmac.py rename from lib-python/2.7.0/hmac.py rename to lib-python/2.7/hmac.py diff --git a/lib-python/2.7.0/hotshot/__init__.py b/lib-python/2.7/hotshot/__init__.py rename from lib-python/2.7.0/hotshot/__init__.py rename to lib-python/2.7/hotshot/__init__.py diff --git a/lib-python/2.7.0/hotshot/log.py b/lib-python/2.7/hotshot/log.py rename from lib-python/2.7.0/hotshot/log.py rename to lib-python/2.7/hotshot/log.py diff --git a/lib-python/2.7.0/hotshot/stats.py b/lib-python/2.7/hotshot/stats.py rename from lib-python/2.7.0/hotshot/stats.py rename to lib-python/2.7/hotshot/stats.py diff --git a/lib-python/2.7.0/hotshot/stones.py b/lib-python/2.7/hotshot/stones.py rename from lib-python/2.7.0/hotshot/stones.py rename to lib-python/2.7/hotshot/stones.py diff --git a/lib-python/2.7.0/htmlentitydefs.py b/lib-python/2.7/htmlentitydefs.py rename from lib-python/2.7.0/htmlentitydefs.py rename to lib-python/2.7/htmlentitydefs.py diff --git a/lib-python/2.7.0/htmllib.py b/lib-python/2.7/htmllib.py rename from lib-python/2.7.0/htmllib.py rename to lib-python/2.7/htmllib.py diff --git a/lib-python/2.7.0/httplib.py b/lib-python/2.7/httplib.py rename from lib-python/2.7.0/httplib.py rename to lib-python/2.7/httplib.py --- a/lib-python/2.7.0/httplib.py +++ b/lib-python/2.7/httplib.py @@ -879,6 +879,9 @@ host_enc = self.host.encode("ascii") except UnicodeEncodeError: host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" if self.port == self.default_port: self.putheader('Host', host_enc) else: diff --git a/lib-python/2.7.0/idlelib/AutoComplete.py b/lib-python/2.7/idlelib/AutoComplete.py rename from lib-python/2.7.0/idlelib/AutoComplete.py rename to lib-python/2.7/idlelib/AutoComplete.py diff --git a/lib-python/2.7.0/idlelib/AutoCompleteWindow.py b/lib-python/2.7/idlelib/AutoCompleteWindow.py rename from lib-python/2.7.0/idlelib/AutoCompleteWindow.py rename to lib-python/2.7/idlelib/AutoCompleteWindow.py diff --git a/lib-python/2.7.0/idlelib/AutoExpand.py b/lib-python/2.7/idlelib/AutoExpand.py rename from lib-python/2.7.0/idlelib/AutoExpand.py rename to lib-python/2.7/idlelib/AutoExpand.py diff --git a/lib-python/2.7.0/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py rename from lib-python/2.7.0/idlelib/Bindings.py rename to lib-python/2.7/idlelib/Bindings.py diff --git a/lib-python/2.7.0/idlelib/CREDITS.txt b/lib-python/2.7/idlelib/CREDITS.txt rename from lib-python/2.7.0/idlelib/CREDITS.txt rename to lib-python/2.7/idlelib/CREDITS.txt diff --git a/lib-python/2.7.0/idlelib/CallTipWindow.py b/lib-python/2.7/idlelib/CallTipWindow.py rename from lib-python/2.7.0/idlelib/CallTipWindow.py rename to lib-python/2.7/idlelib/CallTipWindow.py diff --git a/lib-python/2.7.0/idlelib/CallTips.py b/lib-python/2.7/idlelib/CallTips.py rename from lib-python/2.7.0/idlelib/CallTips.py rename to lib-python/2.7/idlelib/CallTips.py diff --git a/lib-python/2.7.0/idlelib/ChangeLog b/lib-python/2.7/idlelib/ChangeLog rename from lib-python/2.7.0/idlelib/ChangeLog rename to lib-python/2.7/idlelib/ChangeLog diff --git a/lib-python/2.7.0/idlelib/ClassBrowser.py b/lib-python/2.7/idlelib/ClassBrowser.py rename from lib-python/2.7.0/idlelib/ClassBrowser.py rename to lib-python/2.7/idlelib/ClassBrowser.py diff --git a/lib-python/2.7.0/idlelib/CodeContext.py b/lib-python/2.7/idlelib/CodeContext.py rename from lib-python/2.7.0/idlelib/CodeContext.py rename to lib-python/2.7/idlelib/CodeContext.py diff --git a/lib-python/2.7.0/idlelib/ColorDelegator.py b/lib-python/2.7/idlelib/ColorDelegator.py rename from lib-python/2.7.0/idlelib/ColorDelegator.py rename to lib-python/2.7/idlelib/ColorDelegator.py diff --git a/lib-python/2.7.0/idlelib/Debugger.py b/lib-python/2.7/idlelib/Debugger.py rename from lib-python/2.7.0/idlelib/Debugger.py rename to lib-python/2.7/idlelib/Debugger.py diff --git a/lib-python/2.7.0/idlelib/Delegator.py b/lib-python/2.7/idlelib/Delegator.py rename from lib-python/2.7.0/idlelib/Delegator.py rename to lib-python/2.7/idlelib/Delegator.py diff --git a/lib-python/2.7.0/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py rename from lib-python/2.7.0/idlelib/EditorWindow.py rename to lib-python/2.7/idlelib/EditorWindow.py diff --git a/lib-python/2.7.0/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py rename from lib-python/2.7.0/idlelib/FileList.py rename to lib-python/2.7/idlelib/FileList.py diff --git a/lib-python/2.7.0/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py rename from lib-python/2.7.0/idlelib/FormatParagraph.py rename to lib-python/2.7/idlelib/FormatParagraph.py diff --git a/lib-python/2.7.0/idlelib/GrepDialog.py b/lib-python/2.7/idlelib/GrepDialog.py rename from lib-python/2.7.0/idlelib/GrepDialog.py rename to lib-python/2.7/idlelib/GrepDialog.py diff --git a/lib-python/2.7.0/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt rename from lib-python/2.7.0/idlelib/HISTORY.txt rename to lib-python/2.7/idlelib/HISTORY.txt diff --git a/lib-python/2.7.0/idlelib/HyperParser.py b/lib-python/2.7/idlelib/HyperParser.py rename from lib-python/2.7.0/idlelib/HyperParser.py rename to lib-python/2.7/idlelib/HyperParser.py diff --git a/lib-python/2.7.0/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py rename from lib-python/2.7.0/idlelib/IOBinding.py rename to lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7.0/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -521,8 +521,8 @@ savedialog = None filetypes = [ - ("Python and text files", "*.py *.pyw *.txt", "TEXT"), - ("All text files", "*", "TEXT"), + ("Python files", "*.py *.pyw", "TEXT"), + ("Text files", "*.txt", "TEXT"), ("All files", "*"), ] diff --git a/lib-python/2.7.0/idlelib/Icons/folder.gif b/lib-python/2.7/idlelib/Icons/folder.gif rename from lib-python/2.7.0/idlelib/Icons/folder.gif rename to lib-python/2.7/idlelib/Icons/folder.gif diff --git a/lib-python/2.7.0/idlelib/Icons/idle.icns b/lib-python/2.7/idlelib/Icons/idle.icns rename from lib-python/2.7.0/idlelib/Icons/idle.icns rename to lib-python/2.7/idlelib/Icons/idle.icns diff --git a/lib-python/2.7.0/idlelib/Icons/minusnode.gif b/lib-python/2.7/idlelib/Icons/minusnode.gif rename from lib-python/2.7.0/idlelib/Icons/minusnode.gif rename to lib-python/2.7/idlelib/Icons/minusnode.gif diff --git a/lib-python/2.7.0/idlelib/Icons/openfolder.gif b/lib-python/2.7/idlelib/Icons/openfolder.gif rename from lib-python/2.7.0/idlelib/Icons/openfolder.gif rename to lib-python/2.7/idlelib/Icons/openfolder.gif diff --git a/lib-python/2.7.0/idlelib/Icons/plusnode.gif b/lib-python/2.7/idlelib/Icons/plusnode.gif rename from lib-python/2.7.0/idlelib/Icons/plusnode.gif rename to lib-python/2.7/idlelib/Icons/plusnode.gif diff --git a/lib-python/2.7.0/idlelib/Icons/python.gif b/lib-python/2.7/idlelib/Icons/python.gif rename from lib-python/2.7.0/idlelib/Icons/python.gif rename to lib-python/2.7/idlelib/Icons/python.gif diff --git a/lib-python/2.7.0/idlelib/Icons/tk.gif b/lib-python/2.7/idlelib/Icons/tk.gif rename from lib-python/2.7.0/idlelib/Icons/tk.gif rename to lib-python/2.7/idlelib/Icons/tk.gif diff --git a/lib-python/2.7.0/idlelib/IdleHistory.py b/lib-python/2.7/idlelib/IdleHistory.py rename from lib-python/2.7.0/idlelib/IdleHistory.py rename to lib-python/2.7/idlelib/IdleHistory.py diff --git a/lib-python/2.7.0/idlelib/MultiCall.py b/lib-python/2.7/idlelib/MultiCall.py rename from lib-python/2.7.0/idlelib/MultiCall.py rename to lib-python/2.7/idlelib/MultiCall.py diff --git a/lib-python/2.7.0/idlelib/MultiStatusBar.py b/lib-python/2.7/idlelib/MultiStatusBar.py rename from lib-python/2.7.0/idlelib/MultiStatusBar.py rename to lib-python/2.7/idlelib/MultiStatusBar.py diff --git a/lib-python/2.7.0/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt rename from lib-python/2.7.0/idlelib/NEWS.txt rename to lib-python/2.7/idlelib/NEWS.txt diff --git a/lib-python/2.7.0/idlelib/ObjectBrowser.py b/lib-python/2.7/idlelib/ObjectBrowser.py rename from lib-python/2.7.0/idlelib/ObjectBrowser.py rename to lib-python/2.7/idlelib/ObjectBrowser.py diff --git a/lib-python/2.7.0/idlelib/OutputWindow.py b/lib-python/2.7/idlelib/OutputWindow.py rename from lib-python/2.7.0/idlelib/OutputWindow.py rename to lib-python/2.7/idlelib/OutputWindow.py diff --git a/lib-python/2.7.0/idlelib/ParenMatch.py b/lib-python/2.7/idlelib/ParenMatch.py rename from lib-python/2.7.0/idlelib/ParenMatch.py rename to lib-python/2.7/idlelib/ParenMatch.py diff --git a/lib-python/2.7.0/idlelib/PathBrowser.py b/lib-python/2.7/idlelib/PathBrowser.py rename from lib-python/2.7.0/idlelib/PathBrowser.py rename to lib-python/2.7/idlelib/PathBrowser.py diff --git a/lib-python/2.7.0/idlelib/Percolator.py b/lib-python/2.7/idlelib/Percolator.py rename from lib-python/2.7.0/idlelib/Percolator.py rename to lib-python/2.7/idlelib/Percolator.py diff --git a/lib-python/2.7.0/idlelib/PyParse.py b/lib-python/2.7/idlelib/PyParse.py rename from lib-python/2.7.0/idlelib/PyParse.py rename to lib-python/2.7/idlelib/PyParse.py diff --git a/lib-python/2.7.0/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py rename from lib-python/2.7.0/idlelib/PyShell.py rename to lib-python/2.7/idlelib/PyShell.py diff --git a/lib-python/2.7.0/idlelib/README.txt b/lib-python/2.7/idlelib/README.txt rename from lib-python/2.7.0/idlelib/README.txt rename to lib-python/2.7/idlelib/README.txt diff --git a/lib-python/2.7.0/idlelib/RemoteDebugger.py b/lib-python/2.7/idlelib/RemoteDebugger.py rename from lib-python/2.7.0/idlelib/RemoteDebugger.py rename to lib-python/2.7/idlelib/RemoteDebugger.py diff --git a/lib-python/2.7.0/idlelib/RemoteObjectBrowser.py b/lib-python/2.7/idlelib/RemoteObjectBrowser.py rename from lib-python/2.7.0/idlelib/RemoteObjectBrowser.py rename to lib-python/2.7/idlelib/RemoteObjectBrowser.py diff --git a/lib-python/2.7.0/idlelib/ReplaceDialog.py b/lib-python/2.7/idlelib/ReplaceDialog.py rename from lib-python/2.7.0/idlelib/ReplaceDialog.py rename to lib-python/2.7/idlelib/ReplaceDialog.py diff --git a/lib-python/2.7.0/idlelib/RstripExtension.py b/lib-python/2.7/idlelib/RstripExtension.py rename from lib-python/2.7.0/idlelib/RstripExtension.py rename to lib-python/2.7/idlelib/RstripExtension.py diff --git a/lib-python/2.7.0/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py rename from lib-python/2.7.0/idlelib/ScriptBinding.py rename to lib-python/2.7/idlelib/ScriptBinding.py diff --git a/lib-python/2.7.0/idlelib/ScrolledList.py b/lib-python/2.7/idlelib/ScrolledList.py rename from lib-python/2.7.0/idlelib/ScrolledList.py rename to lib-python/2.7/idlelib/ScrolledList.py diff --git a/lib-python/2.7.0/idlelib/SearchDialog.py b/lib-python/2.7/idlelib/SearchDialog.py rename from lib-python/2.7.0/idlelib/SearchDialog.py rename to lib-python/2.7/idlelib/SearchDialog.py diff --git a/lib-python/2.7.0/idlelib/SearchDialogBase.py b/lib-python/2.7/idlelib/SearchDialogBase.py rename from lib-python/2.7.0/idlelib/SearchDialogBase.py rename to lib-python/2.7/idlelib/SearchDialogBase.py diff --git a/lib-python/2.7.0/idlelib/SearchEngine.py b/lib-python/2.7/idlelib/SearchEngine.py rename from lib-python/2.7.0/idlelib/SearchEngine.py rename to lib-python/2.7/idlelib/SearchEngine.py diff --git a/lib-python/2.7.0/idlelib/StackViewer.py b/lib-python/2.7/idlelib/StackViewer.py rename from lib-python/2.7.0/idlelib/StackViewer.py rename to lib-python/2.7/idlelib/StackViewer.py diff --git a/lib-python/2.7.0/idlelib/TODO.txt b/lib-python/2.7/idlelib/TODO.txt rename from lib-python/2.7.0/idlelib/TODO.txt rename to lib-python/2.7/idlelib/TODO.txt diff --git a/lib-python/2.7.0/idlelib/ToolTip.py b/lib-python/2.7/idlelib/ToolTip.py rename from lib-python/2.7.0/idlelib/ToolTip.py rename to lib-python/2.7/idlelib/ToolTip.py diff --git a/lib-python/2.7.0/idlelib/TreeWidget.py b/lib-python/2.7/idlelib/TreeWidget.py rename from lib-python/2.7.0/idlelib/TreeWidget.py rename to lib-python/2.7/idlelib/TreeWidget.py diff --git a/lib-python/2.7.0/idlelib/UndoDelegator.py b/lib-python/2.7/idlelib/UndoDelegator.py rename from lib-python/2.7.0/idlelib/UndoDelegator.py rename to lib-python/2.7/idlelib/UndoDelegator.py diff --git a/lib-python/2.7.0/idlelib/WidgetRedirector.py b/lib-python/2.7/idlelib/WidgetRedirector.py rename from lib-python/2.7.0/idlelib/WidgetRedirector.py rename to lib-python/2.7/idlelib/WidgetRedirector.py diff --git a/lib-python/2.7.0/idlelib/WindowList.py b/lib-python/2.7/idlelib/WindowList.py rename from lib-python/2.7.0/idlelib/WindowList.py rename to lib-python/2.7/idlelib/WindowList.py diff --git a/lib-python/2.7.0/idlelib/ZoomHeight.py b/lib-python/2.7/idlelib/ZoomHeight.py rename from lib-python/2.7.0/idlelib/ZoomHeight.py rename to lib-python/2.7/idlelib/ZoomHeight.py diff --git a/lib-python/2.7.0/idlelib/__init__.py b/lib-python/2.7/idlelib/__init__.py rename from lib-python/2.7.0/idlelib/__init__.py rename to lib-python/2.7/idlelib/__init__.py diff --git a/lib-python/2.7.0/idlelib/aboutDialog.py b/lib-python/2.7/idlelib/aboutDialog.py rename from lib-python/2.7.0/idlelib/aboutDialog.py rename to lib-python/2.7/idlelib/aboutDialog.py diff --git a/lib-python/2.7.0/idlelib/config-extensions.def b/lib-python/2.7/idlelib/config-extensions.def rename from lib-python/2.7.0/idlelib/config-extensions.def rename to lib-python/2.7/idlelib/config-extensions.def diff --git a/lib-python/2.7.0/idlelib/config-highlight.def b/lib-python/2.7/idlelib/config-highlight.def rename from lib-python/2.7.0/idlelib/config-highlight.def rename to lib-python/2.7/idlelib/config-highlight.def diff --git a/lib-python/2.7.0/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def rename from lib-python/2.7.0/idlelib/config-keys.def rename to lib-python/2.7/idlelib/config-keys.def diff --git a/lib-python/2.7.0/idlelib/config-main.def b/lib-python/2.7/idlelib/config-main.def rename from lib-python/2.7.0/idlelib/config-main.def rename to lib-python/2.7/idlelib/config-main.def diff --git a/lib-python/2.7.0/idlelib/configDialog.py b/lib-python/2.7/idlelib/configDialog.py rename from lib-python/2.7.0/idlelib/configDialog.py rename to lib-python/2.7/idlelib/configDialog.py diff --git a/lib-python/2.7.0/idlelib/configHandler.py b/lib-python/2.7/idlelib/configHandler.py rename from lib-python/2.7.0/idlelib/configHandler.py rename to lib-python/2.7/idlelib/configHandler.py diff --git a/lib-python/2.7.0/idlelib/configHelpSourceEdit.py b/lib-python/2.7/idlelib/configHelpSourceEdit.py rename from lib-python/2.7.0/idlelib/configHelpSourceEdit.py rename to lib-python/2.7/idlelib/configHelpSourceEdit.py diff --git a/lib-python/2.7.0/idlelib/configSectionNameDialog.py b/lib-python/2.7/idlelib/configSectionNameDialog.py rename from lib-python/2.7.0/idlelib/configSectionNameDialog.py rename to lib-python/2.7/idlelib/configSectionNameDialog.py diff --git a/lib-python/2.7.0/idlelib/dynOptionMenuWidget.py b/lib-python/2.7/idlelib/dynOptionMenuWidget.py rename from lib-python/2.7.0/idlelib/dynOptionMenuWidget.py rename to lib-python/2.7/idlelib/dynOptionMenuWidget.py diff --git a/lib-python/2.7.0/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt rename from lib-python/2.7.0/idlelib/extend.txt rename to lib-python/2.7/idlelib/extend.txt diff --git a/lib-python/2.7.0/idlelib/help.txt b/lib-python/2.7/idlelib/help.txt rename from lib-python/2.7.0/idlelib/help.txt rename to lib-python/2.7/idlelib/help.txt diff --git a/lib-python/2.7.0/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat rename from lib-python/2.7.0/idlelib/idle.bat rename to lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7.0/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,3 +1,4 @@ @echo off -rem Working IDLE bat for Windows - uses start instead of absolute pathname -start idle.pyw %1 %2 %3 %4 %5 %6 %7 %8 %9 +rem Start IDLE using the appropriate Python interpreter +set CURRDIR=%~dp0 +start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7.0/idlelib/idle.py b/lib-python/2.7/idlelib/idle.py rename from lib-python/2.7.0/idlelib/idle.py rename to lib-python/2.7/idlelib/idle.py diff --git a/lib-python/2.7.0/idlelib/idle.pyw b/lib-python/2.7/idlelib/idle.pyw rename from lib-python/2.7.0/idlelib/idle.pyw rename to lib-python/2.7/idlelib/idle.pyw diff --git a/lib-python/2.7.0/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py rename from lib-python/2.7.0/idlelib/idlever.py rename to lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7.0/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1a0" +IDLE_VERSION = "2.7.1" diff --git a/lib-python/2.7.0/idlelib/keybindingDialog.py b/lib-python/2.7/idlelib/keybindingDialog.py rename from lib-python/2.7.0/idlelib/keybindingDialog.py rename to lib-python/2.7/idlelib/keybindingDialog.py diff --git a/lib-python/2.7.0/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py rename from lib-python/2.7.0/idlelib/macosxSupport.py rename to lib-python/2.7/idlelib/macosxSupport.py diff --git a/lib-python/2.7.0/idlelib/rpc.py b/lib-python/2.7/idlelib/rpc.py rename from lib-python/2.7.0/idlelib/rpc.py rename to lib-python/2.7/idlelib/rpc.py diff --git a/lib-python/2.7.0/idlelib/run.py b/lib-python/2.7/idlelib/run.py rename from lib-python/2.7.0/idlelib/run.py rename to lib-python/2.7/idlelib/run.py diff --git a/lib-python/2.7.0/idlelib/tabbedpages.py b/lib-python/2.7/idlelib/tabbedpages.py rename from lib-python/2.7.0/idlelib/tabbedpages.py rename to lib-python/2.7/idlelib/tabbedpages.py diff --git a/lib-python/2.7.0/idlelib/testcode.py b/lib-python/2.7/idlelib/testcode.py rename from lib-python/2.7.0/idlelib/testcode.py rename to lib-python/2.7/idlelib/testcode.py diff --git a/lib-python/2.7.0/idlelib/textView.py b/lib-python/2.7/idlelib/textView.py rename from lib-python/2.7.0/idlelib/textView.py rename to lib-python/2.7/idlelib/textView.py diff --git a/lib-python/2.7.0/ihooks.py b/lib-python/2.7/ihooks.py rename from lib-python/2.7.0/ihooks.py rename to lib-python/2.7/ihooks.py diff --git a/lib-python/2.7.0/imaplib.py b/lib-python/2.7/imaplib.py rename from lib-python/2.7.0/imaplib.py rename to lib-python/2.7/imaplib.py --- a/lib-python/2.7.0/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -22,7 +22,7 @@ __version__ = "2.58" -import binascii, random, re, socket, subprocess, sys, time +import binascii, errno, random, re, socket, subprocess, sys, time __all__ = ["IMAP4", "IMAP4_stream", "Internaldate2tuple", "Int2AP", "ParseFlags", "Time2Internaldate"] @@ -248,7 +248,14 @@ def shutdown(self): """Close I/O established in "open".""" self.file.close() - self.sock.close() + try: + self.sock.shutdown(socket.SHUT_RDWR) + except socket.error as e: + # The server might already have closed the connection + if e.errno != errno.ENOTCONN: + raise + finally: + self.sock.close() def socket(self): @@ -883,14 +890,17 @@ def _command_complete(self, name, tag): - self._check_bye() + # BYE is expected after LOGOUT + if name != 'LOGOUT': + self._check_bye() try: typ, data = self._get_tagged_response(tag) except self.abort, val: raise self.abort('command: %s => %s' % (name, val)) except self.error, val: raise self.error('command: %s => %s' % (name, val)) - self._check_bye() + if name != 'LOGOUT': + self._check_bye() if typ == 'BAD': raise self.error('%s command error: %s %s' % (name, typ, data)) return typ, data diff --git a/lib-python/2.7.0/imghdr.py b/lib-python/2.7/imghdr.py rename from lib-python/2.7.0/imghdr.py rename to lib-python/2.7/imghdr.py diff --git a/lib-python/2.7.0/importlib/__init__.py b/lib-python/2.7/importlib/__init__.py rename from lib-python/2.7.0/importlib/__init__.py rename to lib-python/2.7/importlib/__init__.py diff --git a/lib-python/2.7.0/imputil.py b/lib-python/2.7/imputil.py rename from lib-python/2.7.0/imputil.py rename to lib-python/2.7/imputil.py diff --git a/lib-python/2.7.0/inspect.py b/lib-python/2.7/inspect.py rename from lib-python/2.7.0/inspect.py rename to lib-python/2.7/inspect.py diff --git a/lib-python/2.7.0/io.py b/lib-python/2.7/io.py rename from lib-python/2.7.0/io.py rename to lib-python/2.7/io.py diff --git a/lib-python/2.7.0/json/__init__.py b/lib-python/2.7/json/__init__.py rename from lib-python/2.7.0/json/__init__.py rename to lib-python/2.7/json/__init__.py diff --git a/lib-python/2.7.0/json/decoder.py b/lib-python/2.7/json/decoder.py rename from lib-python/2.7.0/json/decoder.py rename to lib-python/2.7/json/decoder.py diff --git a/lib-python/2.7.0/json/encoder.py b/lib-python/2.7/json/encoder.py rename from lib-python/2.7.0/json/encoder.py rename to lib-python/2.7/json/encoder.py diff --git a/lib-python/2.7.0/json/scanner.py b/lib-python/2.7/json/scanner.py rename from lib-python/2.7.0/json/scanner.py rename to lib-python/2.7/json/scanner.py diff --git a/lib-python/2.7.0/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py rename from lib-python/2.7.0/json/tests/__init__.py rename to lib-python/2.7/json/tests/__init__.py diff --git a/lib-python/2.7.0/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py rename from lib-python/2.7.0/json/tests/test_check_circular.py rename to lib-python/2.7/json/tests/test_check_circular.py diff --git a/lib-python/2.7.0/json/tests/test_decode.py b/lib-python/2.7/json/tests/test_decode.py rename from lib-python/2.7.0/json/tests/test_decode.py rename to lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7.0/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -9,19 +9,19 @@ def test_decimal(self): rval = json.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) - self.assertEquals(rval, decimal.Decimal('1.1')) + self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): rval = json.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) - self.assertEquals(rval, 1.0) + self.assertEqual(rval, 1.0) def test_decoder_optimizations(self): # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. rval = json.loads('{ "key" : "value" , "k":"v" }') - self.assertEquals(rval, {"key":"value", "k":"v"}) + self.assertEqual(rval, {"key":"value", "k":"v"}) def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' diff --git a/lib-python/2.7.0/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py rename from lib-python/2.7.0/json/tests/test_default.py rename to lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7.0/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -4,6 +4,6 @@ class TestDefault(TestCase): def test_default(self): - self.assertEquals( + self.assertEqual( json.dumps(type, default=repr), json.dumps(repr(type))) diff --git a/lib-python/2.7.0/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py rename from lib-python/2.7.0/json/tests/test_dump.py rename to lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7.0/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -7,15 +7,15 @@ def test_dump(self): sio = StringIO() json.dump({}, sio) - self.assertEquals(sio.getvalue(), '{}') + self.assertEqual(sio.getvalue(), '{}') def test_dumps(self): - self.assertEquals(json.dumps({}), '{}') + self.assertEqual(json.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEquals(json.dumps( + self.assertEqual(json.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEquals(json.dumps( + self.assertEqual(json.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') diff --git a/lib-python/2.7.0/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py rename from lib-python/2.7.0/json/tests/test_encode_basestring_ascii.py rename to lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7.0/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -36,7 +36,7 @@ fname = encode_basestring_ascii.__name__ for input_string, expect in CASES: result = encode_basestring_ascii(input_string) - self.assertEquals(result, expect, + self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) diff --git a/lib-python/2.7.0/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py rename from lib-python/2.7.0/json/tests/test_fail.py rename to lib-python/2.7/json/tests/test_fail.py diff --git a/lib-python/2.7.0/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py rename from lib-python/2.7.0/json/tests/test_float.py rename to lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7.0/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -7,13 +7,13 @@ def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEquals(float(json.dumps(num)), num) - self.assertEquals(json.loads(json.dumps(num)), num) - self.assertEquals(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(json.dumps(num)), num) + self.assertEqual(json.loads(json.dumps(num)), num) + self.assertEqual(json.loads(unicode(json.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEquals(json.dumps(num), str(num)) - self.assertEquals(int(json.dumps(num)), num) - self.assertEquals(json.loads(json.dumps(num)), num) - self.assertEquals(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(json.dumps(num), str(num)) + self.assertEqual(int(json.dumps(num)), num) + self.assertEqual(json.loads(json.dumps(num)), num) + self.assertEqual(json.loads(unicode(json.dumps(num))), num) diff --git a/lib-python/2.7.0/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py rename from lib-python/2.7.0/json/tests/test_indent.py rename to lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7.0/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -36,6 +36,6 @@ h1 = json.loads(d1) h2 = json.loads(d2) - self.assertEquals(h1, h) - self.assertEquals(h2, h) - self.assertEquals(d2, expect) + self.assertEqual(h1, h) + self.assertEqual(h2, h) + self.assertEqual(d2, expect) diff --git a/lib-python/2.7.0/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py rename from lib-python/2.7.0/json/tests/test_pass1.py rename to lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7.0/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -67,7 +67,7 @@ # test in/out equivalence and parsing res = json.loads(JSON) out = json.dumps(res) - self.assertEquals(res, json.loads(out)) + self.assertEqual(res, json.loads(out)) try: json.dumps(res, allow_nan=False) except ValueError: diff --git a/lib-python/2.7.0/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py rename from lib-python/2.7.0/json/tests/test_pass2.py rename to lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7.0/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -11,4 +11,4 @@ # test in/out equivalence and parsing res = json.loads(JSON) out = json.dumps(res) - self.assertEquals(res, json.loads(out)) + self.assertEqual(res, json.loads(out)) diff --git a/lib-python/2.7.0/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py rename from lib-python/2.7.0/json/tests/test_pass3.py rename to lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7.0/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -17,4 +17,4 @@ # test in/out equivalence and parsing res = json.loads(JSON) out = json.dumps(res) - self.assertEquals(res, json.loads(out)) + self.assertEqual(res, json.loads(out)) diff --git a/lib-python/2.7.0/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py rename from lib-python/2.7.0/json/tests/test_recursion.py rename to lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7.0/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -57,7 +57,7 @@ def test_defaultrecursion(self): enc = RecursiveJSONEncoder() - self.assertEquals(enc.encode(JSONTestObject), '"JSONTestObject"') + self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True try: enc.encode(JSONTestObject) diff --git a/lib-python/2.7.0/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py rename from lib-python/2.7.0/json/tests/test_scanstring.py rename to lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7.0/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -13,92 +13,92 @@ self._test_scanstring(json.decoder.c_scanstring) def _test_scanstring(self, scanstring): - self.assertEquals( + self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) if sys.maxunicode == 65535: - self.assertEquals( + self.assertEqual( scanstring(u'"z\U0001d120x"', 1, None, True), (u'z\U0001d120x', 6)) else: - self.assertEquals( + self.assertEqual( scanstring(u'"z\U0001d120x"', 1, None, True), (u'z\U0001d120x', 5)) - self.assertEquals( + self.assertEqual( scanstring('"\\u007b"', 1, None, True), (u'{', 8)) - self.assertEquals( + self.assertEqual( scanstring('"A JSON payload should be an object or array, not a string."', 1, None, True), (u'A JSON payload should be an object or array, not a string.', 60)) - self.assertEquals( + self.assertEqual( scanstring('["Unclosed array"', 2, None, True), (u'Unclosed array', 17)) - self.assertEquals( + self.assertEqual( scanstring('["extra comma",]', 2, None, True), (u'extra comma', 14)) - self.assertEquals( + self.assertEqual( scanstring('["double extra comma",,]', 2, None, True), (u'double extra comma', 21)) - self.assertEquals( + self.assertEqual( scanstring('["Comma after the close"],', 2, None, True), (u'Comma after the close', 24)) - self.assertEquals( + self.assertEqual( scanstring('["Extra close"]]', 2, None, True), (u'Extra close', 14)) - self.assertEquals( + self.assertEqual( scanstring('{"Extra comma": true,}', 2, None, True), (u'Extra comma', 14)) - self.assertEquals( + self.assertEqual( scanstring('{"Extra value after close": true} "misplaced quoted value"', 2, None, True), (u'Extra value after close', 26)) - self.assertEquals( + self.assertEqual( scanstring('{"Illegal expression": 1 + 2}', 2, None, True), (u'Illegal expression', 21)) - self.assertEquals( + self.assertEqual( scanstring('{"Illegal invocation": alert()}', 2, None, True), (u'Illegal invocation', 21)) - self.assertEquals( + self.assertEqual( scanstring('{"Numbers cannot have leading zeroes": 013}', 2, None, True), (u'Numbers cannot have leading zeroes', 37)) - self.assertEquals( + self.assertEqual( scanstring('{"Numbers cannot be hex": 0x14}', 2, None, True), (u'Numbers cannot be hex', 24)) - self.assertEquals( + self.assertEqual( scanstring('[[[[[[[[[[[[[[[[[[[["Too deep"]]]]]]]]]]]]]]]]]]]]', 21, None, True), (u'Too deep', 30)) - self.assertEquals( + self.assertEqual( scanstring('{"Missing colon" null}', 2, None, True), (u'Missing colon', 16)) - self.assertEquals( + self.assertEqual( scanstring('{"Double colon":: null}', 2, None, True), (u'Double colon', 15)) - self.assertEquals( + self.assertEqual( scanstring('{"Comma instead of colon", null}', 2, None, True), (u'Comma instead of colon', 25)) - self.assertEquals( + self.assertEqual( scanstring('["Colon instead of comma": false]', 2, None, True), (u'Colon instead of comma', 25)) - self.assertEquals( + self.assertEqual( scanstring('["Bad value", truth]', 2, None, True), (u'Bad value', 12)) diff --git a/lib-python/2.7.0/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py rename from lib-python/2.7.0/json/tests/test_separators.py rename to lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7.0/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -37,6 +37,6 @@ h1 = json.loads(d1) h2 = json.loads(d2) - self.assertEquals(h1, h) - self.assertEquals(h2, h) - self.assertEquals(d2, expect) + self.assertEqual(h1, h) + self.assertEqual(h2, h) + self.assertEqual(d2, expect) diff --git a/lib-python/2.7.0/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py rename from lib-python/2.7.0/json/tests/test_speedups.py rename to lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7.0/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -5,11 +5,11 @@ class TestSpeedups(TestCase): def test_scanstring(self): - self.assertEquals(decoder.scanstring.__module__, "_json") + self.assertEqual(decoder.scanstring.__module__, "_json") self.assertTrue(decoder.scanstring is decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEquals(encoder.encode_basestring_ascii.__module__, "_json") + self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") self.assertTrue(encoder.encode_basestring_ascii is encoder.c_encode_basestring_ascii) diff --git a/lib-python/2.7.0/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py rename from lib-python/2.7.0/json/tests/test_unicode.py rename to lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7.0/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -10,50 +10,50 @@ s = u.encode('utf-8') ju = encoder.encode(u) js = encoder.encode(s) - self.assertEquals(ju, js) + self.assertEqual(ju, js) def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = json.dumps(u, encoding='utf-8') js = json.dumps(s, encoding='utf-8') - self.assertEquals(ju, js) + self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' j = json.dumps(u) - self.assertEquals(j, '"\\u03b1\\u03a9"') + self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' j = json.dumps([u]) - self.assertEquals(j, '["\\u03b1\\u03a9"]') + self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' j = json.dumps(u, ensure_ascii=False) - self.assertEquals(j, u'"{0}"'.format(u)) + self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' j = json.dumps([u], ensure_ascii=False) - self.assertEquals(j, u'["{0}"]'.format(u)) + self.assertEqual(j, u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEquals(json.dumps(u), '"\\ud834\\udd20"') - self.assertEquals(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEquals(json.loads('"' + u + '"'), u) - self.assertEquals(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(json.loads('"' + u + '"'), u) + self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEquals(json.loads(s), u) + self.assertEqual(json.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' @@ -71,12 +71,12 @@ OrderedDict(p)) def test_default_encoding(self): - self.assertEquals(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEquals(type(json.loads(u'""')), unicode) - self.assertEquals(type(json.loads(u'"a"')), unicode) - self.assertEquals(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(json.loads(u'""')), unicode) + self.assertEqual(type(json.loads(u'"a"')), unicode) + self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEquals(type(json.loads('"foo"')), unicode) + self.assertEqual(type(json.loads('"foo"')), unicode) diff --git a/lib-python/2.7.0/json/tool.py b/lib-python/2.7/json/tool.py rename from lib-python/2.7.0/json/tool.py rename to lib-python/2.7/json/tool.py diff --git a/lib-python/2.7.0/keyword.py b/lib-python/2.7/keyword.py rename from lib-python/2.7.0/keyword.py rename to lib-python/2.7/keyword.py diff --git a/lib-python/2.7.0/lib-tk/Canvas.py b/lib-python/2.7/lib-tk/Canvas.py rename from lib-python/2.7.0/lib-tk/Canvas.py rename to lib-python/2.7/lib-tk/Canvas.py diff --git a/lib-python/2.7.0/lib-tk/Dialog.py b/lib-python/2.7/lib-tk/Dialog.py rename from lib-python/2.7.0/lib-tk/Dialog.py rename to lib-python/2.7/lib-tk/Dialog.py diff --git a/lib-python/2.7.0/lib-tk/FileDialog.py b/lib-python/2.7/lib-tk/FileDialog.py rename from lib-python/2.7.0/lib-tk/FileDialog.py rename to lib-python/2.7/lib-tk/FileDialog.py diff --git a/lib-python/2.7.0/lib-tk/FixTk.py b/lib-python/2.7/lib-tk/FixTk.py rename from lib-python/2.7.0/lib-tk/FixTk.py rename to lib-python/2.7/lib-tk/FixTk.py diff --git a/lib-python/2.7.0/lib-tk/ScrolledText.py b/lib-python/2.7/lib-tk/ScrolledText.py rename from lib-python/2.7.0/lib-tk/ScrolledText.py rename to lib-python/2.7/lib-tk/ScrolledText.py diff --git a/lib-python/2.7.0/lib-tk/SimpleDialog.py b/lib-python/2.7/lib-tk/SimpleDialog.py rename from lib-python/2.7.0/lib-tk/SimpleDialog.py rename to lib-python/2.7/lib-tk/SimpleDialog.py diff --git a/lib-python/2.7.0/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py rename from lib-python/2.7.0/lib-tk/Tix.py rename to lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7.0/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -1,6 +1,6 @@ # -*-mode: python; fill-column: 75; tab-width: 8; coding: iso-latin-1-unix -*- # -# $Id: Tix.py 81008 2010-05-08 20:59:42Z benjamin.peterson $ +# $Id$ # # Tix.py -- Tix widget wrappers. # diff --git a/lib-python/2.7.0/lib-tk/Tkconstants.py b/lib-python/2.7/lib-tk/Tkconstants.py rename from lib-python/2.7.0/lib-tk/Tkconstants.py rename to lib-python/2.7/lib-tk/Tkconstants.py diff --git a/lib-python/2.7.0/lib-tk/Tkdnd.py b/lib-python/2.7/lib-tk/Tkdnd.py rename from lib-python/2.7.0/lib-tk/Tkdnd.py rename to lib-python/2.7/lib-tk/Tkdnd.py diff --git a/lib-python/2.7.0/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py rename from lib-python/2.7.0/lib-tk/Tkinter.py rename to lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7.0/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -30,7 +30,7 @@ tk.mainloop() """ -__version__ = "$Revision: 81008 $" +__version__ = "$Revision$" import sys if sys.platform == "win32": diff --git a/lib-python/2.7.0/lib-tk/test/README b/lib-python/2.7/lib-tk/test/README rename from lib-python/2.7.0/lib-tk/test/README rename to lib-python/2.7/lib-tk/test/README diff --git a/lib-python/2.7.0/lib-tk/test/runtktests.py b/lib-python/2.7/lib-tk/test/runtktests.py rename from lib-python/2.7.0/lib-tk/test/runtktests.py rename to lib-python/2.7/lib-tk/test/runtktests.py diff --git a/lib-python/2.7.0/lib-tk/test/test_tkinter/__init__.py b/lib-python/2.7/lib-tk/test/test_tkinter/__init__.py rename from lib-python/2.7.0/lib-tk/test/test_tkinter/__init__.py rename to lib-python/2.7/lib-tk/test/test_tkinter/__init__.py diff --git a/lib-python/2.7.0/lib-tk/test/test_tkinter/test_loadtk.py b/lib-python/2.7/lib-tk/test/test_tkinter/test_loadtk.py rename from lib-python/2.7.0/lib-tk/test/test_tkinter/test_loadtk.py rename to lib-python/2.7/lib-tk/test/test_tkinter/test_loadtk.py diff --git a/lib-python/2.7.0/lib-tk/test/test_tkinter/test_text.py b/lib-python/2.7/lib-tk/test/test_tkinter/test_text.py rename from lib-python/2.7.0/lib-tk/test/test_tkinter/test_text.py rename to lib-python/2.7/lib-tk/test/test_tkinter/test_text.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/__init__.py b/lib-python/2.7/lib-tk/test/test_ttk/__init__.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/__init__.py rename to lib-python/2.7/lib-tk/test/test_ttk/__init__.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/support.py b/lib-python/2.7/lib-tk/test/test_ttk/support.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/support.py rename to lib-python/2.7/lib-tk/test/test_ttk/support.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/test_extensions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_extensions.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/test_extensions.py rename to lib-python/2.7/lib-tk/test/test_ttk/test_extensions.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/test_functions.py rename to lib-python/2.7/lib-tk/test/test_ttk/test_functions.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/test_style.py b/lib-python/2.7/lib-tk/test/test_ttk/test_style.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/test_style.py rename to lib-python/2.7/lib-tk/test/test_ttk/test_style.py diff --git a/lib-python/2.7.0/lib-tk/test/test_ttk/test_widgets.py b/lib-python/2.7/lib-tk/test/test_ttk/test_widgets.py rename from lib-python/2.7.0/lib-tk/test/test_ttk/test_widgets.py rename to lib-python/2.7/lib-tk/test/test_ttk/test_widgets.py diff --git a/lib-python/2.7.0/lib-tk/tkColorChooser.py b/lib-python/2.7/lib-tk/tkColorChooser.py rename from lib-python/2.7.0/lib-tk/tkColorChooser.py rename to lib-python/2.7/lib-tk/tkColorChooser.py diff --git a/lib-python/2.7.0/lib-tk/tkCommonDialog.py b/lib-python/2.7/lib-tk/tkCommonDialog.py rename from lib-python/2.7.0/lib-tk/tkCommonDialog.py rename to lib-python/2.7/lib-tk/tkCommonDialog.py diff --git a/lib-python/2.7.0/lib-tk/tkFileDialog.py b/lib-python/2.7/lib-tk/tkFileDialog.py rename from lib-python/2.7.0/lib-tk/tkFileDialog.py rename to lib-python/2.7/lib-tk/tkFileDialog.py diff --git a/lib-python/2.7.0/lib-tk/tkFont.py b/lib-python/2.7/lib-tk/tkFont.py rename from lib-python/2.7.0/lib-tk/tkFont.py rename to lib-python/2.7/lib-tk/tkFont.py diff --git a/lib-python/2.7.0/lib-tk/tkMessageBox.py b/lib-python/2.7/lib-tk/tkMessageBox.py rename from lib-python/2.7.0/lib-tk/tkMessageBox.py rename to lib-python/2.7/lib-tk/tkMessageBox.py diff --git a/lib-python/2.7.0/lib-tk/tkSimpleDialog.py b/lib-python/2.7/lib-tk/tkSimpleDialog.py rename from lib-python/2.7.0/lib-tk/tkSimpleDialog.py rename to lib-python/2.7/lib-tk/tkSimpleDialog.py diff --git a/lib-python/2.7.0/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py rename from lib-python/2.7.0/lib-tk/ttk.py rename to lib-python/2.7/lib-tk/ttk.py diff --git a/lib-python/2.7.0/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py rename from lib-python/2.7.0/lib-tk/turtle.py rename to lib-python/2.7/lib-tk/turtle.py diff --git a/lib-python/2.7.0/lib2to3/Grammar.txt b/lib-python/2.7/lib2to3/Grammar.txt rename from lib-python/2.7.0/lib2to3/Grammar.txt rename to lib-python/2.7/lib2to3/Grammar.txt diff --git a/lib-python/2.7.0/lib2to3/PatternGrammar.txt b/lib-python/2.7/lib2to3/PatternGrammar.txt rename from lib-python/2.7.0/lib2to3/PatternGrammar.txt rename to lib-python/2.7/lib2to3/PatternGrammar.txt diff --git a/lib-python/2.7.0/lib2to3/__init__.py b/lib-python/2.7/lib2to3/__init__.py rename from lib-python/2.7.0/lib2to3/__init__.py rename to lib-python/2.7/lib2to3/__init__.py diff --git a/lib-python/2.7.0/lib2to3/btm_matcher.py b/lib-python/2.7/lib2to3/btm_matcher.py rename from lib-python/2.7.0/lib2to3/btm_matcher.py rename to lib-python/2.7/lib2to3/btm_matcher.py diff --git a/lib-python/2.7.0/lib2to3/btm_utils.py b/lib-python/2.7/lib2to3/btm_utils.py rename from lib-python/2.7.0/lib2to3/btm_utils.py rename to lib-python/2.7/lib2to3/btm_utils.py diff --git a/lib-python/2.7.0/lib2to3/fixer_base.py b/lib-python/2.7/lib2to3/fixer_base.py rename from lib-python/2.7.0/lib2to3/fixer_base.py rename to lib-python/2.7/lib2to3/fixer_base.py diff --git a/lib-python/2.7.0/lib2to3/fixer_util.py b/lib-python/2.7/lib2to3/fixer_util.py rename from lib-python/2.7.0/lib2to3/fixer_util.py rename to lib-python/2.7/lib2to3/fixer_util.py diff --git a/lib-python/2.7.0/lib2to3/fixes/__init__.py b/lib-python/2.7/lib2to3/fixes/__init__.py rename from lib-python/2.7.0/lib2to3/fixes/__init__.py rename to lib-python/2.7/lib2to3/fixes/__init__.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_apply.py b/lib-python/2.7/lib2to3/fixes/fix_apply.py rename from lib-python/2.7.0/lib2to3/fixes/fix_apply.py rename to lib-python/2.7/lib2to3/fixes/fix_apply.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_basestring.py b/lib-python/2.7/lib2to3/fixes/fix_basestring.py rename from lib-python/2.7.0/lib2to3/fixes/fix_basestring.py rename to lib-python/2.7/lib2to3/fixes/fix_basestring.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_buffer.py b/lib-python/2.7/lib2to3/fixes/fix_buffer.py rename from lib-python/2.7.0/lib2to3/fixes/fix_buffer.py rename to lib-python/2.7/lib2to3/fixes/fix_buffer.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_callable.py b/lib-python/2.7/lib2to3/fixes/fix_callable.py rename from lib-python/2.7.0/lib2to3/fixes/fix_callable.py rename to lib-python/2.7/lib2to3/fixes/fix_callable.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_dict.py b/lib-python/2.7/lib2to3/fixes/fix_dict.py rename from lib-python/2.7.0/lib2to3/fixes/fix_dict.py rename to lib-python/2.7/lib2to3/fixes/fix_dict.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_except.py b/lib-python/2.7/lib2to3/fixes/fix_except.py rename from lib-python/2.7.0/lib2to3/fixes/fix_except.py rename to lib-python/2.7/lib2to3/fixes/fix_except.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_exec.py b/lib-python/2.7/lib2to3/fixes/fix_exec.py rename from lib-python/2.7.0/lib2to3/fixes/fix_exec.py rename to lib-python/2.7/lib2to3/fixes/fix_exec.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_execfile.py b/lib-python/2.7/lib2to3/fixes/fix_execfile.py rename from lib-python/2.7.0/lib2to3/fixes/fix_execfile.py rename to lib-python/2.7/lib2to3/fixes/fix_execfile.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_exitfunc.py b/lib-python/2.7/lib2to3/fixes/fix_exitfunc.py rename from lib-python/2.7.0/lib2to3/fixes/fix_exitfunc.py rename to lib-python/2.7/lib2to3/fixes/fix_exitfunc.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_filter.py b/lib-python/2.7/lib2to3/fixes/fix_filter.py rename from lib-python/2.7.0/lib2to3/fixes/fix_filter.py rename to lib-python/2.7/lib2to3/fixes/fix_filter.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_funcattrs.py b/lib-python/2.7/lib2to3/fixes/fix_funcattrs.py rename from lib-python/2.7.0/lib2to3/fixes/fix_funcattrs.py rename to lib-python/2.7/lib2to3/fixes/fix_funcattrs.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_future.py b/lib-python/2.7/lib2to3/fixes/fix_future.py rename from lib-python/2.7.0/lib2to3/fixes/fix_future.py rename to lib-python/2.7/lib2to3/fixes/fix_future.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_getcwdu.py b/lib-python/2.7/lib2to3/fixes/fix_getcwdu.py rename from lib-python/2.7.0/lib2to3/fixes/fix_getcwdu.py rename to lib-python/2.7/lib2to3/fixes/fix_getcwdu.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_has_key.py b/lib-python/2.7/lib2to3/fixes/fix_has_key.py rename from lib-python/2.7.0/lib2to3/fixes/fix_has_key.py rename to lib-python/2.7/lib2to3/fixes/fix_has_key.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_idioms.py b/lib-python/2.7/lib2to3/fixes/fix_idioms.py rename from lib-python/2.7.0/lib2to3/fixes/fix_idioms.py rename to lib-python/2.7/lib2to3/fixes/fix_idioms.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_import.py b/lib-python/2.7/lib2to3/fixes/fix_import.py rename from lib-python/2.7.0/lib2to3/fixes/fix_import.py rename to lib-python/2.7/lib2to3/fixes/fix_import.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_imports.py b/lib-python/2.7/lib2to3/fixes/fix_imports.py rename from lib-python/2.7.0/lib2to3/fixes/fix_imports.py rename to lib-python/2.7/lib2to3/fixes/fix_imports.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_imports2.py b/lib-python/2.7/lib2to3/fixes/fix_imports2.py rename from lib-python/2.7.0/lib2to3/fixes/fix_imports2.py rename to lib-python/2.7/lib2to3/fixes/fix_imports2.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_input.py b/lib-python/2.7/lib2to3/fixes/fix_input.py rename from lib-python/2.7.0/lib2to3/fixes/fix_input.py rename to lib-python/2.7/lib2to3/fixes/fix_input.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_intern.py b/lib-python/2.7/lib2to3/fixes/fix_intern.py rename from lib-python/2.7.0/lib2to3/fixes/fix_intern.py rename to lib-python/2.7/lib2to3/fixes/fix_intern.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_isinstance.py b/lib-python/2.7/lib2to3/fixes/fix_isinstance.py rename from lib-python/2.7.0/lib2to3/fixes/fix_isinstance.py rename to lib-python/2.7/lib2to3/fixes/fix_isinstance.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py rename from lib-python/2.7.0/lib2to3/fixes/fix_itertools.py rename to lib-python/2.7/lib2to3/fixes/fix_itertools.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py rename from lib-python/2.7.0/lib2to3/fixes/fix_itertools_imports.py rename to lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_long.py b/lib-python/2.7/lib2to3/fixes/fix_long.py rename from lib-python/2.7.0/lib2to3/fixes/fix_long.py rename to lib-python/2.7/lib2to3/fixes/fix_long.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_map.py b/lib-python/2.7/lib2to3/fixes/fix_map.py rename from lib-python/2.7.0/lib2to3/fixes/fix_map.py rename to lib-python/2.7/lib2to3/fixes/fix_map.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py rename from lib-python/2.7.0/lib2to3/fixes/fix_metaclass.py rename to lib-python/2.7/lib2to3/fixes/fix_metaclass.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_methodattrs.py b/lib-python/2.7/lib2to3/fixes/fix_methodattrs.py rename from lib-python/2.7.0/lib2to3/fixes/fix_methodattrs.py rename to lib-python/2.7/lib2to3/fixes/fix_methodattrs.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_ne.py b/lib-python/2.7/lib2to3/fixes/fix_ne.py rename from lib-python/2.7.0/lib2to3/fixes/fix_ne.py rename to lib-python/2.7/lib2to3/fixes/fix_ne.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_next.py b/lib-python/2.7/lib2to3/fixes/fix_next.py rename from lib-python/2.7.0/lib2to3/fixes/fix_next.py rename to lib-python/2.7/lib2to3/fixes/fix_next.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_nonzero.py b/lib-python/2.7/lib2to3/fixes/fix_nonzero.py rename from lib-python/2.7.0/lib2to3/fixes/fix_nonzero.py rename to lib-python/2.7/lib2to3/fixes/fix_nonzero.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_numliterals.py b/lib-python/2.7/lib2to3/fixes/fix_numliterals.py rename from lib-python/2.7.0/lib2to3/fixes/fix_numliterals.py rename to lib-python/2.7/lib2to3/fixes/fix_numliterals.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_operator.py b/lib-python/2.7/lib2to3/fixes/fix_operator.py rename from lib-python/2.7.0/lib2to3/fixes/fix_operator.py rename to lib-python/2.7/lib2to3/fixes/fix_operator.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_paren.py b/lib-python/2.7/lib2to3/fixes/fix_paren.py rename from lib-python/2.7.0/lib2to3/fixes/fix_paren.py rename to lib-python/2.7/lib2to3/fixes/fix_paren.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_print.py b/lib-python/2.7/lib2to3/fixes/fix_print.py rename from lib-python/2.7.0/lib2to3/fixes/fix_print.py rename to lib-python/2.7/lib2to3/fixes/fix_print.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_raise.py b/lib-python/2.7/lib2to3/fixes/fix_raise.py rename from lib-python/2.7.0/lib2to3/fixes/fix_raise.py rename to lib-python/2.7/lib2to3/fixes/fix_raise.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_raw_input.py b/lib-python/2.7/lib2to3/fixes/fix_raw_input.py rename from lib-python/2.7.0/lib2to3/fixes/fix_raw_input.py rename to lib-python/2.7/lib2to3/fixes/fix_raw_input.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_reduce.py b/lib-python/2.7/lib2to3/fixes/fix_reduce.py rename from lib-python/2.7.0/lib2to3/fixes/fix_reduce.py rename to lib-python/2.7/lib2to3/fixes/fix_reduce.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_renames.py b/lib-python/2.7/lib2to3/fixes/fix_renames.py rename from lib-python/2.7.0/lib2to3/fixes/fix_renames.py rename to lib-python/2.7/lib2to3/fixes/fix_renames.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_repr.py b/lib-python/2.7/lib2to3/fixes/fix_repr.py rename from lib-python/2.7.0/lib2to3/fixes/fix_repr.py rename to lib-python/2.7/lib2to3/fixes/fix_repr.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_set_literal.py b/lib-python/2.7/lib2to3/fixes/fix_set_literal.py rename from lib-python/2.7.0/lib2to3/fixes/fix_set_literal.py rename to lib-python/2.7/lib2to3/fixes/fix_set_literal.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_standarderror.py b/lib-python/2.7/lib2to3/fixes/fix_standarderror.py rename from lib-python/2.7.0/lib2to3/fixes/fix_standarderror.py rename to lib-python/2.7/lib2to3/fixes/fix_standarderror.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_sys_exc.py b/lib-python/2.7/lib2to3/fixes/fix_sys_exc.py rename from lib-python/2.7.0/lib2to3/fixes/fix_sys_exc.py rename to lib-python/2.7/lib2to3/fixes/fix_sys_exc.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_throw.py b/lib-python/2.7/lib2to3/fixes/fix_throw.py rename from lib-python/2.7.0/lib2to3/fixes/fix_throw.py rename to lib-python/2.7/lib2to3/fixes/fix_throw.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_tuple_params.py b/lib-python/2.7/lib2to3/fixes/fix_tuple_params.py rename from lib-python/2.7.0/lib2to3/fixes/fix_tuple_params.py rename to lib-python/2.7/lib2to3/fixes/fix_tuple_params.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_types.py b/lib-python/2.7/lib2to3/fixes/fix_types.py rename from lib-python/2.7.0/lib2to3/fixes/fix_types.py rename to lib-python/2.7/lib2to3/fixes/fix_types.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_unicode.py b/lib-python/2.7/lib2to3/fixes/fix_unicode.py rename from lib-python/2.7.0/lib2to3/fixes/fix_unicode.py rename to lib-python/2.7/lib2to3/fixes/fix_unicode.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py rename from lib-python/2.7.0/lib2to3/fixes/fix_urllib.py rename to lib-python/2.7/lib2to3/fixes/fix_urllib.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_ws_comma.py b/lib-python/2.7/lib2to3/fixes/fix_ws_comma.py rename from lib-python/2.7.0/lib2to3/fixes/fix_ws_comma.py rename to lib-python/2.7/lib2to3/fixes/fix_ws_comma.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_xrange.py b/lib-python/2.7/lib2to3/fixes/fix_xrange.py rename from lib-python/2.7.0/lib2to3/fixes/fix_xrange.py rename to lib-python/2.7/lib2to3/fixes/fix_xrange.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_xreadlines.py b/lib-python/2.7/lib2to3/fixes/fix_xreadlines.py rename from lib-python/2.7.0/lib2to3/fixes/fix_xreadlines.py rename to lib-python/2.7/lib2to3/fixes/fix_xreadlines.py diff --git a/lib-python/2.7.0/lib2to3/fixes/fix_zip.py b/lib-python/2.7/lib2to3/fixes/fix_zip.py rename from lib-python/2.7.0/lib2to3/fixes/fix_zip.py rename to lib-python/2.7/lib2to3/fixes/fix_zip.py diff --git a/lib-python/2.7.0/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py rename from lib-python/2.7.0/lib2to3/main.py rename to lib-python/2.7/lib2to3/main.py diff --git a/lib-python/2.7.0/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py rename from lib-python/2.7.0/lib2to3/patcomp.py rename to lib-python/2.7/lib2to3/patcomp.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/__init__.py b/lib-python/2.7/lib2to3/pgen2/__init__.py rename from lib-python/2.7.0/lib2to3/pgen2/__init__.py rename to lib-python/2.7/lib2to3/pgen2/__init__.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py rename from lib-python/2.7.0/lib2to3/pgen2/conv.py rename to lib-python/2.7/lib2to3/pgen2/conv.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py rename from lib-python/2.7.0/lib2to3/pgen2/driver.py rename to lib-python/2.7/lib2to3/pgen2/driver.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/grammar.py b/lib-python/2.7/lib2to3/pgen2/grammar.py rename from lib-python/2.7.0/lib2to3/pgen2/grammar.py rename to lib-python/2.7/lib2to3/pgen2/grammar.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/literals.py b/lib-python/2.7/lib2to3/pgen2/literals.py rename from lib-python/2.7.0/lib2to3/pgen2/literals.py rename to lib-python/2.7/lib2to3/pgen2/literals.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/parse.py b/lib-python/2.7/lib2to3/pgen2/parse.py rename from lib-python/2.7.0/lib2to3/pgen2/parse.py rename to lib-python/2.7/lib2to3/pgen2/parse.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/pgen.py b/lib-python/2.7/lib2to3/pgen2/pgen.py rename from lib-python/2.7.0/lib2to3/pgen2/pgen.py rename to lib-python/2.7/lib2to3/pgen2/pgen.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/token.py b/lib-python/2.7/lib2to3/pgen2/token.py rename from lib-python/2.7.0/lib2to3/pgen2/token.py rename to lib-python/2.7/lib2to3/pgen2/token.py diff --git a/lib-python/2.7.0/lib2to3/pgen2/tokenize.py b/lib-python/2.7/lib2to3/pgen2/tokenize.py rename from lib-python/2.7.0/lib2to3/pgen2/tokenize.py rename to lib-python/2.7/lib2to3/pgen2/tokenize.py diff --git a/lib-python/2.7.0/lib2to3/pygram.py b/lib-python/2.7/lib2to3/pygram.py rename from lib-python/2.7.0/lib2to3/pygram.py rename to lib-python/2.7/lib2to3/pygram.py diff --git a/lib-python/2.7.0/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py rename from lib-python/2.7.0/lib2to3/pytree.py rename to lib-python/2.7/lib2to3/pytree.py diff --git a/lib-python/2.7.0/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py rename from lib-python/2.7.0/lib2to3/refactor.py rename to lib-python/2.7/lib2to3/refactor.py diff --git a/lib-python/2.7.0/lib2to3/tests/__init__.py b/lib-python/2.7/lib2to3/tests/__init__.py rename from lib-python/2.7.0/lib2to3/tests/__init__.py rename to lib-python/2.7/lib2to3/tests/__init__.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/README b/lib-python/2.7/lib2to3/tests/data/README rename from lib-python/2.7.0/lib2to3/tests/data/README rename to lib-python/2.7/lib2to3/tests/data/README diff --git a/lib-python/2.7.0/lib2to3/tests/data/bom.py b/lib-python/2.7/lib2to3/tests/data/bom.py rename from lib-python/2.7.0/lib2to3/tests/data/bom.py rename to lib-python/2.7/lib2to3/tests/data/bom.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/crlf.py b/lib-python/2.7/lib2to3/tests/data/crlf.py rename from lib-python/2.7.0/lib2to3/tests/data/crlf.py rename to lib-python/2.7/lib2to3/tests/data/crlf.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/different_encoding.py b/lib-python/2.7/lib2to3/tests/data/different_encoding.py rename from lib-python/2.7.0/lib2to3/tests/data/different_encoding.py rename to lib-python/2.7/lib2to3/tests/data/different_encoding.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/bad_order.py b/lib-python/2.7/lib2to3/tests/data/fixers/bad_order.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/bad_order.py rename to lib-python/2.7/lib2to3/tests/data/fixers/bad_order.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/__init__.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/__init__.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/__init__.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/__init__.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_explicit.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_explicit.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_explicit.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_explicit.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_first.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_first.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_first.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_first.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_last.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_last.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_last.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_last.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_parrot.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_parrot.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_parrot.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_parrot.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_preorder.py b/lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_preorder.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/myfixes/fix_preorder.py rename to lib-python/2.7/lib2to3/tests/data/fixers/myfixes/fix_preorder.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/no_fixer_cls.py b/lib-python/2.7/lib2to3/tests/data/fixers/no_fixer_cls.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/no_fixer_cls.py rename to lib-python/2.7/lib2to3/tests/data/fixers/no_fixer_cls.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/fixers/parrot_example.py b/lib-python/2.7/lib2to3/tests/data/fixers/parrot_example.py rename from lib-python/2.7.0/lib2to3/tests/data/fixers/parrot_example.py rename to lib-python/2.7/lib2to3/tests/data/fixers/parrot_example.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/infinite_recursion.py b/lib-python/2.7/lib2to3/tests/data/infinite_recursion.py rename from lib-python/2.7.0/lib2to3/tests/data/infinite_recursion.py rename to lib-python/2.7/lib2to3/tests/data/infinite_recursion.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py rename from lib-python/2.7.0/lib2to3/tests/data/py2_test_grammar.py rename to lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py diff --git a/lib-python/2.7.0/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py rename from lib-python/2.7.0/lib2to3/tests/data/py3_test_grammar.py rename to lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py diff --git a/lib-python/2.7.0/lib2to3/tests/pytree_idempotency.py b/lib-python/2.7/lib2to3/tests/pytree_idempotency.py rename from lib-python/2.7.0/lib2to3/tests/pytree_idempotency.py rename to lib-python/2.7/lib2to3/tests/pytree_idempotency.py diff --git a/lib-python/2.7.0/lib2to3/tests/support.py b/lib-python/2.7/lib2to3/tests/support.py rename from lib-python/2.7.0/lib2to3/tests/support.py rename to lib-python/2.7/lib2to3/tests/support.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_all_fixers.py b/lib-python/2.7/lib2to3/tests/test_all_fixers.py rename from lib-python/2.7.0/lib2to3/tests/test_all_fixers.py rename to lib-python/2.7/lib2to3/tests/test_all_fixers.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py rename from lib-python/2.7.0/lib2to3/tests/test_fixers.py rename to lib-python/2.7/lib2to3/tests/test_fixers.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_main.py b/lib-python/2.7/lib2to3/tests/test_main.py rename from lib-python/2.7.0/lib2to3/tests/test_main.py rename to lib-python/2.7/lib2to3/tests/test_main.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_parser.py b/lib-python/2.7/lib2to3/tests/test_parser.py rename from lib-python/2.7.0/lib2to3/tests/test_parser.py rename to lib-python/2.7/lib2to3/tests/test_parser.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_pytree.py b/lib-python/2.7/lib2to3/tests/test_pytree.py rename from lib-python/2.7.0/lib2to3/tests/test_pytree.py rename to lib-python/2.7/lib2to3/tests/test_pytree.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py rename from lib-python/2.7.0/lib2to3/tests/test_refactor.py rename to lib-python/2.7/lib2to3/tests/test_refactor.py diff --git a/lib-python/2.7.0/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py rename from lib-python/2.7.0/lib2to3/tests/test_util.py rename to lib-python/2.7/lib2to3/tests/test_util.py diff --git a/lib-python/2.7.0/linecache.py b/lib-python/2.7/linecache.py rename from lib-python/2.7.0/linecache.py rename to lib-python/2.7/linecache.py diff --git a/lib-python/2.7.0/locale.py b/lib-python/2.7/locale.py rename from lib-python/2.7.0/locale.py rename to lib-python/2.7/locale.py diff --git a/lib-python/2.7.0/logging/__init__.py b/lib-python/2.7/logging/__init__.py rename from lib-python/2.7.0/logging/__init__.py rename to lib-python/2.7/logging/__init__.py diff --git a/lib-python/2.7.0/logging/config.py b/lib-python/2.7/logging/config.py rename from lib-python/2.7.0/logging/config.py rename to lib-python/2.7/logging/config.py diff --git a/lib-python/2.7.0/logging/handlers.py b/lib-python/2.7/logging/handlers.py rename from lib-python/2.7.0/logging/handlers.py rename to lib-python/2.7/logging/handlers.py diff --git a/lib-python/2.7.0/macpath.py b/lib-python/2.7/macpath.py rename from lib-python/2.7.0/macpath.py rename to lib-python/2.7/macpath.py diff --git a/lib-python/2.7.0/macurl2path.py b/lib-python/2.7/macurl2path.py rename from lib-python/2.7.0/macurl2path.py rename to lib-python/2.7/macurl2path.py diff --git a/lib-python/2.7.0/mailbox.py b/lib-python/2.7/mailbox.py rename from lib-python/2.7.0/mailbox.py rename to lib-python/2.7/mailbox.py diff --git a/lib-python/2.7.0/mailcap.py b/lib-python/2.7/mailcap.py rename from lib-python/2.7.0/mailcap.py rename to lib-python/2.7/mailcap.py diff --git a/lib-python/2.7.0/markupbase.py b/lib-python/2.7/markupbase.py rename from lib-python/2.7.0/markupbase.py rename to lib-python/2.7/markupbase.py diff --git a/lib-python/2.7.0/md5.py b/lib-python/2.7/md5.py rename from lib-python/2.7.0/md5.py rename to lib-python/2.7/md5.py --- a/lib-python/2.7.0/md5.py +++ b/lib-python/2.7/md5.py @@ -1,4 +1,4 @@ -# $Id: md5.py 58064 2007-09-09 20:25:00Z gregory.p.smith $ +# $Id$ # # Copyright (C) 2005 Gregory P. Smith (greg at krypto.org) # Licensed to PSF under a Contributor Agreement. diff --git a/lib-python/2.7.0/mhlib.py b/lib-python/2.7/mhlib.py rename from lib-python/2.7.0/mhlib.py rename to lib-python/2.7/mhlib.py diff --git a/lib-python/2.7.0/mimetools.py b/lib-python/2.7/mimetools.py rename from lib-python/2.7.0/mimetools.py rename to lib-python/2.7/mimetools.py diff --git a/lib-python/2.7.0/mimetypes.py b/lib-python/2.7/mimetypes.py rename from lib-python/2.7.0/mimetypes.py rename to lib-python/2.7/mimetypes.py diff --git a/lib-python/2.7.0/mimify.py b/lib-python/2.7/mimify.py rename from lib-python/2.7.0/mimify.py rename to lib-python/2.7/mimify.py diff --git a/lib-python/2.7.0/modulefinder.py b/lib-python/2.7/modulefinder.py rename from lib-python/2.7.0/modulefinder.py rename to lib-python/2.7/modulefinder.py diff --git a/lib-python/2.7.0/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py rename from lib-python/2.7.0/msilib/__init__.py rename to lib-python/2.7/msilib/__init__.py diff --git a/lib-python/2.7.0/msilib/schema.py b/lib-python/2.7/msilib/schema.py rename from lib-python/2.7.0/msilib/schema.py rename to lib-python/2.7/msilib/schema.py diff --git a/lib-python/2.7.0/msilib/sequence.py b/lib-python/2.7/msilib/sequence.py rename from lib-python/2.7.0/msilib/sequence.py rename to lib-python/2.7/msilib/sequence.py diff --git a/lib-python/2.7.0/msilib/text.py b/lib-python/2.7/msilib/text.py rename from lib-python/2.7.0/msilib/text.py rename to lib-python/2.7/msilib/text.py diff --git a/lib-python/2.7.0/multifile.py b/lib-python/2.7/multifile.py rename from lib-python/2.7.0/multifile.py rename to lib-python/2.7/multifile.py diff --git a/lib-python/2.7.0/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py rename from lib-python/2.7.0/multiprocessing/__init__.py rename to lib-python/2.7/multiprocessing/__init__.py diff --git a/lib-python/2.7.0/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py rename from lib-python/2.7.0/multiprocessing/connection.py rename to lib-python/2.7/multiprocessing/connection.py diff --git a/lib-python/2.7.0/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py rename from lib-python/2.7.0/multiprocessing/dummy/__init__.py rename to lib-python/2.7/multiprocessing/dummy/__init__.py diff --git a/lib-python/2.7.0/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py rename from lib-python/2.7.0/multiprocessing/dummy/connection.py rename to lib-python/2.7/multiprocessing/dummy/connection.py diff --git a/lib-python/2.7.0/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py rename from lib-python/2.7.0/multiprocessing/forking.py rename to lib-python/2.7/multiprocessing/forking.py diff --git a/lib-python/2.7.0/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py rename from lib-python/2.7.0/multiprocessing/heap.py rename to lib-python/2.7/multiprocessing/heap.py diff --git a/lib-python/2.7.0/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py rename from lib-python/2.7.0/multiprocessing/managers.py rename to lib-python/2.7/multiprocessing/managers.py diff --git a/lib-python/2.7.0/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py rename from lib-python/2.7.0/multiprocessing/pool.py rename to lib-python/2.7/multiprocessing/pool.py diff --git a/lib-python/2.7.0/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py rename from lib-python/2.7.0/multiprocessing/process.py rename to lib-python/2.7/multiprocessing/process.py diff --git a/lib-python/2.7.0/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py rename from lib-python/2.7.0/multiprocessing/queues.py rename to lib-python/2.7/multiprocessing/queues.py diff --git a/lib-python/2.7.0/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py rename from lib-python/2.7.0/multiprocessing/reduction.py rename to lib-python/2.7/multiprocessing/reduction.py diff --git a/lib-python/2.7.0/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py rename from lib-python/2.7.0/multiprocessing/sharedctypes.py rename to lib-python/2.7/multiprocessing/sharedctypes.py diff --git a/lib-python/2.7.0/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py rename from lib-python/2.7.0/multiprocessing/synchronize.py rename to lib-python/2.7/multiprocessing/synchronize.py diff --git a/lib-python/2.7.0/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py rename from lib-python/2.7.0/multiprocessing/util.py rename to lib-python/2.7/multiprocessing/util.py diff --git a/lib-python/2.7.0/mutex.py b/lib-python/2.7/mutex.py rename from lib-python/2.7.0/mutex.py rename to lib-python/2.7/mutex.py diff --git a/lib-python/2.7.0/netrc.py b/lib-python/2.7/netrc.py rename from lib-python/2.7.0/netrc.py rename to lib-python/2.7/netrc.py diff --git a/lib-python/2.7.0/new.py b/lib-python/2.7/new.py rename from lib-python/2.7.0/new.py rename to lib-python/2.7/new.py diff --git a/lib-python/2.7.0/nntplib.py b/lib-python/2.7/nntplib.py rename from lib-python/2.7.0/nntplib.py rename to lib-python/2.7/nntplib.py diff --git a/lib-python/2.7.0/ntpath.py b/lib-python/2.7/ntpath.py rename from lib-python/2.7.0/ntpath.py rename to lib-python/2.7/ntpath.py diff --git a/lib-python/2.7.0/nturl2path.py b/lib-python/2.7/nturl2path.py rename from lib-python/2.7.0/nturl2path.py rename to lib-python/2.7/nturl2path.py diff --git a/lib-python/2.7.0/numbers.py b/lib-python/2.7/numbers.py rename from lib-python/2.7.0/numbers.py rename to lib-python/2.7/numbers.py diff --git a/lib-python/2.7.0/opcode.py b/lib-python/2.7/opcode.py rename from lib-python/2.7.0/opcode.py rename to lib-python/2.7/opcode.py diff --git a/lib-python/2.7.0/optparse.py b/lib-python/2.7/optparse.py rename from lib-python/2.7.0/optparse.py rename to lib-python/2.7/optparse.py diff --git a/lib-python/2.7.0/os.py b/lib-python/2.7/os.py rename from lib-python/2.7.0/os.py rename to lib-python/2.7/os.py diff --git a/lib-python/2.7.0/os2emxpath.py b/lib-python/2.7/os2emxpath.py rename from lib-python/2.7.0/os2emxpath.py rename to lib-python/2.7/os2emxpath.py diff --git a/lib-python/2.7.0/pdb.doc b/lib-python/2.7/pdb.doc rename from lib-python/2.7.0/pdb.doc rename to lib-python/2.7/pdb.doc diff --git a/lib-python/2.7.0/pdb.py b/lib-python/2.7/pdb.py rename from lib-python/2.7.0/pdb.py rename to lib-python/2.7/pdb.py diff --git a/lib-python/2.7.0/pickle.py b/lib-python/2.7/pickle.py rename from lib-python/2.7.0/pickle.py rename to lib-python/2.7/pickle.py --- a/lib-python/2.7.0/pickle.py +++ b/lib-python/2.7/pickle.py @@ -24,7 +24,7 @@ """ -__version__ = "$Revision: 72223 $" # Code version +__version__ = "$Revision$" # Code version from types import * from copy_reg import dispatch_table diff --git a/lib-python/2.7.0/pickletools.py b/lib-python/2.7/pickletools.py rename from lib-python/2.7.0/pickletools.py rename to lib-python/2.7/pickletools.py diff --git a/lib-python/2.7.0/pipes.py b/lib-python/2.7/pipes.py rename from lib-python/2.7.0/pipes.py rename to lib-python/2.7/pipes.py diff --git a/lib-python/2.7.0/pkgutil.py b/lib-python/2.7/pkgutil.py rename from lib-python/2.7.0/pkgutil.py rename to lib-python/2.7/pkgutil.py diff --git a/lib-python/2.7.0/plat-aix3/IN.py b/lib-python/2.7/plat-aix3/IN.py rename from lib-python/2.7.0/plat-aix3/IN.py rename to lib-python/2.7/plat-aix3/IN.py diff --git a/lib-python/2.7.0/plat-aix3/regen b/lib-python/2.7/plat-aix3/regen rename from lib-python/2.7.0/plat-aix3/regen rename to lib-python/2.7/plat-aix3/regen diff --git a/lib-python/2.7.0/plat-aix4/IN.py b/lib-python/2.7/plat-aix4/IN.py rename from lib-python/2.7.0/plat-aix4/IN.py rename to lib-python/2.7/plat-aix4/IN.py diff --git a/lib-python/2.7.0/plat-aix4/regen b/lib-python/2.7/plat-aix4/regen rename from lib-python/2.7.0/plat-aix4/regen rename to lib-python/2.7/plat-aix4/regen diff --git a/lib-python/2.7.0/plat-atheos/IN.py b/lib-python/2.7/plat-atheos/IN.py rename from lib-python/2.7.0/plat-atheos/IN.py rename to lib-python/2.7/plat-atheos/IN.py diff --git a/lib-python/2.7.0/plat-atheos/TYPES.py b/lib-python/2.7/plat-atheos/TYPES.py rename from lib-python/2.7.0/plat-atheos/TYPES.py rename to lib-python/2.7/plat-atheos/TYPES.py diff --git a/lib-python/2.7.0/plat-atheos/regen b/lib-python/2.7/plat-atheos/regen rename from lib-python/2.7.0/plat-atheos/regen rename to lib-python/2.7/plat-atheos/regen diff --git a/lib-python/2.7.0/plat-beos5/IN.py b/lib-python/2.7/plat-beos5/IN.py rename from lib-python/2.7.0/plat-beos5/IN.py rename to lib-python/2.7/plat-beos5/IN.py diff --git a/lib-python/2.7.0/plat-beos5/regen b/lib-python/2.7/plat-beos5/regen rename from lib-python/2.7.0/plat-beos5/regen rename to lib-python/2.7/plat-beos5/regen diff --git a/lib-python/2.7.0/plat-darwin/IN.py b/lib-python/2.7/plat-darwin/IN.py rename from lib-python/2.7.0/plat-darwin/IN.py rename to lib-python/2.7/plat-darwin/IN.py diff --git a/lib-python/2.7.0/plat-darwin/regen b/lib-python/2.7/plat-darwin/regen rename from lib-python/2.7.0/plat-darwin/regen rename to lib-python/2.7/plat-darwin/regen diff --git a/lib-python/2.7.0/plat-freebsd4/IN.py b/lib-python/2.7/plat-freebsd4/IN.py rename from lib-python/2.7.0/plat-freebsd4/IN.py rename to lib-python/2.7/plat-freebsd4/IN.py diff --git a/lib-python/2.7.0/plat-freebsd4/regen b/lib-python/2.7/plat-freebsd4/regen rename from lib-python/2.7.0/plat-freebsd4/regen rename to lib-python/2.7/plat-freebsd4/regen diff --git a/lib-python/2.7.0/plat-freebsd5/IN.py b/lib-python/2.7/plat-freebsd5/IN.py rename from lib-python/2.7.0/plat-freebsd5/IN.py rename to lib-python/2.7/plat-freebsd5/IN.py diff --git a/lib-python/2.7.0/plat-freebsd5/regen b/lib-python/2.7/plat-freebsd5/regen rename from lib-python/2.7.0/plat-freebsd5/regen rename to lib-python/2.7/plat-freebsd5/regen diff --git a/lib-python/2.7.0/plat-freebsd6/IN.py b/lib-python/2.7/plat-freebsd6/IN.py rename from lib-python/2.7.0/plat-freebsd6/IN.py rename to lib-python/2.7/plat-freebsd6/IN.py diff --git a/lib-python/2.7.0/plat-freebsd6/regen b/lib-python/2.7/plat-freebsd6/regen rename from lib-python/2.7.0/plat-freebsd6/regen rename to lib-python/2.7/plat-freebsd6/regen diff --git a/lib-python/2.7.0/plat-freebsd7/IN.py b/lib-python/2.7/plat-freebsd7/IN.py rename from lib-python/2.7.0/plat-freebsd7/IN.py rename to lib-python/2.7/plat-freebsd7/IN.py diff --git a/lib-python/2.7.0/plat-freebsd7/regen b/lib-python/2.7/plat-freebsd7/regen rename from lib-python/2.7.0/plat-freebsd7/regen rename to lib-python/2.7/plat-freebsd7/regen diff --git a/lib-python/2.7.0/plat-freebsd8/IN.py b/lib-python/2.7/plat-freebsd8/IN.py rename from lib-python/2.7.0/plat-freebsd8/IN.py rename to lib-python/2.7/plat-freebsd8/IN.py diff --git a/lib-python/2.7.0/plat-freebsd8/regen b/lib-python/2.7/plat-freebsd8/regen rename from lib-python/2.7.0/plat-freebsd8/regen rename to lib-python/2.7/plat-freebsd8/regen diff --git a/lib-python/2.7.0/plat-generic/regen b/lib-python/2.7/plat-generic/regen rename from lib-python/2.7.0/plat-generic/regen rename to lib-python/2.7/plat-generic/regen diff --git a/lib-python/2.7.0/plat-irix5/AL.py b/lib-python/2.7/plat-irix5/AL.py rename from lib-python/2.7.0/plat-irix5/AL.py rename to lib-python/2.7/plat-irix5/AL.py diff --git a/lib-python/2.7.0/plat-irix5/CD.py b/lib-python/2.7/plat-irix5/CD.py rename from lib-python/2.7.0/plat-irix5/CD.py rename to lib-python/2.7/plat-irix5/CD.py diff --git a/lib-python/2.7.0/plat-irix5/CL.py b/lib-python/2.7/plat-irix5/CL.py rename from lib-python/2.7.0/plat-irix5/CL.py rename to lib-python/2.7/plat-irix5/CL.py diff --git a/lib-python/2.7.0/plat-irix5/CL_old.py b/lib-python/2.7/plat-irix5/CL_old.py rename from lib-python/2.7.0/plat-irix5/CL_old.py rename to lib-python/2.7/plat-irix5/CL_old.py diff --git a/lib-python/2.7.0/plat-irix5/DEVICE.py b/lib-python/2.7/plat-irix5/DEVICE.py rename from lib-python/2.7.0/plat-irix5/DEVICE.py rename to lib-python/2.7/plat-irix5/DEVICE.py diff --git a/lib-python/2.7.0/plat-irix5/ERRNO.py b/lib-python/2.7/plat-irix5/ERRNO.py rename from lib-python/2.7.0/plat-irix5/ERRNO.py rename to lib-python/2.7/plat-irix5/ERRNO.py diff --git a/lib-python/2.7.0/plat-irix5/FILE.py b/lib-python/2.7/plat-irix5/FILE.py rename from lib-python/2.7.0/plat-irix5/FILE.py rename to lib-python/2.7/plat-irix5/FILE.py diff --git a/lib-python/2.7.0/plat-irix5/FL.py b/lib-python/2.7/plat-irix5/FL.py rename from lib-python/2.7.0/plat-irix5/FL.py rename to lib-python/2.7/plat-irix5/FL.py diff --git a/lib-python/2.7.0/plat-irix5/GET.py b/lib-python/2.7/plat-irix5/GET.py rename from lib-python/2.7.0/plat-irix5/GET.py rename to lib-python/2.7/plat-irix5/GET.py diff --git a/lib-python/2.7.0/plat-irix5/GL.py b/lib-python/2.7/plat-irix5/GL.py rename from lib-python/2.7.0/plat-irix5/GL.py rename to lib-python/2.7/plat-irix5/GL.py diff --git a/lib-python/2.7.0/plat-irix5/GLWS.py b/lib-python/2.7/plat-irix5/GLWS.py rename from lib-python/2.7.0/plat-irix5/GLWS.py rename to lib-python/2.7/plat-irix5/GLWS.py diff --git a/lib-python/2.7.0/plat-irix5/IN.py b/lib-python/2.7/plat-irix5/IN.py rename from lib-python/2.7.0/plat-irix5/IN.py rename to lib-python/2.7/plat-irix5/IN.py diff --git a/lib-python/2.7.0/plat-irix5/IOCTL.py b/lib-python/2.7/plat-irix5/IOCTL.py rename from lib-python/2.7.0/plat-irix5/IOCTL.py rename to lib-python/2.7/plat-irix5/IOCTL.py diff --git a/lib-python/2.7.0/plat-irix5/SV.py b/lib-python/2.7/plat-irix5/SV.py rename from lib-python/2.7.0/plat-irix5/SV.py rename to lib-python/2.7/plat-irix5/SV.py diff --git a/lib-python/2.7.0/plat-irix5/WAIT.py b/lib-python/2.7/plat-irix5/WAIT.py rename from lib-python/2.7.0/plat-irix5/WAIT.py rename to lib-python/2.7/plat-irix5/WAIT.py diff --git a/lib-python/2.7.0/plat-irix5/cddb.py b/lib-python/2.7/plat-irix5/cddb.py rename from lib-python/2.7.0/plat-irix5/cddb.py rename to lib-python/2.7/plat-irix5/cddb.py diff --git a/lib-python/2.7.0/plat-irix5/cdplayer.py b/lib-python/2.7/plat-irix5/cdplayer.py rename from lib-python/2.7.0/plat-irix5/cdplayer.py rename to lib-python/2.7/plat-irix5/cdplayer.py diff --git a/lib-python/2.7.0/plat-irix5/flp.doc b/lib-python/2.7/plat-irix5/flp.doc rename from lib-python/2.7.0/plat-irix5/flp.doc rename to lib-python/2.7/plat-irix5/flp.doc diff --git a/lib-python/2.7.0/plat-irix5/flp.py b/lib-python/2.7/plat-irix5/flp.py rename from lib-python/2.7.0/plat-irix5/flp.py rename to lib-python/2.7/plat-irix5/flp.py diff --git a/lib-python/2.7.0/plat-irix5/jpeg.py b/lib-python/2.7/plat-irix5/jpeg.py rename from lib-python/2.7.0/plat-irix5/jpeg.py rename to lib-python/2.7/plat-irix5/jpeg.py diff --git a/lib-python/2.7.0/plat-irix5/panel.py b/lib-python/2.7/plat-irix5/panel.py rename from lib-python/2.7.0/plat-irix5/panel.py rename to lib-python/2.7/plat-irix5/panel.py diff --git a/lib-python/2.7.0/plat-irix5/panelparser.py b/lib-python/2.7/plat-irix5/panelparser.py rename from lib-python/2.7.0/plat-irix5/panelparser.py rename to lib-python/2.7/plat-irix5/panelparser.py diff --git a/lib-python/2.7.0/plat-irix5/readcd.doc b/lib-python/2.7/plat-irix5/readcd.doc rename from lib-python/2.7.0/plat-irix5/readcd.doc rename to lib-python/2.7/plat-irix5/readcd.doc diff --git a/lib-python/2.7.0/plat-irix5/readcd.py b/lib-python/2.7/plat-irix5/readcd.py rename from lib-python/2.7.0/plat-irix5/readcd.py rename to lib-python/2.7/plat-irix5/readcd.py diff --git a/lib-python/2.7.0/plat-irix5/regen b/lib-python/2.7/plat-irix5/regen rename from lib-python/2.7.0/plat-irix5/regen rename to lib-python/2.7/plat-irix5/regen diff --git a/lib-python/2.7.0/plat-irix5/torgb.py b/lib-python/2.7/plat-irix5/torgb.py rename from lib-python/2.7.0/plat-irix5/torgb.py rename to lib-python/2.7/plat-irix5/torgb.py diff --git a/lib-python/2.7.0/plat-irix6/AL.py b/lib-python/2.7/plat-irix6/AL.py rename from lib-python/2.7.0/plat-irix6/AL.py rename to lib-python/2.7/plat-irix6/AL.py diff --git a/lib-python/2.7.0/plat-irix6/CD.py b/lib-python/2.7/plat-irix6/CD.py rename from lib-python/2.7.0/plat-irix6/CD.py rename to lib-python/2.7/plat-irix6/CD.py diff --git a/lib-python/2.7.0/plat-irix6/CL.py b/lib-python/2.7/plat-irix6/CL.py rename from lib-python/2.7.0/plat-irix6/CL.py rename to lib-python/2.7/plat-irix6/CL.py diff --git a/lib-python/2.7.0/plat-irix6/DEVICE.py b/lib-python/2.7/plat-irix6/DEVICE.py rename from lib-python/2.7.0/plat-irix6/DEVICE.py rename to lib-python/2.7/plat-irix6/DEVICE.py diff --git a/lib-python/2.7.0/plat-irix6/ERRNO.py b/lib-python/2.7/plat-irix6/ERRNO.py rename from lib-python/2.7.0/plat-irix6/ERRNO.py rename to lib-python/2.7/plat-irix6/ERRNO.py diff --git a/lib-python/2.7.0/plat-irix6/FILE.py b/lib-python/2.7/plat-irix6/FILE.py rename from lib-python/2.7.0/plat-irix6/FILE.py rename to lib-python/2.7/plat-irix6/FILE.py diff --git a/lib-python/2.7.0/plat-irix6/FL.py b/lib-python/2.7/plat-irix6/FL.py rename from lib-python/2.7.0/plat-irix6/FL.py rename to lib-python/2.7/plat-irix6/FL.py diff --git a/lib-python/2.7.0/plat-irix6/GET.py b/lib-python/2.7/plat-irix6/GET.py rename from lib-python/2.7.0/plat-irix6/GET.py rename to lib-python/2.7/plat-irix6/GET.py diff --git a/lib-python/2.7.0/plat-irix6/GL.py b/lib-python/2.7/plat-irix6/GL.py rename from lib-python/2.7.0/plat-irix6/GL.py rename to lib-python/2.7/plat-irix6/GL.py diff --git a/lib-python/2.7.0/plat-irix6/GLWS.py b/lib-python/2.7/plat-irix6/GLWS.py rename from lib-python/2.7.0/plat-irix6/GLWS.py rename to lib-python/2.7/plat-irix6/GLWS.py diff --git a/lib-python/2.7.0/plat-irix6/IN.py b/lib-python/2.7/plat-irix6/IN.py rename from lib-python/2.7.0/plat-irix6/IN.py rename to lib-python/2.7/plat-irix6/IN.py diff --git a/lib-python/2.7.0/plat-irix6/IOCTL.py b/lib-python/2.7/plat-irix6/IOCTL.py rename from lib-python/2.7.0/plat-irix6/IOCTL.py rename to lib-python/2.7/plat-irix6/IOCTL.py diff --git a/lib-python/2.7.0/plat-irix6/SV.py b/lib-python/2.7/plat-irix6/SV.py rename from lib-python/2.7.0/plat-irix6/SV.py rename to lib-python/2.7/plat-irix6/SV.py diff --git a/lib-python/2.7.0/plat-irix6/WAIT.py b/lib-python/2.7/plat-irix6/WAIT.py rename from lib-python/2.7.0/plat-irix6/WAIT.py rename to lib-python/2.7/plat-irix6/WAIT.py diff --git a/lib-python/2.7.0/plat-irix6/cddb.py b/lib-python/2.7/plat-irix6/cddb.py rename from lib-python/2.7.0/plat-irix6/cddb.py rename to lib-python/2.7/plat-irix6/cddb.py diff --git a/lib-python/2.7.0/plat-irix6/cdplayer.py b/lib-python/2.7/plat-irix6/cdplayer.py rename from lib-python/2.7.0/plat-irix6/cdplayer.py rename to lib-python/2.7/plat-irix6/cdplayer.py diff --git a/lib-python/2.7.0/plat-irix6/flp.doc b/lib-python/2.7/plat-irix6/flp.doc rename from lib-python/2.7.0/plat-irix6/flp.doc rename to lib-python/2.7/plat-irix6/flp.doc diff --git a/lib-python/2.7.0/plat-irix6/flp.py b/lib-python/2.7/plat-irix6/flp.py rename from lib-python/2.7.0/plat-irix6/flp.py rename to lib-python/2.7/plat-irix6/flp.py diff --git a/lib-python/2.7.0/plat-irix6/jpeg.py b/lib-python/2.7/plat-irix6/jpeg.py rename from lib-python/2.7.0/plat-irix6/jpeg.py rename to lib-python/2.7/plat-irix6/jpeg.py diff --git a/lib-python/2.7.0/plat-irix6/panel.py b/lib-python/2.7/plat-irix6/panel.py rename from lib-python/2.7.0/plat-irix6/panel.py rename to lib-python/2.7/plat-irix6/panel.py diff --git a/lib-python/2.7.0/plat-irix6/panelparser.py b/lib-python/2.7/plat-irix6/panelparser.py rename from lib-python/2.7.0/plat-irix6/panelparser.py rename to lib-python/2.7/plat-irix6/panelparser.py diff --git a/lib-python/2.7.0/plat-irix6/readcd.doc b/lib-python/2.7/plat-irix6/readcd.doc rename from lib-python/2.7.0/plat-irix6/readcd.doc rename to lib-python/2.7/plat-irix6/readcd.doc diff --git a/lib-python/2.7.0/plat-irix6/readcd.py b/lib-python/2.7/plat-irix6/readcd.py rename from lib-python/2.7.0/plat-irix6/readcd.py rename to lib-python/2.7/plat-irix6/readcd.py diff --git a/lib-python/2.7.0/plat-irix6/regen b/lib-python/2.7/plat-irix6/regen rename from lib-python/2.7.0/plat-irix6/regen rename to lib-python/2.7/plat-irix6/regen diff --git a/lib-python/2.7.0/plat-irix6/torgb.py b/lib-python/2.7/plat-irix6/torgb.py rename from lib-python/2.7.0/plat-irix6/torgb.py rename to lib-python/2.7/plat-irix6/torgb.py diff --git a/lib-python/2.7.0/plat-linux2/CDROM.py b/lib-python/2.7/plat-linux2/CDROM.py rename from lib-python/2.7.0/plat-linux2/CDROM.py rename to lib-python/2.7/plat-linux2/CDROM.py diff --git a/lib-python/2.7.0/plat-linux2/DLFCN.py b/lib-python/2.7/plat-linux2/DLFCN.py rename from lib-python/2.7.0/plat-linux2/DLFCN.py rename to lib-python/2.7/plat-linux2/DLFCN.py diff --git a/lib-python/2.7.0/plat-linux2/IN.py b/lib-python/2.7/plat-linux2/IN.py rename from lib-python/2.7.0/plat-linux2/IN.py rename to lib-python/2.7/plat-linux2/IN.py diff --git a/lib-python/2.7.0/plat-linux2/TYPES.py b/lib-python/2.7/plat-linux2/TYPES.py rename from lib-python/2.7.0/plat-linux2/TYPES.py rename to lib-python/2.7/plat-linux2/TYPES.py diff --git a/lib-python/2.7.0/plat-linux2/regen b/lib-python/2.7/plat-linux2/regen rename from lib-python/2.7.0/plat-linux2/regen rename to lib-python/2.7/plat-linux2/regen diff --git a/lib-python/2.7.0/plat-mac/Audio_mac.py b/lib-python/2.7/plat-mac/Audio_mac.py rename from lib-python/2.7.0/plat-mac/Audio_mac.py rename to lib-python/2.7/plat-mac/Audio_mac.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/AE.py b/lib-python/2.7/plat-mac/Carbon/AE.py rename from lib-python/2.7.0/plat-mac/Carbon/AE.py rename to lib-python/2.7/plat-mac/Carbon/AE.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/AH.py b/lib-python/2.7/plat-mac/Carbon/AH.py rename from lib-python/2.7.0/plat-mac/Carbon/AH.py rename to lib-python/2.7/plat-mac/Carbon/AH.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Alias.py b/lib-python/2.7/plat-mac/Carbon/Alias.py rename from lib-python/2.7.0/plat-mac/Carbon/Alias.py rename to lib-python/2.7/plat-mac/Carbon/Alias.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Aliases.py b/lib-python/2.7/plat-mac/Carbon/Aliases.py rename from lib-python/2.7.0/plat-mac/Carbon/Aliases.py rename to lib-python/2.7/plat-mac/Carbon/Aliases.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/App.py b/lib-python/2.7/plat-mac/Carbon/App.py rename from lib-python/2.7.0/plat-mac/Carbon/App.py rename to lib-python/2.7/plat-mac/Carbon/App.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Appearance.py b/lib-python/2.7/plat-mac/Carbon/Appearance.py rename from lib-python/2.7.0/plat-mac/Carbon/Appearance.py rename to lib-python/2.7/plat-mac/Carbon/Appearance.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/AppleEvents.py b/lib-python/2.7/plat-mac/Carbon/AppleEvents.py rename from lib-python/2.7.0/plat-mac/Carbon/AppleEvents.py rename to lib-python/2.7/plat-mac/Carbon/AppleEvents.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/AppleHelp.py b/lib-python/2.7/plat-mac/Carbon/AppleHelp.py rename from lib-python/2.7.0/plat-mac/Carbon/AppleHelp.py rename to lib-python/2.7/plat-mac/Carbon/AppleHelp.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CF.py b/lib-python/2.7/plat-mac/Carbon/CF.py rename from lib-python/2.7.0/plat-mac/Carbon/CF.py rename to lib-python/2.7/plat-mac/Carbon/CF.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CG.py b/lib-python/2.7/plat-mac/Carbon/CG.py rename from lib-python/2.7.0/plat-mac/Carbon/CG.py rename to lib-python/2.7/plat-mac/Carbon/CG.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CarbonEvents.py b/lib-python/2.7/plat-mac/Carbon/CarbonEvents.py rename from lib-python/2.7.0/plat-mac/Carbon/CarbonEvents.py rename to lib-python/2.7/plat-mac/Carbon/CarbonEvents.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CarbonEvt.py b/lib-python/2.7/plat-mac/Carbon/CarbonEvt.py rename from lib-python/2.7.0/plat-mac/Carbon/CarbonEvt.py rename to lib-python/2.7/plat-mac/Carbon/CarbonEvt.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Cm.py b/lib-python/2.7/plat-mac/Carbon/Cm.py rename from lib-python/2.7.0/plat-mac/Carbon/Cm.py rename to lib-python/2.7/plat-mac/Carbon/Cm.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Components.py b/lib-python/2.7/plat-mac/Carbon/Components.py rename from lib-python/2.7.0/plat-mac/Carbon/Components.py rename to lib-python/2.7/plat-mac/Carbon/Components.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/ControlAccessor.py b/lib-python/2.7/plat-mac/Carbon/ControlAccessor.py rename from lib-python/2.7.0/plat-mac/Carbon/ControlAccessor.py rename to lib-python/2.7/plat-mac/Carbon/ControlAccessor.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Controls.py b/lib-python/2.7/plat-mac/Carbon/Controls.py rename from lib-python/2.7.0/plat-mac/Carbon/Controls.py rename to lib-python/2.7/plat-mac/Carbon/Controls.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CoreFoundation.py b/lib-python/2.7/plat-mac/Carbon/CoreFoundation.py rename from lib-python/2.7.0/plat-mac/Carbon/CoreFoundation.py rename to lib-python/2.7/plat-mac/Carbon/CoreFoundation.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/CoreGraphics.py b/lib-python/2.7/plat-mac/Carbon/CoreGraphics.py rename from lib-python/2.7.0/plat-mac/Carbon/CoreGraphics.py rename to lib-python/2.7/plat-mac/Carbon/CoreGraphics.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Ctl.py b/lib-python/2.7/plat-mac/Carbon/Ctl.py rename from lib-python/2.7.0/plat-mac/Carbon/Ctl.py rename to lib-python/2.7/plat-mac/Carbon/Ctl.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Dialogs.py b/lib-python/2.7/plat-mac/Carbon/Dialogs.py rename from lib-python/2.7.0/plat-mac/Carbon/Dialogs.py rename to lib-python/2.7/plat-mac/Carbon/Dialogs.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Dlg.py b/lib-python/2.7/plat-mac/Carbon/Dlg.py rename from lib-python/2.7.0/plat-mac/Carbon/Dlg.py rename to lib-python/2.7/plat-mac/Carbon/Dlg.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Drag.py b/lib-python/2.7/plat-mac/Carbon/Drag.py rename from lib-python/2.7.0/plat-mac/Carbon/Drag.py rename to lib-python/2.7/plat-mac/Carbon/Drag.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Dragconst.py b/lib-python/2.7/plat-mac/Carbon/Dragconst.py rename from lib-python/2.7.0/plat-mac/Carbon/Dragconst.py rename to lib-python/2.7/plat-mac/Carbon/Dragconst.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Events.py b/lib-python/2.7/plat-mac/Carbon/Events.py rename from lib-python/2.7.0/plat-mac/Carbon/Events.py rename to lib-python/2.7/plat-mac/Carbon/Events.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Evt.py b/lib-python/2.7/plat-mac/Carbon/Evt.py rename from lib-python/2.7.0/plat-mac/Carbon/Evt.py rename to lib-python/2.7/plat-mac/Carbon/Evt.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/File.py b/lib-python/2.7/plat-mac/Carbon/File.py rename from lib-python/2.7.0/plat-mac/Carbon/File.py rename to lib-python/2.7/plat-mac/Carbon/File.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Files.py b/lib-python/2.7/plat-mac/Carbon/Files.py rename from lib-python/2.7.0/plat-mac/Carbon/Files.py rename to lib-python/2.7/plat-mac/Carbon/Files.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Fm.py b/lib-python/2.7/plat-mac/Carbon/Fm.py rename from lib-python/2.7.0/plat-mac/Carbon/Fm.py rename to lib-python/2.7/plat-mac/Carbon/Fm.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Folder.py b/lib-python/2.7/plat-mac/Carbon/Folder.py rename from lib-python/2.7.0/plat-mac/Carbon/Folder.py rename to lib-python/2.7/plat-mac/Carbon/Folder.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Folders.py b/lib-python/2.7/plat-mac/Carbon/Folders.py rename from lib-python/2.7.0/plat-mac/Carbon/Folders.py rename to lib-python/2.7/plat-mac/Carbon/Folders.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Fonts.py b/lib-python/2.7/plat-mac/Carbon/Fonts.py rename from lib-python/2.7.0/plat-mac/Carbon/Fonts.py rename to lib-python/2.7/plat-mac/Carbon/Fonts.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Help.py b/lib-python/2.7/plat-mac/Carbon/Help.py rename from lib-python/2.7.0/plat-mac/Carbon/Help.py rename to lib-python/2.7/plat-mac/Carbon/Help.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/IBCarbon.py b/lib-python/2.7/plat-mac/Carbon/IBCarbon.py rename from lib-python/2.7.0/plat-mac/Carbon/IBCarbon.py rename to lib-python/2.7/plat-mac/Carbon/IBCarbon.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/IBCarbonRuntime.py b/lib-python/2.7/plat-mac/Carbon/IBCarbonRuntime.py rename from lib-python/2.7.0/plat-mac/Carbon/IBCarbonRuntime.py rename to lib-python/2.7/plat-mac/Carbon/IBCarbonRuntime.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Icn.py b/lib-python/2.7/plat-mac/Carbon/Icn.py rename from lib-python/2.7.0/plat-mac/Carbon/Icn.py rename to lib-python/2.7/plat-mac/Carbon/Icn.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Icons.py b/lib-python/2.7/plat-mac/Carbon/Icons.py rename from lib-python/2.7.0/plat-mac/Carbon/Icons.py rename to lib-python/2.7/plat-mac/Carbon/Icons.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Launch.py b/lib-python/2.7/plat-mac/Carbon/Launch.py rename from lib-python/2.7.0/plat-mac/Carbon/Launch.py rename to lib-python/2.7/plat-mac/Carbon/Launch.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/LaunchServices.py b/lib-python/2.7/plat-mac/Carbon/LaunchServices.py rename from lib-python/2.7.0/plat-mac/Carbon/LaunchServices.py rename to lib-python/2.7/plat-mac/Carbon/LaunchServices.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/List.py b/lib-python/2.7/plat-mac/Carbon/List.py rename from lib-python/2.7.0/plat-mac/Carbon/List.py rename to lib-python/2.7/plat-mac/Carbon/List.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Lists.py b/lib-python/2.7/plat-mac/Carbon/Lists.py rename from lib-python/2.7.0/plat-mac/Carbon/Lists.py rename to lib-python/2.7/plat-mac/Carbon/Lists.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/MacHelp.py b/lib-python/2.7/plat-mac/Carbon/MacHelp.py rename from lib-python/2.7.0/plat-mac/Carbon/MacHelp.py rename to lib-python/2.7/plat-mac/Carbon/MacHelp.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/MacTextEditor.py b/lib-python/2.7/plat-mac/Carbon/MacTextEditor.py rename from lib-python/2.7.0/plat-mac/Carbon/MacTextEditor.py rename to lib-python/2.7/plat-mac/Carbon/MacTextEditor.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/MediaDescr.py b/lib-python/2.7/plat-mac/Carbon/MediaDescr.py rename from lib-python/2.7.0/plat-mac/Carbon/MediaDescr.py rename to lib-python/2.7/plat-mac/Carbon/MediaDescr.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Menu.py b/lib-python/2.7/plat-mac/Carbon/Menu.py rename from lib-python/2.7.0/plat-mac/Carbon/Menu.py rename to lib-python/2.7/plat-mac/Carbon/Menu.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Menus.py b/lib-python/2.7/plat-mac/Carbon/Menus.py rename from lib-python/2.7.0/plat-mac/Carbon/Menus.py rename to lib-python/2.7/plat-mac/Carbon/Menus.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Mlte.py b/lib-python/2.7/plat-mac/Carbon/Mlte.py rename from lib-python/2.7.0/plat-mac/Carbon/Mlte.py rename to lib-python/2.7/plat-mac/Carbon/Mlte.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/OSA.py b/lib-python/2.7/plat-mac/Carbon/OSA.py rename from lib-python/2.7.0/plat-mac/Carbon/OSA.py rename to lib-python/2.7/plat-mac/Carbon/OSA.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/OSAconst.py b/lib-python/2.7/plat-mac/Carbon/OSAconst.py rename from lib-python/2.7.0/plat-mac/Carbon/OSAconst.py rename to lib-python/2.7/plat-mac/Carbon/OSAconst.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/QDOffscreen.py b/lib-python/2.7/plat-mac/Carbon/QDOffscreen.py rename from lib-python/2.7.0/plat-mac/Carbon/QDOffscreen.py rename to lib-python/2.7/plat-mac/Carbon/QDOffscreen.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Qd.py b/lib-python/2.7/plat-mac/Carbon/Qd.py rename from lib-python/2.7.0/plat-mac/Carbon/Qd.py rename to lib-python/2.7/plat-mac/Carbon/Qd.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Qdoffs.py b/lib-python/2.7/plat-mac/Carbon/Qdoffs.py rename from lib-python/2.7.0/plat-mac/Carbon/Qdoffs.py rename to lib-python/2.7/plat-mac/Carbon/Qdoffs.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Qt.py b/lib-python/2.7/plat-mac/Carbon/Qt.py rename from lib-python/2.7.0/plat-mac/Carbon/Qt.py rename to lib-python/2.7/plat-mac/Carbon/Qt.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/QuickDraw.py b/lib-python/2.7/plat-mac/Carbon/QuickDraw.py rename from lib-python/2.7.0/plat-mac/Carbon/QuickDraw.py rename to lib-python/2.7/plat-mac/Carbon/QuickDraw.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/QuickTime.py b/lib-python/2.7/plat-mac/Carbon/QuickTime.py rename from lib-python/2.7.0/plat-mac/Carbon/QuickTime.py rename to lib-python/2.7/plat-mac/Carbon/QuickTime.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Res.py b/lib-python/2.7/plat-mac/Carbon/Res.py rename from lib-python/2.7.0/plat-mac/Carbon/Res.py rename to lib-python/2.7/plat-mac/Carbon/Res.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Resources.py b/lib-python/2.7/plat-mac/Carbon/Resources.py rename from lib-python/2.7.0/plat-mac/Carbon/Resources.py rename to lib-python/2.7/plat-mac/Carbon/Resources.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Scrap.py b/lib-python/2.7/plat-mac/Carbon/Scrap.py rename from lib-python/2.7.0/plat-mac/Carbon/Scrap.py rename to lib-python/2.7/plat-mac/Carbon/Scrap.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Snd.py b/lib-python/2.7/plat-mac/Carbon/Snd.py rename from lib-python/2.7.0/plat-mac/Carbon/Snd.py rename to lib-python/2.7/plat-mac/Carbon/Snd.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Sndihooks.py b/lib-python/2.7/plat-mac/Carbon/Sndihooks.py rename from lib-python/2.7.0/plat-mac/Carbon/Sndihooks.py rename to lib-python/2.7/plat-mac/Carbon/Sndihooks.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Sound.py b/lib-python/2.7/plat-mac/Carbon/Sound.py rename from lib-python/2.7.0/plat-mac/Carbon/Sound.py rename to lib-python/2.7/plat-mac/Carbon/Sound.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/TE.py b/lib-python/2.7/plat-mac/Carbon/TE.py rename from lib-python/2.7.0/plat-mac/Carbon/TE.py rename to lib-python/2.7/plat-mac/Carbon/TE.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/TextEdit.py b/lib-python/2.7/plat-mac/Carbon/TextEdit.py rename from lib-python/2.7.0/plat-mac/Carbon/TextEdit.py rename to lib-python/2.7/plat-mac/Carbon/TextEdit.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Win.py b/lib-python/2.7/plat-mac/Carbon/Win.py rename from lib-python/2.7.0/plat-mac/Carbon/Win.py rename to lib-python/2.7/plat-mac/Carbon/Win.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/Windows.py b/lib-python/2.7/plat-mac/Carbon/Windows.py rename from lib-python/2.7.0/plat-mac/Carbon/Windows.py rename to lib-python/2.7/plat-mac/Carbon/Windows.py diff --git a/lib-python/2.7.0/plat-mac/Carbon/__init__.py b/lib-python/2.7/plat-mac/Carbon/__init__.py rename from lib-python/2.7.0/plat-mac/Carbon/__init__.py rename to lib-python/2.7/plat-mac/Carbon/__init__.py diff --git a/lib-python/2.7.0/plat-mac/EasyDialogs.py b/lib-python/2.7/plat-mac/EasyDialogs.py rename from lib-python/2.7.0/plat-mac/EasyDialogs.py rename to lib-python/2.7/plat-mac/EasyDialogs.py diff --git a/lib-python/2.7.0/plat-mac/FrameWork.py b/lib-python/2.7/plat-mac/FrameWork.py rename from lib-python/2.7.0/plat-mac/FrameWork.py rename to lib-python/2.7/plat-mac/FrameWork.py diff --git a/lib-python/2.7.0/plat-mac/MiniAEFrame.py b/lib-python/2.7/plat-mac/MiniAEFrame.py rename from lib-python/2.7.0/plat-mac/MiniAEFrame.py rename to lib-python/2.7/plat-mac/MiniAEFrame.py diff --git a/lib-python/2.7.0/plat-mac/PixMapWrapper.py b/lib-python/2.7/plat-mac/PixMapWrapper.py rename from lib-python/2.7.0/plat-mac/PixMapWrapper.py rename to lib-python/2.7/plat-mac/PixMapWrapper.py diff --git a/lib-python/2.7.0/plat-mac/aepack.py b/lib-python/2.7/plat-mac/aepack.py rename from lib-python/2.7.0/plat-mac/aepack.py rename to lib-python/2.7/plat-mac/aepack.py diff --git a/lib-python/2.7.0/plat-mac/aetools.py b/lib-python/2.7/plat-mac/aetools.py rename from lib-python/2.7.0/plat-mac/aetools.py rename to lib-python/2.7/plat-mac/aetools.py diff --git a/lib-python/2.7.0/plat-mac/aetypes.py b/lib-python/2.7/plat-mac/aetypes.py rename from lib-python/2.7.0/plat-mac/aetypes.py rename to lib-python/2.7/plat-mac/aetypes.py diff --git a/lib-python/2.7.0/plat-mac/applesingle.py b/lib-python/2.7/plat-mac/applesingle.py rename from lib-python/2.7.0/plat-mac/applesingle.py rename to lib-python/2.7/plat-mac/applesingle.py diff --git a/lib-python/2.7.0/plat-mac/appletrawmain.py b/lib-python/2.7/plat-mac/appletrawmain.py rename from lib-python/2.7.0/plat-mac/appletrawmain.py rename to lib-python/2.7/plat-mac/appletrawmain.py diff --git a/lib-python/2.7.0/plat-mac/appletrunner.py b/lib-python/2.7/plat-mac/appletrunner.py rename from lib-python/2.7.0/plat-mac/appletrunner.py rename to lib-python/2.7/plat-mac/appletrunner.py diff --git a/lib-python/2.7.0/plat-mac/argvemulator.py b/lib-python/2.7/plat-mac/argvemulator.py rename from lib-python/2.7.0/plat-mac/argvemulator.py rename to lib-python/2.7/plat-mac/argvemulator.py diff --git a/lib-python/2.7.0/plat-mac/bgenlocations.py b/lib-python/2.7/plat-mac/bgenlocations.py rename from lib-python/2.7.0/plat-mac/bgenlocations.py rename to lib-python/2.7/plat-mac/bgenlocations.py diff --git a/lib-python/2.7.0/plat-mac/buildtools.py b/lib-python/2.7/plat-mac/buildtools.py rename from lib-python/2.7.0/plat-mac/buildtools.py rename to lib-python/2.7/plat-mac/buildtools.py diff --git a/lib-python/2.7.0/plat-mac/bundlebuilder.py b/lib-python/2.7/plat-mac/bundlebuilder.py rename from lib-python/2.7.0/plat-mac/bundlebuilder.py rename to lib-python/2.7/plat-mac/bundlebuilder.py diff --git a/lib-python/2.7.0/plat-mac/cfmfile.py b/lib-python/2.7/plat-mac/cfmfile.py rename from lib-python/2.7.0/plat-mac/cfmfile.py rename to lib-python/2.7/plat-mac/cfmfile.py diff --git a/lib-python/2.7.0/plat-mac/dialogs.rsrc b/lib-python/2.7/plat-mac/dialogs.rsrc rename from lib-python/2.7.0/plat-mac/dialogs.rsrc rename to lib-python/2.7/plat-mac/dialogs.rsrc diff --git a/lib-python/2.7.0/plat-mac/errors.rsrc b/lib-python/2.7/plat-mac/errors.rsrc rename from lib-python/2.7.0/plat-mac/errors.rsrc rename to lib-python/2.7/plat-mac/errors.rsrc diff --git a/lib-python/2.7.0/plat-mac/findertools.py b/lib-python/2.7/plat-mac/findertools.py rename from lib-python/2.7.0/plat-mac/findertools.py rename to lib-python/2.7/plat-mac/findertools.py diff --git a/lib-python/2.7.0/plat-mac/gensuitemodule.py b/lib-python/2.7/plat-mac/gensuitemodule.py rename from lib-python/2.7.0/plat-mac/gensuitemodule.py rename to lib-python/2.7/plat-mac/gensuitemodule.py diff --git a/lib-python/2.7.0/plat-mac/ic.py b/lib-python/2.7/plat-mac/ic.py rename from lib-python/2.7.0/plat-mac/ic.py rename to lib-python/2.7/plat-mac/ic.py diff --git a/lib-python/2.7.0/plat-mac/icopen.py b/lib-python/2.7/plat-mac/icopen.py rename from lib-python/2.7.0/plat-mac/icopen.py rename to lib-python/2.7/plat-mac/icopen.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/CodeWarrior_suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Metrowerks_Shell_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Required.py b/lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Required.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Required.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Required.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/Standard_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/__init__.py b/lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/__init__.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/CodeWarrior/__init__.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/CodeWarrior/__init__.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Microsoft_Internet_Explorer.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Microsoft_Internet_Explorer.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Microsoft_Internet_Explorer.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Microsoft_Internet_Explorer.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Netscape_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Netscape_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Netscape_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Netscape_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Required_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Standard_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Standard_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Standard_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Standard_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/URL_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/URL_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/URL_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/URL_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/Web_Browser_Suite.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/__init__.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/__init__.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Explorer/__init__.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Explorer/__init__.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Containers_and_folders.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Containers_and_folders.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Containers_and_folders.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Containers_and_folders.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Enumerations.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Enumerations.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Enumerations.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Enumerations.py diff --git a/lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Files.py b/lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Files.py rename from lib-python/2.7.0/plat-mac/lib-scriptpackages/Finder/Files.py rename to lib-python/2.7/plat-mac/lib-scriptpackages/Finder/Files.py From noreply at buildbot.pypy.org Thu Nov 10 13:50:56 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:56 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: _is_sane_hash was renamed to _never_equal_to_string Message-ID: <20111110125056.AF7368292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49189:53f140142fc5 Date: 2011-07-19 14:02 +0200 http://bitbucket.org/pypy/pypy/changeset/53f140142fc5/ Log: _is_sane_hash was renamed to _never_equal_to_string diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -325,10 +325,10 @@ w_set.add(w_key) def delitem(self, w_set, w_item): - from pypy.objspace.std.dictmultiobject import _is_sane_hash + from pypy.objspace.std.dictmultiobject import _never_equal_to_string d = self.cast_from_void_star(w_set.sstorage) if not self.is_correct_type(w_item): - if _is_sane_hash(self.space, self.space.type(w_item)): + if _never_equal_to_string(self.space, self.space.type(w_item)): return False w_set.switch_to_object_strategy(self.space) return w_set.delitem(w_item) @@ -358,9 +358,9 @@ return keys_w def has_key(self, w_set, w_key): - from pypy.objspace.std.dictmultiobject import _is_sane_hash + from pypy.objspace.std.dictmultiobject import _never_equal_to_string if not self.is_correct_type(w_key): - if not _is_sane_hash(self.space, self.space.type(w_key)): + if not _never_equal_to_string(self.space, self.space.type(w_key)): w_set.switch_to_object_strategy(self.space) return w_set.has_key(w_key) return False From noreply at buildbot.pypy.org Thu Nov 10 13:50:58 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:58 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: _newobj moved to W_SetObject and W_FrozenSetObject Message-ID: <20111110125058.0DD488292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49190:167cc1b5687a Date: 2011-07-19 14:06 +0200 http://bitbucket.org/pypy/pypy/changeset/167cc1b5687a/ Log: _newobj moved to W_SetObject and W_FrozenSetObject diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -48,19 +48,6 @@ obj.sstorage = storage return obj - def _newobj(w_self, space, w_iterable): - """Make a new set or frozenset by taking ownership of 'rdict_w'.""" - #return space.call(space.type(w_self),W_SetIterObject(rdict_w)) - objtype = type(w_self) - if objtype is W_SetObject: - obj = W_SetObject(space, w_iterable) - elif objtype is W_FrozensetObject: - obj = W_FrozensetObject(space, w_iterable) - else: - obj = space.call_function(space.type(w_self), w_iterable) - assert isinstance(obj, W_BaseSetObject) - return obj - _lifeline_ = None def getweakref(self): return self._lifeline_ From noreply at buildbot.pypy.org Thu Nov 10 13:50:59 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:50:59 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: differentiation between set types happens in W_SetObject and W_FrozenSetObject (more OO) Message-ID: <20111110125059.602A88292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49191:e87c1f05838a Date: 2011-07-22 11:26 +0200 http://bitbucket.org/pypy/pypy/changeset/e87c1f05838a/ Log: differentiation between set types happens in W_SetObject and W_FrozenSetObject (more OO) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -37,12 +37,7 @@ def from_storage_and_strategy(w_self, storage, strategy): objtype = type(w_self) - if objtype is W_SetObject: - obj = instantiate(W_SetObject) - elif objtype is W_FrozensetObject: - obj = instantiate(W_FrozensetObject) - else: - obj = w_self.space.call_function(w_self.space.type(w_self), None) + obj = w_self._newobj(w_self.space, None) assert isinstance(obj, W_BaseSetObject) obj.strategy = strategy obj.sstorage = storage From noreply at buildbot.pypy.org Thu Nov 10 13:51:00 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:00 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: FakeInt is needed for this test class but setup_class is overwritten Message-ID: <20111110125100.AFF528292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49192:142e4c1b492d Date: 2011-07-22 16:15 +0200 http://bitbucket.org/pypy/pypy/changeset/142e4c1b492d/ Log: FakeInt is needed for this test class but setup_class is overwritten diff --git a/pypy/objspace/std/test/test_builtinshortcut.py b/pypy/objspace/std/test/test_builtinshortcut.py --- a/pypy/objspace/std/test/test_builtinshortcut.py +++ b/pypy/objspace/std/test/test_builtinshortcut.py @@ -85,6 +85,20 @@ def setup_class(cls): from pypy import conftest cls.space = conftest.gettestobjspace(**WITH_BUILTINSHORTCUT) + w_fakeint = cls.space.appexec([], """(): + class FakeInt(object): + def __init__(self, value): + self.value = value + def __hash__(self): + return hash(self.value) + + def __eq__(self, other): + if other == self.value: + return True + return False + return FakeInt + """) + cls.w_FakeInt = w_fakeint class AppTestString(test_stringobject.AppTestStringObject): def setup_class(cls): From noreply at buildbot.pypy.org Thu Nov 10 13:51:02 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:02 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added tests and fix for unhashable items in combination with EmptySetStrategy Message-ID: <20111110125102.165AE8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49193:b07c4ba0f7ba Date: 2011-07-22 16:15 +0200 http://bitbucket.org/pypy/pypy/changeset/b07c4ba0f7ba/ Log: added tests and fix for unhashable items in combination with EmptySetStrategy diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -178,6 +178,17 @@ cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) + def check_for_unhashable_objects(self, w_iterable): + w_iterator = self.space.iter(w_iterable) + while True: + try: + elem = self.space.next(w_iterator) + self.space.hash(elem) + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise + break + def get_empty_storage(self): return self.cast_to_void_star(None) @@ -230,22 +241,27 @@ return False def difference(self, w_set, w_other): + self.check_for_unhashable_objects(w_other) return w_set.copy() def difference_update(self, w_set, w_other): - pass + self.check_for_unhashable_objects(w_other) def intersect(self, w_set, w_other): + self.check_for_unhashable_objects(w_other) return w_set.copy() def intersect_update(self, w_set, w_other): + self.check_for_unhashable_objects(w_other) return w_set.copy() - def intersect_multiple(self, w_set, w_other): + def intersect_multiple(self, w_set, others_w): + self.intersect_multiple_update(w_set, others_w) return w_set.copy() - def intersect_multiple_update(self, w_set, w_other): - pass + def intersect_multiple_update(self, w_set, others_w): + for w_other in others_w: + self.intersect(w_set, w_other) def isdisjoint(self, w_set, w_other): return True @@ -828,6 +844,7 @@ w_left.difference_update(w_other) else: for w_key in space.listview(w_other): + space.hash(w_key) w_left.delitem(w_key) def inplace_sub__Set_Set(space, w_left, w_other): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -599,6 +599,15 @@ assert e.isdisjoint(x) == True assert x.isdisjoint(e) == True + def test_empty_typeerror(self): + s = set() + raises(TypeError, s.difference, [[]]) + raises(TypeError, s.difference_update, [[]]) + raises(TypeError, s.intersection, [[]]) + raises(TypeError, s.intersection_update, [[]]) + raises(TypeError, s.symmetric_difference, [[]]) + raises(TypeError, s.symmetric_difference_update, [[]]) + raises(TypeError, s.update, [[]]) def test_super_with_generator(self): def foo(): From noreply at buildbot.pypy.org Thu Nov 10 13:51:03 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:03 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: make_setdata_from_w_iterable is not needed anymore Message-ID: <20111110125103.67E098292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49194:6a2ef1ad6abe Date: 2011-07-28 11:41 +0200 http://bitbucket.org/pypy/pypy/changeset/6a2ef1ad6abe/ Log: make_setdata_from_w_iterable is not needed anymore diff --git a/pypy/objspace/std/frozensettype.py b/pypy/objspace/std/frozensettype.py --- a/pypy/objspace/std/frozensettype.py +++ b/pypy/objspace/std/frozensettype.py @@ -39,7 +39,6 @@ def descr__frozenset__new__(space, w_frozensettype, w_iterable=gateway.NoneNotWrapped): from pypy.objspace.std.setobject import W_FrozensetObject - from pypy.objspace.std.setobject import make_setdata_from_w_iterable if (space.is_w(w_frozensettype, space.w_frozenset) and w_iterable is not None and type(w_iterable) is W_FrozensetObject): return w_iterable diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -725,6 +725,7 @@ for item_w in w_iterable: if type(item_w) is not W_IntObject: break; + #XXX wont work for [1, "two", "three", 1] use StopIteration instead if item_w is w_iterable[-1]: w_set.strategy = space.fromcache(IntegerSetStrategy) w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) @@ -733,18 +734,6 @@ w_set.strategy = space.fromcache(ObjectSetStrategy) w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) -def make_setdata_from_w_iterable(space, w_iterable=None): - #XXX remove this later - """Return a new r_dict with the content of w_iterable.""" - if isinstance(w_iterable, W_BaseSetObject): - #XXX is this bad or not? - return w_iterable.getdict_w() - data = newset(space) - if w_iterable is not None: - for w_item in space.listview(w_iterable): - data[w_item] = None - return data - def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() set_strategy_and_setdata(space, w_obj, w_iterable) @@ -1087,6 +1076,7 @@ def set_isdisjoint__Set_ANY(space, w_left, w_other): #XXX maybe checking if type fits strategy first (before comparing) speeds this up a bit # since this will be used in many other functions -> general function for that + # if w_left.strategy != w_other.strategy => return w_False for w_key in space.listview(w_other): if w_left.has_key(w_key): return space.w_False diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -18,13 +18,6 @@ letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' -def make_setdata_from_w_iterable(space, w_iterable): - data = newset(space) - if w_iterable is not None: - for w_item in space.listview(w_iterable): - data[w_item] = None - return data - class W_SubSetObject(W_SetObject):pass class TestW_SetObject: From noreply at buildbot.pypy.org Thu Nov 10 13:51:04 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:04 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: need to use StopItertion to check for last element in list Message-ID: <20111110125104.C36018292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49195:fc1ddf33f169 Date: 2011-07-28 13:31 +0200 http://bitbucket.org/pypy/pypy/changeset/fc1ddf33f169/ Log: need to use StopItertion to check for last element in list diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -11,6 +11,7 @@ from pypy.rlib.objectmodel import instantiate from pypy.interpreter.generator import GeneratorIterator from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.intobject import W_IntObject class W_BaseSetObject(W_Object): typedef = None @@ -208,7 +209,6 @@ return clone def add(self, w_set, w_key): - from pypy.objspace.std.intobject import W_IntObject if type(w_key) is W_IntObject: w_set.strategy = self.space.fromcache(IntegerSetStrategy) else: @@ -722,11 +722,13 @@ return # check for integers - for item_w in w_iterable: - if type(item_w) is not W_IntObject: - break; - #XXX wont work for [1, "two", "three", 1] use StopIteration instead - if item_w is w_iterable[-1]: + iterator = iter(w_iterable) + while True: + try: + item_w = iterator.next() + if type(item_w) is not W_IntObject: + break; + except StopIteration: w_set.strategy = space.fromcache(IntegerSetStrategy) w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) return From noreply at buildbot.pypy.org Thu Nov 10 13:51:06 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:06 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: implemented popitem on W_SetObject Message-ID: <20111110125106.2F5F88292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49196:8a7f58f9e061 Date: 2011-07-28 14:04 +0200 http://bitbucket.org/pypy/pypy/changeset/8a7f58f9e061/ Log: implemented popitem on W_SetObject diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -136,6 +136,9 @@ def iter(self): return self.strategy.iter(self) + def popitem(self): + return self.strategy.popitem(self) + class W_SetObject(W_BaseSetObject): from pypy.objspace.std.settype import set_typedef as typedef @@ -288,6 +291,10 @@ def iter(self, w_set): return EmptyIteratorImplementation(self.space, w_set) + def popitem(self, w_set): + raise OperationError(self.space.w_KeyError, + self.space.wrap('pop from an empty set')) + class AbstractUnwrappedSetStrategy(object): _mixin_ = True @@ -557,6 +564,16 @@ w_set.switch_to_object_strategy(self.space) w_set.update(w_other) + def popitem(self, w_set): + storage = self.cast_from_void_star(w_set.sstorage) + try: + result = storage.popitem() + except KeyError: + # strategy may still be the same even if dict is empty + raise OperationError(self.space.w_KeyError, + self.space.wrap('pop from an empty set')) + return self.wrap(result) + class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("integer") cast_to_void_star = staticmethod(cast_to_void_star) @@ -1030,6 +1047,7 @@ #XXX move this to strategy so we don't have to # wrap all items only to get the first one #XXX use popitem + return w_left.popitem() for w_key in w_left.getkeys(): break else: diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -165,6 +165,9 @@ raises(KeyError, "a.remove(6)") def test_pop(self): + b = set() + raises(KeyError, "b.pop()") + a = set([1,2,3,4,5]) for i in xrange(5): a.pop() From noreply at buildbot.pypy.org Thu Nov 10 13:51:07 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:07 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed recent popitem changes Message-ID: <20111110125107.8B9668292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49197:b6937fff521d Date: 2011-08-23 11:34 +0200 http://bitbucket.org/pypy/pypy/changeset/b6937fff521d/ Log: fixed recent popitem changes diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -567,12 +567,13 @@ def popitem(self, w_set): storage = self.cast_from_void_star(w_set.sstorage) try: + # this returns a tuple because internally sets are dicts result = storage.popitem() except KeyError: # strategy may still be the same even if dict is empty raise OperationError(self.space.w_KeyError, self.space.wrap('pop from an empty set')) - return self.wrap(result) + return self.wrap(result[0]) class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("integer") @@ -1044,17 +1045,7 @@ return space.wrap(hash) def set_pop__Set(space, w_left): - #XXX move this to strategy so we don't have to - # wrap all items only to get the first one - #XXX use popitem return w_left.popitem() - for w_key in w_left.getkeys(): - break - else: - raise OperationError(space.w_KeyError, - space.wrap('pop from an empty set')) - w_left.delitem(w_key) - return w_key def and__Set_Set(space, w_left, w_other): new_set = w_left.intersect(w_other) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -640,7 +640,7 @@ assert self.FakeInt(5) in s def test_fakeobject_and_pop(self): - s = set([1,2,3,self.FakeInt(4), 5]) + s = set([1,2,3,self.FakeInt(4),5]) assert s.pop() assert s.pop() assert s.pop() From noreply at buildbot.pypy.org Thu Nov 10 13:51:08 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:08 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: removed/chnaged old comments Message-ID: <20111110125108.E8EED8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49198:534d51292ce2 Date: 2011-08-23 12:01 +0200 http://bitbucket.org/pypy/pypy/changeset/534d51292ce2/ Log: removed/chnaged old comments diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1085,9 +1085,7 @@ set_isdisjoint__Frozenset_Set = set_isdisjoint__Set_Set def set_isdisjoint__Set_ANY(space, w_left, w_other): - #XXX maybe checking if type fits strategy first (before comparing) speeds this up a bit - # since this will be used in many other functions -> general function for that - # if w_left.strategy != w_other.strategy => return w_False + #XXX may be optimized when other strategies are added for w_key in space.listview(w_other): if w_left.has_key(w_key): return space.w_False @@ -1112,8 +1110,6 @@ def set_symmetric_difference__Set_ANY(space, w_left, w_other): - #XXX since we need to iterate over both objects, create set - # from w_other so looking up items is fast w_other_as_set = w_left._newobj(space, w_other) w_result = w_left.symmetric_difference(w_other_as_set) return w_result From noreply at buildbot.pypy.org Thu Nov 10 13:51:10 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:10 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed creating new set based on another set (needs to be copied) Message-ID: <20111110125110.3B63D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49199:dc7e81a7ecc4 Date: 2011-08-23 13:38 +0200 http://bitbucket.org/pypy/pypy/changeset/dc7e81a7ecc4/ Log: fixed creating new set based on another set (needs to be copied) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -727,8 +727,7 @@ if isinstance(w_iterable, W_BaseSetObject): w_set.strategy = w_iterable.strategy - #XXX need to make copy here - w_set.sstorage = w_iterable.sstorage + w_set.sstorage = w_iterable.get_storage_copy() return if not isinstance(w_iterable, list): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -669,7 +669,6 @@ s = set([1,2,3,4]) raises(TypeError, s.discard, [1]) - def test_discard_evil_compare(self): class Evil(object): def __init__(self, value): @@ -685,3 +684,9 @@ s = set([1,2, Evil(frozenset([1]))]) raises(TypeError, s.discard, set([1])) + def test_create_set_from_set(self): + x = set([1,2,3]) + y = set(x) + x.pop() + assert x == set([2,3]) + assert y == set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:51:11 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:11 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: removed old comment Message-ID: <20111110125111.765408292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49200:3727073215e7 Date: 2011-08-23 14:03 +0200 http://bitbucket.org/pypy/pypy/changeset/3727073215e7/ Log: removed old comment diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -456,7 +456,6 @@ if w_set.strategy is w_other.strategy: self.symmetric_difference_update_match(w_set, w_other) return - #XXX no wrapping when strategies are equal newsetdata = newset(self.space) for w_key in w_set.getkeys(): if not w_other.has_key(w_key): From noreply at buildbot.pypy.org Thu Nov 10 13:51:13 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:13 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: create set from iterable to check length and use fastpath Message-ID: <20111110125113.452088292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49201:541a226d5845 Date: 2011-08-23 14:19 +0200 http://bitbucket.org/pypy/pypy/changeset/541a226d5845/ Log: create set from iterable to check length and use fastpath diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -520,7 +520,9 @@ w_set.sstorage = result.sstorage def issuperset(self, w_set, w_other): - #XXX always True if other is empty + if w_other.length() == 0: + return True + w_iter = self.space.iter(w_other) while True: try: @@ -965,7 +967,11 @@ if space.is_w(w_left, w_other): return space.w_True - return space.wrap(w_left.issuperset(w_other)) + w_other_as_set = w_left._newobj(space, w_other) + + if w_left.length() < w_other_as_set.length(): + return space.w_False + return space.wrap(w_left.issuperset(w_other_as_set)) frozenset_issuperset__Frozenset_ANY = set_issuperset__Set_ANY diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -611,6 +611,12 @@ yield i set([1,2,3,4,5]).issuperset(foo()) + def test_isdisjoint_with_generator(self): + def foo(): + for i in [1,2,3]: + yield i + set([1,2,3,4,5]).isdisjoint(foo()) + def test_fakeint_and_equals(self): s1 = set([1,2,3,4]) s2 = set([1,2,self.FakeInt(3), 4]) From noreply at buildbot.pypy.org Thu Nov 10 13:51:14 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:14 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored symmetric_difference Message-ID: <20111110125114.791358292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49202:9ec0c712367d Date: 2011-09-30 13:59 +0200 http://bitbucket.org/pypy/pypy/changeset/9ec0c712367d/ Log: refactored symmetric_difference diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -428,18 +428,7 @@ w_set.strategy = result.strategy w_set.sstorage = result.sstorage - def symmetric_difference(self, w_set, w_other): - #XXX no wrapping when strategies are equal - result = w_set._newobj(self.space, None) - for w_key in w_set.getkeys(): - if not w_other.has_key(w_key): - result.add(w_key) - for w_key in w_other.getkeys(): - if not w_set.has_key(w_key): - result.add(w_key) - return result - - def symmetric_difference_update_match(self, w_set, w_other): + def _symmetric_difference_unwrapped(self, w_set, w_other): d_new = self.get_empty_dict() d_this = self.cast_from_void_star(w_set.sstorage) d_other = self.cast_from_void_star(w_other.sstorage) @@ -450,12 +439,10 @@ if not key in d_other: d_new[key] = None - w_set.sstorage = self.cast_to_void_star(d_new) + storage = self.cast_to_void_star(d_new) + return storage - def symmetric_difference_update(self, w_set, w_other): - if w_set.strategy is w_other.strategy: - self.symmetric_difference_update_match(w_set, w_other) - return + def _symmetric_difference_wrapped(self, w_set, w_other): newsetdata = newset(self.space) for w_key in w_set.getkeys(): if not w_other.has_key(w_key): @@ -464,9 +451,29 @@ if not w_set.has_key(w_key): newsetdata[w_key] = None - # do not switch strategy here if other items match - w_set.strategy = strategy = self.space.fromcache(ObjectSetStrategy) - w_set.sstorage = strategy.cast_to_void_star(newsetdata) + strategy = self.space.fromcache(ObjectSetStrategy) + return strategy.cast_to_void_star(newsetdata) + + def symmetric_difference(self, w_set, w_other): + #XXX if difference are only ints this wont return an IntSet + if w_set.strategy is w_other.strategy: + strategy = w_set.strategy + storage = self._symmetric_difference_unwrapped(w_set, w_other) + else: + strategy = self.space.fromcache(ObjectSetStrategy) + storage = self._symmetric_difference_wrapped(w_set, w_other) + return w_set.from_storage_and_strategy(storage, strategy) + + def symmetric_difference_update(self, w_set, w_other): + if w_set.strategy is w_other.strategy: + strategy = w_set.strategy + storage = self._symmetric_difference_unwrapped(w_set, w_other) + else: + strategy = self.space.fromcache(ObjectSetStrategy) + storage = self._symmetric_difference_wrapped(w_set, w_other) + + w_set.strategy = strategy + w_set.sstorage = storage def intersect(self, w_set, w_other): if w_set.length() > w_other.length(): From noreply at buildbot.pypy.org Thu Nov 10 13:51:15 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:15 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored difference of setobjects Message-ID: <20111110125115.A7ACE8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49203:cc09dfc855c4 Date: 2011-10-04 10:32 +0200 http://bitbucket.org/pypy/pypy/changeset/cc09dfc855c4/ Log: refactored difference of setobjects diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -381,34 +381,22 @@ return False return True - def difference(self, w_set, w_other): - if not isinstance(w_other, W_BaseSetObject): - w_other = w_set._newobj(self.space, w_other) - - if (w_other.strategy is self.space.fromcache(ObjectSetStrategy) or - w_set.strategy is self.space.fromcache(ObjectSetStrategy)): - return self.difference_wrapped(w_set, w_other) - - if w_set.strategy is not w_other.strategy: - return w_set.copy() - - return self.difference_unwrapped(w_set, w_other) - - def difference_wrapped(self, w_set, w_other): - result = w_set._newobj(self.space, None) + def _difference_wrapped(self, w_set, w_other): + d_new = self.get_empty_dict() w_iter = self.space.iter(w_set) while True: try: w_item = self.space.next(w_iter) if not w_other.has_key(w_item): - result.add(w_item) + d_new[w_item] = None except OperationError, e: if not e.match(self.space, self.space.w_StopIteration): raise break; - return result + strategy = self.space.fromcache(ObjectSetStrategy) + return strategy.cast_to_void_star(d_new) - def difference_unwrapped(self, w_set, w_other): + def _difference_unwrapped(self, w_set, w_other): if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) iterator = self.cast_from_void_star(w_set.sstorage).iterkeys() @@ -417,16 +405,31 @@ for key in iterator: if key not in other_dict: result_dict[key] = None - result = w_set._newobj(self.space, None) - result.strategy = self - result.sstorage = self.cast_to_void_star(result_dict) - return result + return self.cast_to_void_star(result_dict) + + def _difference_base(self, w_set, w_other): + if not isinstance(w_other, W_BaseSetObject): + w_other = w_set._newobj(self.space, w_other) + + if w_set.strategy is w_other.strategy: + strategy = w_set.strategy + storage = self._difference_unwrapped(w_set, w_other) + else: + strategy = self.space.fromcache(ObjectSetStrategy) + storage = self._difference_wrapped(w_set, w_other) + return storage, strategy + + def difference(self, w_set, w_other): + #XXX return clone in certain cases: String- with IntStrategy or ANY with Empty + storage, strategy = self._difference_base(w_set, w_other) + w_newset = w_set.from_storage_and_strategy(storage, strategy) + return w_newset def difference_update(self, w_set, w_other): - #XXX this way we unnecessarily create a new set - result = self.difference(w_set, w_other) - w_set.strategy = result.strategy - w_set.sstorage = result.sstorage + #XXX do nothing in certain cases: String- with IntStrategy or ANY with Empty + storage, strategy = self._difference_base(w_set, w_other) + w_set.strategy = strategy + w_set.sstorage = storage def _symmetric_difference_unwrapped(self, w_set, w_other): d_new = self.get_empty_dict() @@ -455,7 +458,6 @@ return strategy.cast_to_void_star(newsetdata) def symmetric_difference(self, w_set, w_other): - #XXX if difference are only ints this wont return an IntSet if w_set.strategy is w_other.strategy: strategy = w_set.strategy storage = self._symmetric_difference_unwrapped(w_set, w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:51:16 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:16 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored intersection for sets Message-ID: <20111110125116.DAFEC8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49204:bb83301f7ae1 Date: 2011-10-04 13:40 +0200 http://bitbucket.org/pypy/pypy/changeset/bb83301f7ae1/ Log: refactored intersection for sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -477,33 +477,47 @@ w_set.strategy = strategy w_set.sstorage = storage + def _intersect_base(self, w_set, w_other): + if w_set.strategy is w_other.strategy: + strategy = w_set.strategy + storage = strategy._intersect_unwrapped(w_set, w_other) + else: + strategy = self.space.fromcache(ObjectSetStrategy) + storage = strategy._intersect_wrapped(w_set, w_other) + return storage, strategy + + def _intersect_wrapped(self, w_set, w_other): + result = self.get_empty_dict() + items = self.cast_from_void_star(w_set.sstorage).keys() + for key in items: + w_key = self.wrap(key) + if w_other.has_key(w_key): + result[w_key] = None + return self.cast_to_void_star(result) + + def _intersect_unwrapped(self, w_set, w_other): + result = self.get_empty_dict() + d_this = self.cast_from_void_star(w_set.sstorage) + d_other = self.cast_from_void_star(w_other.sstorage) + for key in d_this: + if key in d_other: + result[key] = None + return self.cast_to_void_star(result) + def intersect(self, w_set, w_other): if w_set.length() > w_other.length(): return w_other.intersect(w_set) - result = w_set._newobj(self.space, None) - items = self.cast_from_void_star(w_set.sstorage).keys() - #XXX do it without wrapping when strategies are equal - for key in items: - w_key = self.wrap(key) - if w_other.has_key(w_key): - result.add(w_key) - return result + storage, strategy = self._intersect_base(w_set, w_other) + return w_set.from_storage_and_strategy(storage, strategy) def intersect_update(self, w_set, w_other): if w_set.length() > w_other.length(): - return w_other.intersect(w_set) - - setdata = newset(self.space) - items = self.cast_from_void_star(w_set.sstorage).keys() - for key in items: - w_key = self.wrap(key) - if w_other.has_key(w_key): - setdata[w_key] = None - - # do not switch strategy here if other items match - w_set.strategy = strategy = self.space.fromcache(ObjectSetStrategy) - w_set.sstorage = strategy.cast_to_void_star(setdata) + storage, strategy = self._intersect_base(w_other, w_set) + else: + storage, strategy = self._intersect_base(w_set, w_other) + w_set.strategy = strategy + w_set.sstorage = storage return w_set def intersect_multiple(self, w_set, others_w): @@ -514,7 +528,6 @@ #XXX this creates setobject again result = result.intersect(w_other) else: - #XXX directly give w_other as argument to result2 result2 = w_set._newobj(self.space, None) for w_key in self.space.listview(w_other): if result.has_key(w_key): @@ -1084,7 +1097,6 @@ return def inplace_and__Set_Set(space, w_left, w_other): - #XXX why do we need to return here? return w_left.intersect_update(w_other) inplace_and__Set_Frozenset = inplace_and__Set_Set diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -493,6 +493,24 @@ assert s.intersection() == s assert s.intersection() is not s + def test_intersection_swap(self): + s1 = s3 = set([1,2,3,4,5]) + s2 = set([2,3,6,7]) + s1 &= s2 + assert s1 == set([2,3]) + assert s3 == set([2,3]) + + def test_intersection_generator(self): + def foo(): + for i in range(5): + yield i + + s1 = s2 = set([1,2,3,4,5,6]) + assert s1.intersection(foo()) == set([1,2,3,4]) + s1.intersection_update(foo()) + assert s1 == set([1,2,3,4]) + assert s2 == set([1,2,3,4]) + def test_difference(self): assert set([1,2,3]).difference(set([2,3,4])) == set([1]) assert set([1,2,3]).difference(frozenset([2,3,4])) == set([1]) From noreply at buildbot.pypy.org Thu Nov 10 13:51:18 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:18 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored symmetric_difference for sets Message-ID: <20111110125118.134168292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49205:3e0b4ff1c77a Date: 2011-10-04 13:53 +0200 http://bitbucket.org/pypy/pypy/changeset/3e0b4ff1c77a/ Log: refactored symmetric_difference for sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -457,23 +457,21 @@ strategy = self.space.fromcache(ObjectSetStrategy) return strategy.cast_to_void_star(newsetdata) - def symmetric_difference(self, w_set, w_other): + def _symmetric_difference_base(self, w_set, w_other): if w_set.strategy is w_other.strategy: strategy = w_set.strategy storage = self._symmetric_difference_unwrapped(w_set, w_other) else: strategy = self.space.fromcache(ObjectSetStrategy) storage = self._symmetric_difference_wrapped(w_set, w_other) + return storage, strategy + + def symmetric_difference(self, w_set, w_other): + storage, strategy = self._symmetric_difference_base(w_set, w_other) return w_set.from_storage_and_strategy(storage, strategy) def symmetric_difference_update(self, w_set, w_other): - if w_set.strategy is w_other.strategy: - strategy = w_set.strategy - storage = self._symmetric_difference_unwrapped(w_set, w_other) - else: - strategy = self.space.fromcache(ObjectSetStrategy) - storage = self._symmetric_difference_wrapped(w_set, w_other) - + storage, strategy = self._symmetric_difference_base(w_set, w_other) w_set.strategy = strategy w_set.sstorage = storage From noreply at buildbot.pypy.org Thu Nov 10 13:51:19 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:19 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: frozenset does not need to be copied Message-ID: <20111110125119.41CAB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49206:8592d5651c05 Date: 2011-10-10 11:10 +0200 http://bitbucket.org/pypy/pypy/changeset/8592d5651c05/ Log: frozenset does not need to be copied diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -28,7 +28,6 @@ def __init__(w_self, space, w_iterable=None): """Initialize the set by taking ownership of 'setdata'.""" w_self.space = space #XXX less memory without this indirection? - #XXX in case of ObjectStrategy we can reuse the setdata object set_strategy_and_setdata(space, w_self, w_iterable) def __repr__(w_self): @@ -314,10 +313,12 @@ w_set.switch_to_empty_strategy() def copy(self, w_set): - #XXX do not copy FrozenDict - d = self.cast_from_void_star(w_set.sstorage) strategy = w_set.strategy - storage = self.cast_to_void_star(d.copy()) + if isinstance(w_set, W_FrozensetObject): + storage = w_set.sstorage + else: + d = self.cast_from_void_star(w_set.sstorage) + storage = self.cast_to_void_star(d.copy()) clone = w_set.from_storage_and_strategy(storage, strategy) return clone From noreply at buildbot.pypy.org Thu Nov 10 13:51:20 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:20 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored issuperset (no wrapping when strategies are equal) Message-ID: <20111110125120.713988292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49207:80d1c500bc62 Date: 2011-10-10 13:32 +0200 http://bitbucket.org/pypy/pypy/changeset/80d1c500bc62/ Log: refactored issuperset (no wrapping when strategies are equal) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -421,13 +421,13 @@ return storage, strategy def difference(self, w_set, w_other): - #XXX return clone in certain cases: String- with IntStrategy or ANY with Empty + #XXX return clone for ANY with Empty (and later different strategies) storage, strategy = self._difference_base(w_set, w_other) w_newset = w_set.from_storage_and_strategy(storage, strategy) return w_newset def difference_update(self, w_set, w_other): - #XXX do nothing in certain cases: String- with IntStrategy or ANY with Empty + #XXX do nothing for ANY with Empty storage, strategy = self._difference_base(w_set, w_other) w_set.strategy = strategy w_set.sstorage = storage @@ -540,10 +540,16 @@ w_set.strategy = result.strategy w_set.sstorage = result.sstorage - def issuperset(self, w_set, w_other): - if w_other.length() == 0: - return True + def _issuperset_unwrapped(self, w_set, w_other): + d_set = self.cast_from_void_star(w_set.sstorage) + d_other = self.cast_from_void_star(w_other.sstorage) + for e in d_other.keys(): + if not e in d_set: + return False + return True + + def _issuperset_wrapped(self, w_set, w_other): w_iter = self.space.iter(w_other) while True: try: @@ -556,6 +562,15 @@ return True return True + def issuperset(self, w_set, w_other): + if w_other.length() == 0: + return True + + if w_set.strategy is w_other.strategy: + return self._issuperset_unwrapped(w_set, w_other) + else: + return self._issuperset_wrapped(w_set, w_other) + def isdisjoint(self, w_set, w_other): if w_other.length() == 0: return True From noreply at buildbot.pypy.org Thu Nov 10 13:51:21 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:21 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: refactored isdisjoint Message-ID: <20111110125121.A62208292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49208:97bf72254221 Date: 2011-10-10 13:54 +0200 http://bitbucket.org/pypy/pypy/changeset/97bf72254221/ Log: refactored isdisjoint diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -571,15 +571,28 @@ else: return self._issuperset_wrapped(w_set, w_other) + def _isdisjoint_unwrapped(self, w_set, w_other): + d_set = self.cast_from_void_star(w_set.sstorage) + d_other = self.cast_from_void_star(w_other.sstorage) + for key in d_set: + if key in d_other: + return False + return True + def isdisjoint(self, w_set, w_other): if w_other.length() == 0: return True if w_set.length() > w_other.length(): return w_other.isdisjoint(w_set) + if w_set.strategy is w_other.strategy: + return self._isdisjoint_unwrapped(w_set, w_other) + else: + return self._isdisjoint_wrapped(w_set, w_other) + + def _isdisjoint_wrapped(w_set, w_other): d = self.cast_from_void_star(w_set.sstorage) for key in d: - #XXX no need to wrap, if strategies are equal if w_other.has_key(self.wrap(key)): return False return True From noreply at buildbot.pypy.org Thu Nov 10 13:51:22 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 10 Nov 2011 13:51:22 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: review code. add plenty of XXXs Message-ID: <20111110125122.E6BE38292E@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: set-strategies Changeset: r49209:149e5a639fa8 Date: 2011-10-10 17:05 +0200 http://bitbucket.org/pypy/pypy/changeset/149e5a639fa8/ Log: review code. add plenty of XXXs diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -27,7 +27,7 @@ def __init__(w_self, space, w_iterable=None): """Initialize the set by taking ownership of 'setdata'.""" - w_self.space = space #XXX less memory without this indirection? + w_self.space = space set_strategy_and_setdata(space, w_self, w_iterable) def __repr__(w_self): @@ -36,7 +36,6 @@ return "<%s(%s)>" % (w_self.__class__.__name__, ', '.join(reprlist)) def from_storage_and_strategy(w_self, storage, strategy): - objtype = type(w_self) obj = w_self._newobj(w_self.space, None) assert isinstance(obj, W_BaseSetObject) obj.strategy = strategy @@ -63,6 +62,10 @@ # _____________ strategy methods ________________ + # XXX add docstrings to all the strategy methods + # particularly, what are all the w_other arguments? any wrapped object? or + # only sets? + def clear(self): self.strategy.clear(self) @@ -75,9 +78,12 @@ def add(self, w_key): self.strategy.add(self, w_key) + # XXX this appears unused? kill it def discard(self, w_item): return self.strategy.discard(self, w_item) + # XXX rename to "remove", delitem is the name for the operation that does + # "del d[x]" which does not work on sets def delitem(self, w_item): return self.strategy.delitem(self, w_item) @@ -87,6 +93,7 @@ def get_storage_copy(self): return self.strategy.get_storage_copy(self) + # XXX use this as little as possible, as it is really inefficient def getkeys(self): return self.strategy.getkeys(self) @@ -177,6 +184,7 @@ class EmptySetStrategy(SetStrategy): + # XXX rename everywhere to erase and unerase cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("empty") cast_to_void_star = staticmethod(cast_to_void_star) cast_from_void_star = staticmethod(cast_from_void_star) @@ -212,11 +220,11 @@ def add(self, w_set, w_key): if type(w_key) is W_IntObject: - w_set.strategy = self.space.fromcache(IntegerSetStrategy) + strategy = self.space.fromcache(IntegerSetStrategy) else: - w_set.strategy = self.space.fromcache(ObjectSetStrategy) - - w_set.sstorage = w_set.strategy.get_empty_storage() + strategy = self.space.fromcache(ObjectSetStrategy) + w_set.strategy = strategy + w_set.sstorage = strategy.get_empty_storage() w_set.add(w_key) def delitem(self, w_set, w_item): @@ -238,11 +246,17 @@ return False def equals(self, w_set, w_other): + # XXX can't this be written as w_other.strategy is self? if w_other.strategy is self.space.fromcache(EmptySetStrategy): return True + # XXX let's not enforce the use of the EmptySetStrategy for empty sets + # similar to what we did for lists return False def difference(self, w_set, w_other): + # XXX what is w_other here? a set or any wrapped object? + # if a set, the following line is unnecessary (sets contain only + # hashable objects). self.check_for_unhashable_objects(w_other) return w_set.copy() @@ -270,6 +284,7 @@ def issuperset(self, w_set, w_other): if (isinstance(w_other, W_BaseSetObject) and + # XXX can't this be written as w_other.strategy is self? w_other.strategy is self.space.fromcache(EmptySetStrategy)): return True elif len(self.space.unpackiterable(w_other)) == 0: @@ -284,6 +299,7 @@ w_set.sstorage = w_other.get_storage_copy() def update(self, w_set, w_other): + # XXX wouldn't it be faster to just copy the storage of w_other into self? w_set.switch_to_object_strategy(self.space) w_set.update(w_other) @@ -297,6 +313,9 @@ class AbstractUnwrappedSetStrategy(object): _mixin_ = True + # XXX add similar abstract methods for all the methods the concrete + # subclasses of AbstractUnwrappedSetStrategy need to implement. add + # docstrings too. def get_empty_storage(self): raise NotImplementedError @@ -334,6 +353,8 @@ from pypy.objspace.std.dictmultiobject import _never_equal_to_string d = self.cast_from_void_star(w_set.sstorage) if not self.is_correct_type(w_item): + # XXX I don't understand the next line. shouldn't it be "never + # equal to int" in the int strategy case? if _never_equal_to_string(self.space, self.space.type(w_item)): return False w_set.switch_to_object_strategy(self.space) @@ -366,6 +387,8 @@ def has_key(self, w_set, w_key): from pypy.objspace.std.dictmultiobject import _never_equal_to_string if not self.is_correct_type(w_key): + # XXX I don't understand the next line. shouldn't it be "never + # equal to int" in the int strategy case? if not _never_equal_to_string(self.space, self.space.type(w_key)): w_set.switch_to_object_strategy(self.space) return w_set.has_key(w_key) @@ -384,6 +407,10 @@ def _difference_wrapped(self, w_set, w_other): d_new = self.get_empty_dict() + # XXX why not just: + # for obj in self.cast_from_void_star(w_set.sstorage): + # w_item = self.wrap(obj) + # ... w_iter = self.space.iter(w_set) while True: try: @@ -398,6 +425,8 @@ return strategy.cast_to_void_star(d_new) def _difference_unwrapped(self, w_set, w_other): + # XXX this line should not be needed + # the caller (_difference_base) already checks for this! if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) iterator = self.cast_from_void_star(w_set.sstorage).iterkeys() @@ -412,6 +441,7 @@ if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) + # XXX shouldn't w_set.strategy be simply "self"? if w_set.strategy is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) @@ -448,10 +478,11 @@ def _symmetric_difference_wrapped(self, w_set, w_other): newsetdata = newset(self.space) + # XXX don't use getkeys for the next line, you know how to iterate over it for w_key in w_set.getkeys(): if not w_other.has_key(w_key): newsetdata[w_key] = None - for w_key in w_other.getkeys(): + for w_key in w_other.getkeys(): # XXX here it is fine if not w_set.has_key(w_key): newsetdata[w_key] = None @@ -459,6 +490,7 @@ return strategy.cast_to_void_star(newsetdata) def _symmetric_difference_base(self, w_set, w_other): + # shouldn't that be "if self is w_other.strategy"? if w_set.strategy is w_other.strategy: strategy = w_set.strategy storage = self._symmetric_difference_unwrapped(w_set, w_other) @@ -477,6 +509,7 @@ w_set.sstorage = storage def _intersect_base(self, w_set, w_other): + # XXX shouldn't this again be equivalent to self? if w_set.strategy is w_other.strategy: strategy = w_set.strategy storage = strategy._intersect_unwrapped(w_set, w_other) @@ -487,7 +520,9 @@ def _intersect_wrapped(self, w_set, w_other): result = self.get_empty_dict() - items = self.cast_from_void_star(w_set.sstorage).keys() + # XXX this is the correct way to iterate over w_set. please use this + # everywhere :-) + items = self.cast_from_void_star(w_set.sstorage).iterkeys() for key in items: w_key = self.wrap(key) if w_other.has_key(w_key): @@ -512,6 +547,8 @@ def intersect_update(self, w_set, w_other): if w_set.length() > w_other.length(): + # XXX this is not allowed! you must maintain the invariant that the + # firsts argument's is self. storage, strategy = self._intersect_base(w_other, w_set) else: storage, strategy = self._intersect_base(w_set, w_other) @@ -551,6 +588,8 @@ def _issuperset_wrapped(self, w_set, w_other): w_iter = self.space.iter(w_other) + # XXX this iteration is slow! it might be better to formulate + # everything in terms of issubset, to circumvent this problem. while True: try: w_item = self.space.next(w_iter) @@ -590,6 +629,8 @@ else: return self._isdisjoint_wrapped(w_set, w_other) + # XXX can you please order the functions XXX, _XXX_base, _XXX_unwrapped and + # _XXX_wrapped in a consistent way? def _isdisjoint_wrapped(w_set, w_other): d = self.cast_from_void_star(w_set.sstorage) for key in d: @@ -598,6 +639,9 @@ return True def update(self, w_set, w_other): + # XXX again, this is equivalent to self is self.space.fromcache(ObjectSetStrategy) + # this shows that the following condition is nonsense! you should + # instead overwrite update in ObjectSetStrategy and kill the if here if w_set.strategy is self.space.fromcache(ObjectSetStrategy): d_obj = self.cast_from_void_star(w_set.sstorage) other_w = w_other.getkeys() @@ -606,6 +650,7 @@ return elif w_set.strategy is w_other.strategy: + # XXX d_int is a sucky variable name, other should be d_other d_int = self.cast_from_void_star(w_set.sstorage) other = self.cast_from_void_star(w_other.sstorage) d_int.update(other) @@ -718,8 +763,8 @@ def next_entry(self): # note that this 'for' loop only runs once, at most - for w_key in self.iterator: - return self.space.wrap(w_key) + for key in self.iterator: + return self.space.wrap(key) else: return None @@ -731,8 +776,8 @@ def next_entry(self): # note that this 'for' loop only runs once, at most - for key in self.iterator: - return key + for w_key in self.iterator: + return w_key else: return None @@ -780,8 +825,8 @@ w_set.sstorage = w_iterable.get_storage_copy() return - if not isinstance(w_iterable, list): - w_iterable = space.listview(w_iterable) + if not isinstance(w_iterable, list): # XXX this cannot happen, a wrapped object is never a list + w_iterable = space.listview(w_iterable) # XXX this should be called iterable_w, because it is an unwrapped list if len(w_iterable) == 0: w_set.strategy = space.fromcache(EmptySetStrategy) @@ -821,6 +866,7 @@ # helper functions for set operation on dicts +# XXX are these still needed? def _symmetric_difference_dict(space, ld, rd): result = newset(space) for w_key in ld: @@ -984,8 +1030,8 @@ frozenset_issubset__Frozenset_Frozenset = set_issubset__Set_Set def set_issubset__Set_ANY(space, w_left, w_other): - if space.is_w(w_left, w_other): - return space.w_True + # not checking whether w_left is w_other here, because if that were the + # case the more precise multimethod would have applied. w_other_as_set = w_left._newobj(space, w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:51:24 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:24 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: w_iterable must never be a list Message-ID: <20111110125124.23E348292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49210:029e6898fbbb Date: 2011-10-11 13:48 +0200 http://bitbucket.org/pypy/pypy/changeset/029e6898fbbb/ Log: w_iterable must never be a list diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -233,7 +233,7 @@ return W_ComplexObject(x.real, x.imag) if isinstance(x, set): - res = W_SetObject(self, [self.wrap(item) for item in x]) + res = W_SetObject(self, self.newlist([self.wrap(item) for item in x])) return res if isinstance(x, frozenset): diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -482,7 +482,7 @@ for w_key in w_set.getkeys(): if not w_other.has_key(w_key): newsetdata[w_key] = None - for w_key in w_other.getkeys(): # XXX here it is fine + for w_key in w_other.getkeys(): # XXX use set iterator if not w_set.has_key(w_key): newsetdata[w_key] = None @@ -549,6 +549,7 @@ if w_set.length() > w_other.length(): # XXX this is not allowed! you must maintain the invariant that the # firsts argument's is self. + # call w_other.intersect() and apply storage of returning set to w_set storage, strategy = self._intersect_base(w_other, w_set) else: storage, strategy = self._intersect_base(w_set, w_other) @@ -557,11 +558,13 @@ return w_set def intersect_multiple(self, w_set, others_w): + #XXX find smarter implementations result = w_set for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only #XXX this creates setobject again + # create copy and use update result = result.intersect(w_other) else: result2 = w_set._newobj(self.space, None) @@ -814,7 +817,6 @@ def set_strategy_and_setdata(space, w_set, w_iterable): from pypy.objspace.std.intobject import W_IntObject - if w_iterable is None : w_set.strategy = space.fromcache(EmptySetStrategy) w_set.sstorage = w_set.strategy.get_empty_storage() @@ -825,28 +827,24 @@ w_set.sstorage = w_iterable.get_storage_copy() return - if not isinstance(w_iterable, list): # XXX this cannot happen, a wrapped object is never a list - w_iterable = space.listview(w_iterable) # XXX this should be called iterable_w, because it is an unwrapped list + iterable_w = space.listview(w_iterable) - if len(w_iterable) == 0: + if len(iterable_w) == 0: w_set.strategy = space.fromcache(EmptySetStrategy) w_set.sstorage = w_set.strategy.get_empty_storage() return # check for integers - iterator = iter(w_iterable) - while True: - try: - item_w = iterator.next() - if type(item_w) is not W_IntObject: - break; - except StopIteration: - w_set.strategy = space.fromcache(IntegerSetStrategy) - w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) - return + for w_item in iterable_w: + if type(w_item) is not W_IntObject: + break + else: + w_set.strategy = space.fromcache(IntegerSetStrategy) + w_set.sstorage = w_set.strategy.get_storage_from_list(iterable_w) + return w_set.strategy = space.fromcache(ObjectSetStrategy) - w_set.sstorage = w_set.strategy.get_storage_from_list(w_iterable) + w_set.sstorage = w_set.strategy.get_storage_from_list(iterable_w) def _initialize_set(space, w_obj, w_iterable=None): w_obj.clear() From noreply at buildbot.pypy.org Thu Nov 10 13:51:25 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:25 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: removed unused methods Message-ID: <20111110125125.5821A8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49211:88edfa6d5641 Date: 2011-10-11 13:51 +0200 http://bitbucket.org/pypy/pypy/changeset/88edfa6d5641/ Log: removed unused methods diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -862,31 +862,6 @@ else: return None -# helper functions for set operation on dicts - -# XXX are these still needed? -def _symmetric_difference_dict(space, ld, rd): - result = newset(space) - for w_key in ld: - if w_key not in rd: - result[w_key] = None - for w_key in rd: - if w_key not in ld: - result[w_key] = None - return result - -def _issubset_dict(ldict, rdict): - if len(ldict) > len(rdict): - return False - - for w_key in ldict: - if w_key not in rdict: - return False - return True - - -#end helper functions - def set_update__Set(space, w_left, others_w): """Update a set with the union of itself and another.""" for w_other in others_w: From noreply at buildbot.pypy.org Thu Nov 10 13:51:26 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:26 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: discard is not needed anymore Message-ID: <20111110125126.9050F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49212:d7e380dfbd4c Date: 2011-10-11 13:54 +0200 http://bitbucket.org/pypy/pypy/changeset/d7e380dfbd4c/ Log: discard is not needed anymore diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -78,10 +78,6 @@ def add(self, w_key): self.strategy.add(self, w_key) - # XXX this appears unused? kill it - def discard(self, w_item): - return self.strategy.discard(self, w_item) - # XXX rename to "remove", delitem is the name for the operation that does # "del d[x]" which does not work on sets def delitem(self, w_item): From noreply at buildbot.pypy.org Thu Nov 10 13:51:27 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:27 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: renamed delitem to remove Message-ID: <20111110125127.BA5BF8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49213:4e50f659baaf Date: 2011-10-11 13:55 +0200 http://bitbucket.org/pypy/pypy/changeset/4e50f659baaf/ Log: renamed delitem to remove diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -78,10 +78,8 @@ def add(self, w_key): self.strategy.add(self, w_key) - # XXX rename to "remove", delitem is the name for the operation that does - # "del d[x]" which does not work on sets - def delitem(self, w_item): - return self.strategy.delitem(self, w_item) + def remove(self, w_item): + return self.strategy.remove(self, w_item) def getdict_w(self): return self.strategy.getdict_w(self) @@ -223,7 +221,7 @@ w_set.sstorage = strategy.get_empty_storage() w_set.add(w_key) - def delitem(self, w_set, w_item): + def remove(self, w_set, w_item): return False def discard(self, w_set, w_item): @@ -345,7 +343,7 @@ w_set.switch_to_object_strategy(self.space) w_set.add(w_key) - def delitem(self, w_set, w_item): + def remove(self, w_set, w_item): from pypy.objspace.std.dictmultiobject import _never_equal_to_string d = self.cast_from_void_star(w_set.sstorage) if not self.is_correct_type(w_item): @@ -354,7 +352,7 @@ if _never_equal_to_string(self.space, self.space.type(w_item)): return False w_set.switch_to_object_strategy(self.space) - return w_set.delitem(w_item) + return w_set.remove(w_item) key = self.unwrap(w_item) try: @@ -918,7 +916,7 @@ else: for w_key in space.listview(w_other): space.hash(w_key) - w_left.delitem(w_key) + w_left.remove(w_key) def inplace_sub__Set_Set(space, w_left, w_other): w_left.difference_update(w_other) @@ -1073,7 +1071,7 @@ Returns True if successfully removed. """ try: - deleted = w_left.delitem(w_item) + deleted = w_left.remove(w_item) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -1081,7 +1079,7 @@ w_f = _convert_set_to_frozenset(space, w_item) if w_f is None: raise - deleted = w_left.delitem(w_f) + deleted = w_left.remove(w_f) if w_left.length() == 0: w_left.switch_to_empty_strategy() From noreply at buildbot.pypy.org Thu Nov 10 13:51:28 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:28 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: renamed cast_to/from_void_star to (un)erase Message-ID: <20111110125128.E44088292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49214:f37d7c77fc9e Date: 2011-10-11 13:57 +0200 http://bitbucket.org/pypy/pypy/changeset/f37d7c77fc9e/ Log: renamed cast_to/from_void_star to (un)erase diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -54,7 +54,7 @@ def switch_to_object_strategy(self, space): d = self.strategy.getdict_w(self) self.strategy = strategy = space.fromcache(ObjectSetStrategy) - self.sstorage = strategy.cast_to_void_star(d) + self.sstorage = strategy.erase(d) def switch_to_empty_strategy(self): self.strategy = self.space.fromcache(EmptySetStrategy) @@ -179,9 +179,9 @@ class EmptySetStrategy(SetStrategy): # XXX rename everywhere to erase and unerase - cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("empty") - cast_to_void_star = staticmethod(cast_to_void_star) - cast_from_void_star = staticmethod(cast_from_void_star) + erase, unerase = rerased.new_erasing_pair("empty") + erase = staticmethod(erase) + unerase = staticmethod(unerase) def check_for_unhashable_objects(self, w_iterable): w_iterator = self.space.iter(w_iterable) @@ -195,7 +195,7 @@ break def get_empty_storage(self): - return self.cast_to_void_star(None) + return self.erase(None) def is_correct_type(self, w_key): return False @@ -208,7 +208,7 @@ def copy(self, w_set): strategy = w_set.strategy - storage = self.cast_to_void_star(None) + storage = self.erase(None) clone = w_set.from_storage_and_strategy(storage, strategy) return clone @@ -317,10 +317,10 @@ setdata = self.get_empty_dict() for w_item in list_w: setdata[self.unwrap(w_item)] = None - return self.cast_to_void_star(setdata) + return self.erase(setdata) def length(self, w_set): - return len(self.cast_from_void_star(w_set.sstorage)) + return len(self.unerase(w_set.sstorage)) def clear(self, w_set): w_set.switch_to_empty_strategy() @@ -330,14 +330,14 @@ if isinstance(w_set, W_FrozensetObject): storage = w_set.sstorage else: - d = self.cast_from_void_star(w_set.sstorage) - storage = self.cast_to_void_star(d.copy()) + d = self.unerase(w_set.sstorage) + storage = self.erase(d.copy()) clone = w_set.from_storage_and_strategy(storage, strategy) return clone def add(self, w_set, w_key): if self.is_correct_type(w_key): - d = self.cast_from_void_star(w_set.sstorage) + d = self.unerase(w_set.sstorage) d[self.unwrap(w_key)] = None else: w_set.switch_to_object_strategy(self.space) @@ -345,7 +345,7 @@ def remove(self, w_set, w_item): from pypy.objspace.std.dictmultiobject import _never_equal_to_string - d = self.cast_from_void_star(w_set.sstorage) + d = self.unerase(w_set.sstorage) if not self.is_correct_type(w_item): # XXX I don't understand the next line. shouldn't it be "never # equal to int" in the int strategy case? @@ -363,18 +363,18 @@ def getdict_w(self, w_set): result = newset(self.space) - keys = self.cast_from_void_star(w_set.sstorage).keys() + keys = self.unerase(w_set.sstorage).keys() for key in keys: result[self.wrap(key)] = None return result def get_storage_copy(self, w_set): - d = self.cast_from_void_star(w_set.sstorage) - copy = self.cast_to_void_star(d.copy()) + d = self.unerase(w_set.sstorage) + copy = self.erase(d.copy()) return copy def getkeys(self, w_set): - keys = self.cast_from_void_star(w_set.sstorage).keys() + keys = self.unerase(w_set.sstorage).keys() keys_w = [self.wrap(key) for key in keys] return keys_w @@ -387,13 +387,13 @@ w_set.switch_to_object_strategy(self.space) return w_set.has_key(w_key) return False - d = self.cast_from_void_star(w_set.sstorage) + d = self.unerase(w_set.sstorage) return self.unwrap(w_key) in d def equals(self, w_set, w_other): if w_set.length() != w_other.length(): return False - items = self.cast_from_void_star(w_set.sstorage).keys() + items = self.unerase(w_set.sstorage).keys() for key in items: if not w_other.has_key(self.wrap(key)): return False @@ -402,7 +402,7 @@ def _difference_wrapped(self, w_set, w_other): d_new = self.get_empty_dict() # XXX why not just: - # for obj in self.cast_from_void_star(w_set.sstorage): + # for obj in self.unerase(w_set.sstorage): # w_item = self.wrap(obj) # ... w_iter = self.space.iter(w_set) @@ -416,20 +416,20 @@ raise break; strategy = self.space.fromcache(ObjectSetStrategy) - return strategy.cast_to_void_star(d_new) + return strategy.erase(d_new) def _difference_unwrapped(self, w_set, w_other): # XXX this line should not be needed # the caller (_difference_base) already checks for this! if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) - iterator = self.cast_from_void_star(w_set.sstorage).iterkeys() - other_dict = self.cast_from_void_star(w_other.sstorage) + iterator = self.unerase(w_set.sstorage).iterkeys() + other_dict = self.unerase(w_other.sstorage) result_dict = self.get_empty_dict() for key in iterator: if key not in other_dict: result_dict[key] = None - return self.cast_to_void_star(result_dict) + return self.erase(result_dict) def _difference_base(self, w_set, w_other): if not isinstance(w_other, W_BaseSetObject): @@ -458,8 +458,8 @@ def _symmetric_difference_unwrapped(self, w_set, w_other): d_new = self.get_empty_dict() - d_this = self.cast_from_void_star(w_set.sstorage) - d_other = self.cast_from_void_star(w_other.sstorage) + d_this = self.unerase(w_set.sstorage) + d_other = self.unerase(w_other.sstorage) for key in d_other.keys(): if not key in d_this: d_new[key] = None @@ -467,7 +467,7 @@ if not key in d_other: d_new[key] = None - storage = self.cast_to_void_star(d_new) + storage = self.erase(d_new) return storage def _symmetric_difference_wrapped(self, w_set, w_other): @@ -481,7 +481,7 @@ newsetdata[w_key] = None strategy = self.space.fromcache(ObjectSetStrategy) - return strategy.cast_to_void_star(newsetdata) + return strategy.erase(newsetdata) def _symmetric_difference_base(self, w_set, w_other): # shouldn't that be "if self is w_other.strategy"? @@ -516,21 +516,21 @@ result = self.get_empty_dict() # XXX this is the correct way to iterate over w_set. please use this # everywhere :-) - items = self.cast_from_void_star(w_set.sstorage).iterkeys() + items = self.unerase(w_set.sstorage).iterkeys() for key in items: w_key = self.wrap(key) if w_other.has_key(w_key): result[w_key] = None - return self.cast_to_void_star(result) + return self.erase(result) def _intersect_unwrapped(self, w_set, w_other): result = self.get_empty_dict() - d_this = self.cast_from_void_star(w_set.sstorage) - d_other = self.cast_from_void_star(w_other.sstorage) + d_this = self.unerase(w_set.sstorage) + d_other = self.unerase(w_other.sstorage) for key in d_this: if key in d_other: result[key] = None - return self.cast_to_void_star(result) + return self.erase(result) def intersect(self, w_set, w_other): if w_set.length() > w_other.length(): @@ -575,8 +575,8 @@ w_set.sstorage = result.sstorage def _issuperset_unwrapped(self, w_set, w_other): - d_set = self.cast_from_void_star(w_set.sstorage) - d_other = self.cast_from_void_star(w_other.sstorage) + d_set = self.unerase(w_set.sstorage) + d_other = self.unerase(w_other.sstorage) for e in d_other.keys(): if not e in d_set: @@ -608,8 +608,8 @@ return self._issuperset_wrapped(w_set, w_other) def _isdisjoint_unwrapped(self, w_set, w_other): - d_set = self.cast_from_void_star(w_set.sstorage) - d_other = self.cast_from_void_star(w_other.sstorage) + d_set = self.unerase(w_set.sstorage) + d_other = self.unerase(w_other.sstorage) for key in d_set: if key in d_other: return False @@ -629,7 +629,7 @@ # XXX can you please order the functions XXX, _XXX_base, _XXX_unwrapped and # _XXX_wrapped in a consistent way? def _isdisjoint_wrapped(w_set, w_other): - d = self.cast_from_void_star(w_set.sstorage) + d = self.unerase(w_set.sstorage) for key in d: if w_other.has_key(self.wrap(key)): return False @@ -640,7 +640,7 @@ # this shows that the following condition is nonsense! you should # instead overwrite update in ObjectSetStrategy and kill the if here if w_set.strategy is self.space.fromcache(ObjectSetStrategy): - d_obj = self.cast_from_void_star(w_set.sstorage) + d_obj = self.unerase(w_set.sstorage) other_w = w_other.getkeys() for w_key in other_w: d_obj[self.unwrap(w_key)] = None @@ -648,8 +648,8 @@ elif w_set.strategy is w_other.strategy: # XXX d_int is a sucky variable name, other should be d_other - d_int = self.cast_from_void_star(w_set.sstorage) - other = self.cast_from_void_star(w_other.sstorage) + d_int = self.unerase(w_set.sstorage) + other = self.unerase(w_other.sstorage) d_int.update(other) return @@ -657,7 +657,7 @@ w_set.update(w_other) def popitem(self, w_set): - storage = self.cast_from_void_star(w_set.sstorage) + storage = self.unerase(w_set.sstorage) try: # this returns a tuple because internally sets are dicts result = storage.popitem() @@ -668,12 +668,12 @@ return self.wrap(result[0]) class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): - cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("integer") - cast_to_void_star = staticmethod(cast_to_void_star) - cast_from_void_star = staticmethod(cast_from_void_star) + erase, unerase = rerased.new_erasing_pair("integer") + erase = staticmethod(erase) + unerase = staticmethod(unerase) def get_empty_storage(self): - return self.cast_to_void_star({}) + return self.erase({}) def get_empty_dict(self): return {} @@ -692,12 +692,12 @@ return IntegerIteratorImplementation(self.space, self, w_set) class ObjectSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): - cast_to_void_star, cast_from_void_star = rerased.new_erasing_pair("object") - cast_to_void_star = staticmethod(cast_to_void_star) - cast_from_void_star = staticmethod(cast_from_void_star) + erase, unerase = rerased.new_erasing_pair("object") + erase = staticmethod(erase) + unerase = staticmethod(unerase) def get_empty_storage(self): - return self.cast_to_void_star(self.get_empty_dict()) + return self.erase(self.get_empty_dict()) def get_empty_dict(self): return newset(self.space) @@ -755,7 +755,7 @@ #XXX same implementation in dictmultiobject on dictstrategy-branch def __init__(self, space, strategy, dictimplementation): IteratorImplementation.__init__(self, space, dictimplementation) - d = strategy.cast_from_void_star(dictimplementation.sstorage) + d = strategy.unerase(dictimplementation.sstorage) self.iterator = d.iterkeys() def next_entry(self): @@ -768,7 +768,7 @@ class RDictIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): IteratorImplementation.__init__(self, space, dictimplementation) - d = strategy.cast_from_void_star(dictimplementation.sstorage) + d = strategy.unerase(dictimplementation.sstorage) self.iterator = d.iterkeys() def next_entry(self): From noreply at buildbot.pypy.org Thu Nov 10 13:51:30 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:30 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: we do not enforce EmptySetStrategy for empty sets Message-ID: <20111110125130.17D408292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49215:2cedb79b6ae1 Date: 2011-10-11 14:02 +0200 http://bitbucket.org/pypy/pypy/changeset/2cedb79b6ae1/ Log: we do not enforce EmptySetStrategy for empty sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -178,7 +178,6 @@ class EmptySetStrategy(SetStrategy): - # XXX rename everywhere to erase and unerase erase, unerase = rerased.new_erasing_pair("empty") erase = staticmethod(erase) unerase = staticmethod(unerase) @@ -240,11 +239,8 @@ return False def equals(self, w_set, w_other): - # XXX can't this be written as w_other.strategy is self? - if w_other.strategy is self.space.fromcache(EmptySetStrategy): + if w_other.strategy is self or w_other.length() == 0: return True - # XXX let's not enforce the use of the EmptySetStrategy for empty sets - # similar to what we did for lists return False def difference(self, w_set, w_other): From noreply at buildbot.pypy.org Thu Nov 10 13:51:31 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:31 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: difference always expects w_other to be a set Message-ID: <20111110125131.405948292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49216:2bec064b8288 Date: 2011-10-11 14:27 +0200 http://bitbucket.org/pypy/pypy/changeset/2bec064b8288/ Log: difference always expects w_other to be a set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -244,10 +244,6 @@ return False def difference(self, w_set, w_other): - # XXX what is w_other here? a set or any wrapped object? - # if a set, the following line is unnecessary (sets contain only - # hashable objects). - self.check_for_unhashable_objects(w_other) return w_set.copy() def difference_update(self, w_set, w_other): @@ -896,9 +892,13 @@ def set_difference__Set(space, w_left, others_w): if len(others_w) == 0: return w_left.copy() - result = w_left + result = w_left.copy() for w_other in others_w: - result = result.difference(w_other) + if isinstance(w_other, W_BaseSetObject): + result.difference_update(w_other) + else: + w_other_as_set = w_left._newobj(space, w_other) + result.difference_update(w_other_as_set) return result frozenset_difference__Frozenset = set_difference__Set From noreply at buildbot.pypy.org Thu Nov 10 13:51:32 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:32 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: to be consistent create a set and call difference_update here too Message-ID: <20111110125132.68D6E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49217:5676c0591355 Date: 2011-10-11 14:30 +0200 http://bitbucket.org/pypy/pypy/changeset/5676c0591355/ Log: to be consistent create a set and call difference_update here too diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -910,9 +910,8 @@ # optimization only w_left.difference_update(w_other) else: - for w_key in space.listview(w_other): - space.hash(w_key) - w_left.remove(w_key) + w_other_as_set = w_left._newobj(space, w_other) + w_left.difference_update(w_other_as_set) def inplace_sub__Set_Set(space, w_left, w_other): w_left.difference_update(w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:51:33 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:33 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: reuse set_difference_update__Set Message-ID: <20111110125133.91DCB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49218:4854e943f993 Date: 2011-10-11 14:33 +0200 http://bitbucket.org/pypy/pypy/changeset/4854e943f993/ Log: reuse set_difference_update__Set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -893,12 +893,7 @@ if len(others_w) == 0: return w_left.copy() result = w_left.copy() - for w_other in others_w: - if isinstance(w_other, W_BaseSetObject): - result.difference_update(w_other) - else: - w_other_as_set = w_left._newobj(space, w_other) - result.difference_update(w_other_as_set) + set_difference_update__Set(space, result, others_w) return result frozenset_difference__Frozenset = set_difference__Set From noreply at buildbot.pypy.org Thu Nov 10 13:51:34 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:34 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: replaced w_left.strategy with self where possible Message-ID: <20111110125134.B98BB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49219:d8bdee84e4e2 Date: 2011-10-11 14:35 +0200 http://bitbucket.org/pypy/pypy/changeset/d8bdee84e4e2/ Log: replaced w_left.strategy with self where possible diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -270,8 +270,7 @@ def issuperset(self, w_set, w_other): if (isinstance(w_other, W_BaseSetObject) and - # XXX can't this be written as w_other.strategy is self? - w_other.strategy is self.space.fromcache(EmptySetStrategy)): + w_other.strategy is self): return True elif len(self.space.unpackiterable(w_other)) == 0: return True @@ -428,7 +427,7 @@ w_other = w_set._newobj(self.space, w_other) # XXX shouldn't w_set.strategy be simply "self"? - if w_set.strategy is w_other.strategy: + if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) else: @@ -477,7 +476,7 @@ def _symmetric_difference_base(self, w_set, w_other): # shouldn't that be "if self is w_other.strategy"? - if w_set.strategy is w_other.strategy: + if self is w_other.strategy: strategy = w_set.strategy storage = self._symmetric_difference_unwrapped(w_set, w_other) else: @@ -496,7 +495,7 @@ def _intersect_base(self, w_set, w_other): # XXX shouldn't this again be equivalent to self? - if w_set.strategy is w_other.strategy: + if self is w_other.strategy: strategy = w_set.strategy storage = strategy._intersect_unwrapped(w_set, w_other) else: From noreply at buildbot.pypy.org Thu Nov 10 13:51:35 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:35 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: when updating empty list simply copy storage and strategy from the other set Message-ID: <20111110125135.E0AB38292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49220:94d75f6d8f44 Date: 2011-10-11 14:40 +0200 http://bitbucket.org/pypy/pypy/changeset/94d75f6d8f44/ Log: when updating empty list simply copy storage and strategy from the other set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -284,9 +284,8 @@ w_set.sstorage = w_other.get_storage_copy() def update(self, w_set, w_other): - # XXX wouldn't it be faster to just copy the storage of w_other into self? - w_set.switch_to_object_strategy(self.space) - w_set.update(w_other) + w_set.strategy = w_other.strategy + w_set.storage = w_other.get_storage_copy() def iter(self, w_set): return EmptyIteratorImplementation(self.space, w_set) From noreply at buildbot.pypy.org Thu Nov 10 13:51:37 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:37 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: using a for loop is much simpler here Message-ID: <20111110125137.14CAB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49221:79858af1738c Date: 2011-10-11 14:45 +0200 http://bitbucket.org/pypy/pypy/changeset/79858af1738c/ Log: using a for loop is much simpler here diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -391,20 +391,11 @@ def _difference_wrapped(self, w_set, w_other): d_new = self.get_empty_dict() - # XXX why not just: - # for obj in self.unerase(w_set.sstorage): - # w_item = self.wrap(obj) - # ... - w_iter = self.space.iter(w_set) - while True: - try: - w_item = self.space.next(w_iter) - if not w_other.has_key(w_item): - d_new[w_item] = None - except OperationError, e: - if not e.match(self.space, self.space.w_StopIteration): - raise - break; + for obj in self.unerase(w_set.sstorage): + w_item = self.wrap(obj) + if not w_other.has_key(w_item): + d_new[w_item] = None + strategy = self.space.fromcache(ObjectSetStrategy) return strategy.erase(d_new) From noreply at buildbot.pypy.org Thu Nov 10 13:51:38 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:38 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added test and fix for update on empty sets Message-ID: <20111110125138.440338292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49222:64702787279f Date: 2011-10-11 14:49 +0200 http://bitbucket.org/pypy/pypy/changeset/64702787279f/ Log: added test and fix for update on empty sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -285,7 +285,7 @@ def update(self, w_set, w_other): w_set.strategy = w_other.strategy - w_set.storage = w_other.get_storage_copy() + w_set.sstorage = w_other.get_storage_copy() def iter(self, w_set): return EmptyIteratorImplementation(self.space, w_set) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -324,6 +324,9 @@ s1 = set('abc') s1.update('d', 'ef', frozenset('g')) assert s1 == set('abcdefg') + s1 = set() + s1.update(set('abcd')) + assert s1 == set('abcd') def test_recursive_repr(self): class A(object): From noreply at buildbot.pypy.org Thu Nov 10 13:51:39 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:39 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: this is already checked in _difference_base Message-ID: <20111110125139.6B1488292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49223:a07c98fa413a Date: 2011-10-11 14:50 +0200 http://bitbucket.org/pypy/pypy/changeset/a07c98fa413a/ Log: this is already checked in _difference_base diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -400,10 +400,6 @@ return strategy.erase(d_new) def _difference_unwrapped(self, w_set, w_other): - # XXX this line should not be needed - # the caller (_difference_base) already checks for this! - if not isinstance(w_other, W_BaseSetObject): - w_other = w_set._newobj(self.space, w_other) iterator = self.unerase(w_set.sstorage).iterkeys() other_dict = self.unerase(w_other.sstorage) result_dict = self.get_empty_dict() @@ -416,7 +412,6 @@ if not isinstance(w_other, W_BaseSetObject): w_other = w_set._newobj(self.space, w_other) - # XXX shouldn't w_set.strategy be simply "self"? if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:51:40 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:40 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: do not use getkeys as this is not very efficient Message-ID: <20111110125140.93AE08292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49224:195496c4dc01 Date: 2011-10-11 14:55 +0200 http://bitbucket.org/pypy/pypy/changeset/195496c4dc01/ Log: do not use getkeys as this is not very efficient diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -448,13 +448,21 @@ def _symmetric_difference_wrapped(self, w_set, w_other): newsetdata = newset(self.space) - # XXX don't use getkeys for the next line, you know how to iterate over it - for w_key in w_set.getkeys(): - if not w_other.has_key(w_key): - newsetdata[w_key] = None - for w_key in w_other.getkeys(): # XXX use set iterator - if not w_set.has_key(w_key): - newsetdata[w_key] = None + for obj in self.unerase(w_set.sstorage): + w_item = self.wrap(obj) + if not w_other.has_key(w_item): + newsetdata[w_item] = None + + w_iterator = self.space.iter(w_other) + while True: + try: + w_item = self.space.next(w_iterator) + if not w_set.has_key(w_item): + newsetdata[w_item] = None + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise + break strategy = self.space.fromcache(ObjectSetStrategy) return strategy.erase(newsetdata) From noreply at buildbot.pypy.org Thu Nov 10 13:51:41 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:41 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: maintain invariant that first argument is always self Message-ID: <20111110125141.BBE1A8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49225:a034685e1583 Date: 2011-10-11 15:00 +0200 http://bitbucket.org/pypy/pypy/changeset/a034685e1583/ Log: maintain invariant that first argument is always self diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -468,7 +468,6 @@ return strategy.erase(newsetdata) def _symmetric_difference_base(self, w_set, w_other): - # shouldn't that be "if self is w_other.strategy"? if self is w_other.strategy: strategy = w_set.strategy storage = self._symmetric_difference_unwrapped(w_set, w_other) @@ -487,7 +486,6 @@ w_set.sstorage = storage def _intersect_base(self, w_set, w_other): - # XXX shouldn't this again be equivalent to self? if self is w_other.strategy: strategy = w_set.strategy storage = strategy._intersect_unwrapped(w_set, w_other) @@ -498,8 +496,6 @@ def _intersect_wrapped(self, w_set, w_other): result = self.get_empty_dict() - # XXX this is the correct way to iterate over w_set. please use this - # everywhere :-) items = self.unerase(w_set.sstorage).iterkeys() for key in items: w_key = self.wrap(key) @@ -525,10 +521,9 @@ def intersect_update(self, w_set, w_other): if w_set.length() > w_other.length(): - # XXX this is not allowed! you must maintain the invariant that the - # firsts argument's is self. - # call w_other.intersect() and apply storage of returning set to w_set - storage, strategy = self._intersect_base(w_other, w_set) + w_intersection = w_other.intersect(w_set) + strategy = w_intersection.strategy + storage = w_intersection.sstorage else: storage, strategy = self._intersect_base(w_set, w_other) w_set.strategy = strategy From noreply at buildbot.pypy.org Thu Nov 10 13:51:42 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:42 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: use copy and intersect_update Message-ID: <20111110125142.E87788292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49226:f0d6cf7b30b1 Date: 2011-10-11 15:10 +0200 http://bitbucket.org/pypy/pypy/changeset/f0d6cf7b30b1/ Log: use copy and intersect_update diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -532,23 +532,17 @@ def intersect_multiple(self, w_set, others_w): #XXX find smarter implementations - result = w_set + result = w_set.copy() for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only - #XXX this creates setobject again - # create copy and use update - result = result.intersect(w_other) + result.intersect_update(w_other) else: - result2 = w_set._newobj(self.space, None) - for w_key in self.space.listview(w_other): - if result.has_key(w_key): - result2.add(w_key) - result = result2 + w_other_as_set = w_set._newobj(self.space, w_other) + result.intersect_update(w_other_as_set) return result def intersect_multiple_update(self, w_set, others_w): - #XXX faster withouth creating the setobject in intersect_multiple result = self.intersect_multiple(w_set, others_w) w_set.strategy = result.strategy w_set.sstorage = result.sstorage From noreply at buildbot.pypy.org Thu Nov 10 13:51:44 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:44 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: always use issubset instead of issuperset Message-ID: <20111110125144.2196C8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49227:c826689d38c6 Date: 2011-10-11 15:31 +0200 http://bitbucket.org/pypy/pypy/changeset/c826689d38c6/ Log: always use issubset instead of issuperset diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -115,9 +115,6 @@ def intersect_multiple_update(self, others_w): self.strategy.intersect_multiple_update(self, others_w) - def issuperset(self, w_other): - return self.strategy.issuperset(self, w_other) - def issubset(self, w_other): return self.strategy.issubset(self, w_other) @@ -268,13 +265,8 @@ def isdisjoint(self, w_set, w_other): return True - def issuperset(self, w_set, w_other): - if (isinstance(w_other, W_BaseSetObject) and - w_other.strategy is self): - return True - elif len(self.space.unpackiterable(w_other)) == 0: - return True - return False + def issubset(self, w_set, w_other): + return True def symmetric_difference(self, w_set, w_other): return w_other.copy() @@ -547,38 +539,28 @@ w_set.strategy = result.strategy w_set.sstorage = result.sstorage - def _issuperset_unwrapped(self, w_set, w_other): - d_set = self.unerase(w_set.sstorage) + def _issubset_unwrapped(self, w_set, w_other): d_other = self.unerase(w_other.sstorage) - - for e in d_other.keys(): - if not e in d_set: + for item in self.unerase(w_set.sstorage): + if not item in d_other: return False return True - def _issuperset_wrapped(self, w_set, w_other): - w_iter = self.space.iter(w_other) - # XXX this iteration is slow! it might be better to formulate - # everything in terms of issubset, to circumvent this problem. - while True: - try: - w_item = self.space.next(w_iter) - if not w_set.has_key(w_item): - return False - except OperationError, e: - if not e.match(self.space, self.space.w_StopIteration): - raise - return True + def _issubset_wrapped(self, w_set, w_other): + for obj in self.unerase(w_set.sstorage): + w_item = self.wrap(obj) + if not w_other.has_key(w_item): + return False return True - def issuperset(self, w_set, w_other): - if w_other.length() == 0: + def issubset(self, w_set, w_other): + if w_set.length() == 0: return True if w_set.strategy is w_other.strategy: - return self._issuperset_unwrapped(w_set, w_other) + return self._issubset_unwrapped(w_set, w_other) else: - return self._issuperset_wrapped(w_set, w_other) + return self._issubset_wrapped(w_set, w_other) def _isdisjoint_unwrapped(self, w_set, w_other): d_set = self.unerase(w_set.sstorage) @@ -961,7 +943,7 @@ return space.w_True if w_left.length() > w_other.length(): return space.w_False - return space.wrap(w_other.issuperset(w_left)) + return space.wrap(w_left.issubset(w_other)) set_issubset__Set_Frozenset = set_issubset__Set_Set frozenset_issubset__Frozenset_Set = set_issubset__Set_Set @@ -975,7 +957,7 @@ if w_left.length() > w_other_as_set.length(): return space.w_False - return space.wrap(w_other_as_set.issuperset(w_left)) + return space.wrap(w_left.issubset(w_other_as_set)) frozenset_issubset__Frozenset_ANY = set_issubset__Set_ANY @@ -990,7 +972,7 @@ return space.w_True if w_left.length() < w_other.length(): return space.w_False - return space.wrap(w_left.issuperset(w_other)) + return space.wrap(w_other.issubset(w_left)) set_issuperset__Set_Frozenset = set_issuperset__Set_Set set_issuperset__Frozenset_Set = set_issuperset__Set_Set @@ -1004,7 +986,7 @@ if w_left.length() < w_other_as_set.length(): return space.w_False - return space.wrap(w_left.issuperset(w_other_as_set)) + return space.wrap(w_other_as_set.issubset(w_left)) frozenset_issuperset__Frozenset_ANY = set_issuperset__Set_ANY From noreply at buildbot.pypy.org Thu Nov 10 13:51:45 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:45 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: keep the same order for similar methods Message-ID: <20111110125145.493688292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49228:7e7690516d69 Date: 2011-10-11 15:32 +0200 http://bitbucket.org/pypy/pypy/changeset/7e7690516d69/ Log: keep the same order for similar methods diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -570,6 +570,13 @@ return False return True + def _isdisjoint_wrapped(w_set, w_other): + d = self.unerase(w_set.sstorage) + for key in d: + if w_other.has_key(self.wrap(key)): + return False + return True + def isdisjoint(self, w_set, w_other): if w_other.length() == 0: return True @@ -581,15 +588,6 @@ else: return self._isdisjoint_wrapped(w_set, w_other) - # XXX can you please order the functions XXX, _XXX_base, _XXX_unwrapped and - # _XXX_wrapped in a consistent way? - def _isdisjoint_wrapped(w_set, w_other): - d = self.unerase(w_set.sstorage) - for key in d: - if w_other.has_key(self.wrap(key)): - return False - return True - def update(self, w_set, w_other): # XXX again, this is equivalent to self is self.space.fromcache(ObjectSetStrategy) # this shows that the following condition is nonsense! you should From noreply at buildbot.pypy.org Thu Nov 10 13:51:46 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:46 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: move objectstrategy case to ObjectSetStrategy Message-ID: <20111110125146.71E798292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49229:8be9bb5879b8 Date: 2011-10-11 15:38 +0200 http://bitbucket.org/pypy/pypy/changeset/8be9bb5879b8/ Log: move objectstrategy case to ObjectSetStrategy diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -589,17 +589,8 @@ return self._isdisjoint_wrapped(w_set, w_other) def update(self, w_set, w_other): - # XXX again, this is equivalent to self is self.space.fromcache(ObjectSetStrategy) - # this shows that the following condition is nonsense! you should - # instead overwrite update in ObjectSetStrategy and kill the if here - if w_set.strategy is self.space.fromcache(ObjectSetStrategy): - d_obj = self.unerase(w_set.sstorage) - other_w = w_other.getkeys() - for w_key in other_w: - d_obj[self.unwrap(w_key)] = None - return - elif w_set.strategy is w_other.strategy: + if self is w_other.strategy: # XXX d_int is a sucky variable name, other should be d_other d_int = self.unerase(w_set.sstorage) other = self.unerase(w_other.sstorage) @@ -667,6 +658,12 @@ def iter(self, w_set): return RDictIteratorImplementation(self.space, self, w_set) + def update(self, w_set, w_other): + d_obj = self.unerase(w_set.sstorage) + other_w = w_other.getkeys() + for w_key in other_w: + d_obj[w_key] = None + class IteratorImplementation(object): def __init__(self, space, implementation): self.space = space From noreply at buildbot.pypy.org Thu Nov 10 13:51:47 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:47 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: give variables some meaningful names Message-ID: <20111110125147.99E238292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49230:4edc6447f846 Date: 2011-10-11 15:40 +0200 http://bitbucket.org/pypy/pypy/changeset/4edc6447f846/ Log: give variables some meaningful names diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -589,12 +589,10 @@ return self._isdisjoint_wrapped(w_set, w_other) def update(self, w_set, w_other): - if self is w_other.strategy: - # XXX d_int is a sucky variable name, other should be d_other - d_int = self.unerase(w_set.sstorage) - other = self.unerase(w_other.sstorage) - d_int.update(other) + d_set = self.unerase(w_set.sstorage) + d_other = self.unerase(w_other.sstorage) + d_set.update(d_other) return w_set.switch_to_object_strategy(self.space) From noreply at buildbot.pypy.org Thu Nov 10 13:51:48 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:48 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: replaced getkeys by using iterator Message-ID: <20111110125148.C0EC48292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49231:32d6410e50da Date: 2011-10-12 10:12 +0200 http://bitbucket.org/pypy/pypy/changeset/32d6410e50da/ Log: replaced getkeys by using iterator diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -658,9 +658,15 @@ def update(self, w_set, w_other): d_obj = self.unerase(w_set.sstorage) - other_w = w_other.getkeys() - for w_key in other_w: - d_obj[w_key] = None + w_iterator = self.space.iter(w_other) + while True: + try: + w_item = self.space.next(w_iterator) + d_obj[w_item] = None + except OperationError, e: + if not e.match(self.space, self.space.w_StopIteration): + raise + break class IteratorImplementation(object): def __init__(self, space, implementation): From noreply at buildbot.pypy.org Thu Nov 10 13:51:49 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:49 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: replaced getkeys in hash_FrozenSet with iterator Message-ID: <20111110125149.E8CD88292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49232:973765c2af6d Date: 2011-10-12 14:14 +0200 http://bitbucket.org/pypy/pypy/changeset/973765c2af6d/ Log: replaced getkeys in hash_FrozenSet with iterator diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1050,10 +1050,17 @@ return space.wrap(w_set.hash) hash = 1927868237 hash *= (w_set.length() + 1) - for w_item in w_set.getkeys(): - h = space.hash_w(w_item) - value = ((h ^ (h << 16) ^ 89869747) * multi) - hash = intmask(hash ^ value) + w_iterator = space.iter(w_set) + while True: + try: + w_item = space.next(w_iterator) + h = space.hash_w(w_item) + value = ((h ^ (h << 16) ^ 89869747) * multi) + hash = intmask(hash ^ value) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break hash = hash * 69069 + 907133923 if hash == 0: hash = 590923713 From noreply at buildbot.pypy.org Thu Nov 10 13:51:51 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:51 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: this comment won't be needed anymore Message-ID: <20111110125151.1B9888292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49233:d4f48513e645 Date: 2011-10-12 14:15 +0200 http://bitbucket.org/pypy/pypy/changeset/d4f48513e645/ Log: this comment won't be needed anymore diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1185,7 +1185,6 @@ len__Frozenset = len__Set def iter__Set(space, w_left): - #return iter(w_left.getkeys()) return W_SetIterObject(space, w_left.iter()) iter__Frozenset = iter__Set From noreply at buildbot.pypy.org Thu Nov 10 13:51:52 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:52 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added methods raising NotImplemented error Message-ID: <20111110125152.429A48292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49234:b3a217a2461b Date: 2011-10-12 14:38 +0200 http://bitbucket.org/pypy/pypy/changeset/b3a217a2461b/ Log: added methods raising NotImplemented error diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -87,7 +87,6 @@ def get_storage_copy(self): return self.strategy.get_storage_copy(self) - # XXX use this as little as possible, as it is really inefficient def getkeys(self): return self.strategy.getkeys(self) @@ -169,9 +168,88 @@ def __init__(self, space): self.space = space + def get_empty_dict(self): + raise NotImplementedError + + def get_empty_storage(self): + raise NotImplementedError + + def erase(self, storage): + raise NotImplementedError + + def unerase(self, storage): + raise NotImplementedError + + # __________________ methods called on W_SetObject _________________ + + def clear(self): + raise NotImplementedError + + def copy(self): + raise NotImplementedError + def length(self, w_set): raise NotImplementedError + def add(self, w_key): + raise NotImplementedError + + def remove(self, w_item): + raise NotImplementedError + + def getdict_w(self): + raise NotImplementedError + + def get_storage_copy(self): + raise NotImplementedError + + def getkeys(self): + raise NotImplementedError + + def difference(self, w_other): + raise NotImplementedError + + def difference_update(self, w_other): + raise NotImplementedError + + def symmetric_difference(self, w_other): + raise NotImplementedError + + def symmetric_difference_update(self, w_other): + raise NotImplementedError + + def intersect(self, w_other): + raise NotImplementedError + + def intersect_update(self, w_other): + raise NotImplementedError + + def intersect_multiple(self, others_w): + raise NotImplementedError + + def intersect_multiple_update(self, others_w): + raise NotImplementedError + + def issubset(self, w_other): + raise NotImplementedError + + def isdisjoint(self, w_other): + raise NotImplementedError + + def update(self, w_other): + raise NotImplementedError + + def has_key(self, w_key): + raise NotImplementedError + + def equals(self, w_other): + raise NotImplementedError + + def iter(self, w_set): + raise NotImplementedError + + def popitem(self): + raise NotImplementedError class EmptySetStrategy(SetStrategy): @@ -289,10 +367,16 @@ class AbstractUnwrappedSetStrategy(object): _mixin_ = True - # XXX add similar abstract methods for all the methods the concrete - # subclasses of AbstractUnwrappedSetStrategy need to implement. add + # XXX add # docstrings too. - def get_empty_storage(self): + + def is_correct_type(self, w_key): + raise NotImplementedError + + def unwrap(self, w_item): + raise NotImplementedError + + def wrap(self, item): raise NotImplementedError def get_storage_from_list(self, list_w): From noreply at buildbot.pypy.org Thu Nov 10 13:51:53 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:53 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added docstrings Message-ID: <20111110125153.6A46F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49235:52bb2aea8502 Date: 2011-10-12 15:51 +0200 http://bitbucket.org/pypy/pypy/changeset/52bb2aea8502/ Log: added docstrings diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -62,77 +62,97 @@ # _____________ strategy methods ________________ - # XXX add docstrings to all the strategy methods - # particularly, what are all the w_other arguments? any wrapped object? or - # only sets? - def clear(self): + """ Removes all elements from the set. """ self.strategy.clear(self) def copy(self): + """ Returns a clone of the set. """ return self.strategy.copy(self) def length(self): + """ Returns the number of items inside the set. """ return self.strategy.length(self) def add(self, w_key): + """ Adds an element to the set. The element must be wrapped. """ self.strategy.add(self, w_key) def remove(self, w_item): + """ Removes the given element from the set. Element must be wrapped. """ return self.strategy.remove(self, w_item) def getdict_w(self): + """ Returns a dict with all elements of the set. Needed only for switching to ObjectSetStrategy. """ return self.strategy.getdict_w(self) def get_storage_copy(self): + """ Returns a copy of the storage. Needed when we want to clone all elements from one set and + put them into another. """ return self.strategy.get_storage_copy(self) def getkeys(self): + """ Returns a list of all elements inside the set. Only used in __repr__. Use as less as possible.""" return self.strategy.getkeys(self) def difference(self, w_other): + """ Returns a set with all items that are in this set, but not in w_other. W_other must be a set.""" return self.strategy.difference(self, w_other) def difference_update(self, w_other): + """ As difference but overwrites the sets content with the result. """ return self.strategy.difference_update(self, w_other) def symmetric_difference(self, w_other): + """ Returns a set with all items that are either in this set or in w_other, but not in both. W_other must be a set. """ return self.strategy.symmetric_difference(self, w_other) def symmetric_difference_update(self, w_other): + """ As symmetric_difference but overwrites the content of the set with the result. """ return self.strategy.symmetric_difference_update(self, w_other) def intersect(self, w_other): + """ Returns a set with all items that exists in both sets, this set and in w_other. W_other must be a set. """ return self.strategy.intersect(self, w_other) def intersect_update(self, w_other): + """ Keeps only those elements found in both sets, removing all other elements. """ return self.strategy.intersect_update(self, w_other) def intersect_multiple(self, others_w): + """ Returns a new set of all elements that exist in all of the given iterables.""" return self.strategy.intersect_multiple(self, others_w) def intersect_multiple_update(self, others_w): + """ Same as intersect_multiple but overwrites this set with the result. """ self.strategy.intersect_multiple_update(self, others_w) def issubset(self, w_other): + """ Checks wether this set is a subset of w_other. W_other must be a set. """ return self.strategy.issubset(self, w_other) def isdisjoint(self, w_other): + """ Checks wether this set and the w_other are completly different, i.e. have no equal elements. """ return self.strategy.isdisjoint(self, w_other) def update(self, w_other): + """ Appends all elements from the given set to this set. """ self.strategy.update(self, w_other) def has_key(self, w_key): + """ Checks wether this set contains the given wrapped key.""" return self.strategy.has_key(self, w_key) def equals(self, w_other): + """ Checks wether this set and the given set are equal, i.e. contain the same elements. """ return self.strategy.equals(self, w_other) def iter(self): + """ Returns an iterator of the elements from this set. """ return self.strategy.iter(self) def popitem(self): + """ Removes an arbitrary element from the set. May raise KeyError if set is empty.""" return self.strategy.popitem(self) class W_SetObject(W_BaseSetObject): @@ -169,9 +189,11 @@ self.space = space def get_empty_dict(self): + """ Returns an empty dictionary depending on the strategy. Used to initalize a new storage. """ raise NotImplementedError def get_empty_storage(self): + """ Returns an empty storage (erased) object. Used to initialize an empty set.""" raise NotImplementedError def erase(self, storage): @@ -367,16 +389,16 @@ class AbstractUnwrappedSetStrategy(object): _mixin_ = True - # XXX add - # docstrings too. - def is_correct_type(self, w_key): + """ Checks wether the given wrapped key fits this strategy.""" raise NotImplementedError def unwrap(self, w_item): + """ Returns the unwrapped value of the given wrapped item.""" raise NotImplementedError def wrap(self, item): + """ Returns a wrapped version of the given unwrapped item. """ raise NotImplementedError def get_storage_from_list(self, list_w): From noreply at buildbot.pypy.org Thu Nov 10 13:51:54 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:54 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: replaced space.iterator with iterator implementation for sets Message-ID: <20111110125154.939A78292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49236:86a2b557f516 Date: 2011-10-12 16:49 +0200 http://bitbucket.org/pypy/pypy/changeset/86a2b557f516/ Log: replaced space.iterator with iterator implementation for sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -551,16 +551,13 @@ if not w_other.has_key(w_item): newsetdata[w_item] = None - w_iterator = self.space.iter(w_other) + w_iterator = w_other.iter() while True: - try: - w_item = self.space.next(w_iterator) - if not w_set.has_key(w_item): - newsetdata[w_item] = None - except OperationError, e: - if not e.match(self.space, self.space.w_StopIteration): - raise + w_item = w_iterator.next_entry() + if w_item is None: break + if not w_set.has_key(w_item): + newsetdata[w_item] = None strategy = self.space.fromcache(ObjectSetStrategy) return strategy.erase(newsetdata) @@ -764,15 +761,12 @@ def update(self, w_set, w_other): d_obj = self.unerase(w_set.sstorage) - w_iterator = self.space.iter(w_other) + w_iterator = w_other.iter() while True: - try: - w_item = self.space.next(w_iterator) - d_obj[w_item] = None - except OperationError, e: - if not e.match(self.space, self.space.w_StopIteration): - raise + w_item = w_iterator.next_entry() + if w_item is None: break + d_obj[w_item] = None class IteratorImplementation(object): def __init__(self, space, implementation): @@ -808,7 +802,7 @@ return 0 class EmptyIteratorImplementation(IteratorImplementation): - def next(self): + def next_entry(self): return None class IntegerIteratorImplementation(IteratorImplementation): @@ -1156,17 +1150,14 @@ return space.wrap(w_set.hash) hash = 1927868237 hash *= (w_set.length() + 1) - w_iterator = space.iter(w_set) + w_iterator = w_set.iter() while True: - try: - w_item = space.next(w_iterator) - h = space.hash_w(w_item) - value = ((h ^ (h << 16) ^ 89869747) * multi) - hash = intmask(hash ^ value) - except OperationError, e: - if not e.match(space, space.w_StopIteration): - raise + w_item = w_iterator.next_entry() + if w_item is None: break + h = space.hash_w(w_item) + value = ((h ^ (h << 16) ^ 89869747) * multi) + hash = intmask(hash ^ value) hash = hash * 69069 + 907133923 if hash == 0: hash = 590923713 From noreply at buildbot.pypy.org Thu Nov 10 13:51:55 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:51:55 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: _never_equal_to_string makes no sense here Message-ID: <20111110125155.BC02C8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49237:624230481d9a Date: 2011-10-14 11:01 +0200 http://bitbucket.org/pypy/pypy/changeset/624230481d9a/ Log: _never_equal_to_string makes no sense here diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -435,10 +435,7 @@ from pypy.objspace.std.dictmultiobject import _never_equal_to_string d = self.unerase(w_set.sstorage) if not self.is_correct_type(w_item): - # XXX I don't understand the next line. shouldn't it be "never - # equal to int" in the int strategy case? - if _never_equal_to_string(self.space, self.space.type(w_item)): - return False + #XXX check type of w_item and immediately return False in some cases w_set.switch_to_object_strategy(self.space) return w_set.remove(w_item) @@ -469,12 +466,9 @@ def has_key(self, w_set, w_key): from pypy.objspace.std.dictmultiobject import _never_equal_to_string if not self.is_correct_type(w_key): - # XXX I don't understand the next line. shouldn't it be "never - # equal to int" in the int strategy case? - if not _never_equal_to_string(self.space, self.space.type(w_key)): - w_set.switch_to_object_strategy(self.space) - return w_set.has_key(w_key) - return False + #XXX check type of w_item and immediately return False in some cases + w_set.switch_to_object_strategy(self.space) + return w_set.has_key(w_key) d = self.unerase(w_set.sstorage) return self.unwrap(w_key) in d From noreply at buildbot.pypy.org Thu Nov 10 13:52:03 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:03 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merge with default Message-ID: <20111110125203.500DB8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49238:a191ae82db20 Date: 2011-10-14 11:54 +0200 http://bitbucket.org/pypy/pypy/changeset/a191ae82db20/ Log: merge with default diff too long, truncating to 10000 out of 72296 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,1 +1,3 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 +b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,22 +37,22 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Antonio Cuni Amaury Forgeot d'Arc - Antonio Cuni Samuele Pedroni Michael Hudson Holger Krekel + Benjamin Peterson Christian Tismer - Benjamin Peterson + Hakan Ardo + Alex Gaynor Eric van Riet Paap - Anders Chrigström - Håkan Ardö + Anders Chrigstrom + David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer - Alex Gaynor - David Schneider - Aurelién Campeas + Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann @@ -63,16 +63,17 @@ Bartosz Skowron Jakub Gustak Guido Wesdorp + Daniel Roberts Adrien Di Mascio Laura Creighton Ludovic Aubry Niko Matsakis - Daniel Roberts Jason Creighton - Jacob Hallén + Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij + Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -83,9 +84,13 @@ Alexandre Fayolle Marius Gedminas Simon Burton + Justin Peel Jean-Paul Calderone John Witulski + Lukas Diekmann + holger krekel Wim Lavrijsen + Dario Bertini Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum @@ -97,15 +102,16 @@ Georg Brandl Gerald Klix Wanja Saatkamp + Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz - Dario Bertini David Malcolm Eugene Oden Henry Mason + Sven Hager Lukas Renggli + Ilya Osadchiy Guenter Jantzen - Ronny Pfannschmidt Bert Freudenberg Amit Regmi Ben Young @@ -122,8 +128,8 @@ Jared Grubb Karl Bartel Gabriel Lavoie + Victor Stinner Brian Dorsey - Victor Stinner Stuart Williams Toby Watson Antoine Pitrou @@ -134,19 +140,23 @@ Jonathan David Riehl Elmo Mäntynen Anders Qvist - Beatrice Düring + Beatrice During Alexander Sedov + Timo Paulssen + Corbin Simpson Vincent Legoll + Romain Guillebert Alan McIntyre - Romain Guillebert Alex Perry Jens-Uwe Mager + Simon Cross Dan Stromberg - Lukas Diekmann + Guillebert Romain Carl Meyer Pieter Zieschang Alejandro J. Cura Sylvain Thenault + Christoph Gerum Travis Francis Athougies Henrik Vendelbo Lutz Paelike @@ -157,6 +167,7 @@ Miguel de Val Borro Ignas Mikalajunas Artur Lisiecki + Philip Jenvey Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -165,27 +176,31 @@ Gustavo Niemeyer William Leslie Akira Li - Kristján Valur Jónsson + Kristjan Valur Jonsson Bobby Impollonia + Michael Hudson-Doyle Andrew Thompson Anders Sigfridsson + Floris Bruynooghe Jacek Generowicz Dan Colish - Sven Hager Zooko Wilcox-O Hearn + Dan Villiom Podlaski Christiansen Anders Hammarquist + Chris Lambacher Dinu Gherman Dan Colish + Brett Cannon Daniel Neuhäuser Michael Chermside Konrad Delong Anna Ravencroft Greg Price Armin Ronacher + Christian Muirhead Jim Baker - Philip Jenvey Rodrigo Araújo - Brett Cannon + Romain Guillebert Heinrich-Heine University, Germany Open End AB (formerly AB Strakt), Sweden diff --git a/ctypes_configure/configure.py b/ctypes_configure/configure.py --- a/ctypes_configure/configure.py +++ b/ctypes_configure/configure.py @@ -559,7 +559,9 @@ C_HEADER = """ #include #include /* for offsetof() */ -#include /* FreeBSD: for uint64_t */ +#ifndef _WIN32 +# include /* FreeBSD: for uint64_t */ +#endif void dump(char* key, int value) { printf("%s: %d\\n", key, value); diff --git a/ctypes_configure/stdoutcapture.py b/ctypes_configure/stdoutcapture.py --- a/ctypes_configure/stdoutcapture.py +++ b/ctypes_configure/stdoutcapture.py @@ -15,6 +15,15 @@ not hasattr(os, 'fdopen')): self.dummy = 1 else: + try: + self.tmpout = os.tmpfile() + if mixed_out_err: + self.tmperr = self.tmpout + else: + self.tmperr = os.tmpfile() + except OSError: # bah? on at least one Windows box + self.dummy = 1 + return self.dummy = 0 # make new stdout/stderr files if needed self.localoutfd = os.dup(1) @@ -29,11 +38,6 @@ sys.stderr = os.fdopen(self.localerrfd, 'w', 0) else: self.saved_stderr = None - self.tmpout = os.tmpfile() - if mixed_out_err: - self.tmperr = self.tmpout - else: - self.tmperr = os.tmpfile() os.dup2(self.tmpout.fileno(), 1) os.dup2(self.tmperr.fileno(), 2) diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -154,18 +154,18 @@ RegrTest('test_cmd.py'), RegrTest('test_cmd_line_script.py'), RegrTest('test_codeccallbacks.py', core=True), - RegrTest('test_codecencodings_cn.py'), - RegrTest('test_codecencodings_hk.py'), - RegrTest('test_codecencodings_jp.py'), - RegrTest('test_codecencodings_kr.py'), - RegrTest('test_codecencodings_tw.py'), + RegrTest('test_codecencodings_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_tw.py', usemodules='_multibytecodec'), - RegrTest('test_codecmaps_cn.py'), - RegrTest('test_codecmaps_hk.py'), - RegrTest('test_codecmaps_jp.py'), - RegrTest('test_codecmaps_kr.py'), - RegrTest('test_codecmaps_tw.py'), - RegrTest('test_codecs.py', core=True), + RegrTest('test_codecmaps_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_tw.py', usemodules='_multibytecodec'), + RegrTest('test_codecs.py', core=True, usemodules='_multibytecodec'), RegrTest('test_codeop.py', core=True), RegrTest('test_coercion.py', core=True), RegrTest('test_collections.py'), @@ -314,10 +314,10 @@ RegrTest('test_mmap.py'), RegrTest('test_module.py', core=True), RegrTest('test_modulefinder.py'), - RegrTest('test_multibytecodec.py'), + RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), @@ -359,7 +359,7 @@ RegrTest('test_property.py', core=True), RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), - RegrTest('test_pwd.py', skip=skip_win32), + RegrTest('test_pwd.py', usemodules="pwd", skip=skip_win32), RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -489,9 +489,12 @@ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI return CFunctionType -_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr) def cast(obj, typ): - return _cast(obj, obj, typ) + try: + c_void_p.from_param(obj) + except TypeError, e: + raise ArgumentError(str(e)) + return _cast_addr(obj, obj, typ) _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr) def string_at(ptr, size=-1): diff --git a/lib-python/modified-2.7/ctypes/util.py b/lib-python/modified-2.7/ctypes/util.py --- a/lib-python/modified-2.7/ctypes/util.py +++ b/lib-python/modified-2.7/ctypes/util.py @@ -72,8 +72,8 @@ return name if os.name == "posix" and sys.platform == "darwin": - from ctypes.macholib.dyld import dyld_find as _dyld_find def find_library(name): + from ctypes.macholib.dyld import dyld_find as _dyld_find possible = ['lib%s.dylib' % name, '%s.dylib' % name, '%s.framework/%s' % (name, name)] diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/modified-2.7/distutils/sysconfig_pypy.py --- a/lib-python/modified-2.7/distutils/sysconfig_pypy.py +++ b/lib-python/modified-2.7/distutils/sysconfig_pypy.py @@ -116,6 +116,12 @@ if compiler.compiler_type == "unix": compiler.compiler_so.extend(['-fPIC', '-Wimplicit']) compiler.shared_lib_extension = get_config_var('SO') + if "CFLAGS" in os.environ: + cflags = os.environ["CFLAGS"] + compiler.compiler.append(cflags) + compiler.compiler_so.append(cflags) + compiler.linker_so.append(cflags) + from sysconfig_cpython import ( parse_makefile, _variable_rx, expand_makefile_vars) diff --git a/lib-python/modified-2.7/distutils/unixccompiler.py b/lib-python/modified-2.7/distutils/unixccompiler.py --- a/lib-python/modified-2.7/distutils/unixccompiler.py +++ b/lib-python/modified-2.7/distutils/unixccompiler.py @@ -324,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. + + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. + version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. + pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. + """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/sqlite3/test/regression.py b/lib-python/modified-2.7/sqlite3/test/regression.py --- a/lib-python/modified-2.7/sqlite3/test/regression.py +++ b/lib-python/modified-2.7/sqlite3/test/regression.py @@ -274,6 +274,18 @@ cur.execute("UPDATE foo SET id = 3 WHERE id = 1") self.assertEqual(cur.description, None) + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/test/regrtest.py b/lib-python/modified-2.7/test/regrtest.py --- a/lib-python/modified-2.7/test/regrtest.py +++ b/lib-python/modified-2.7/test/regrtest.py @@ -1403,7 +1403,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb diff --git a/lib-python/modified-2.7/test/test_bz2.py b/lib-python/modified-2.7/test/test_bz2.py --- a/lib-python/modified-2.7/test/test_bz2.py +++ b/lib-python/modified-2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) diff --git a/lib-python/modified-2.7/test/test_fcntl.py b/lib-python/modified-2.7/test/test_fcntl.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/test/test_fcntl.py @@ -0,0 +1,108 @@ +"""Test program for the fcntl C module. + +OS/2+EMX doesn't support the file locking operations. + +""" +import os +import struct +import sys +import unittest +from test.test_support import (verbose, TESTFN, unlink, run_unittest, + import_module) + +# Skip test if no fnctl module. +fcntl = import_module('fcntl') + + +# TODO - Write tests for flock() and lockf(). + +def get_lockdata(): + if sys.platform.startswith('atheos'): + start_len = "qq" + else: + try: + os.O_LARGEFILE + except AttributeError: + start_len = "ll" + else: + start_len = "qq" + + if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', + 'Darwin1.2', 'darwin', + 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', + 'freebsd6', 'freebsd7', 'freebsd8', + 'bsdos2', 'bsdos3', 'bsdos4', + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): + if struct.calcsize('l') == 8: + off_t = 'l' + pid_t = 'i' + else: + off_t = 'lxxxx' + pid_t = 'l' + lockdata = struct.pack(off_t + off_t + pid_t + 'hh', 0, 0, 0, + fcntl.F_WRLCK, 0) + elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']: + lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0) + elif sys.platform in ['os2emx']: + lockdata = None + else: + lockdata = struct.pack('hh'+start_len+'hh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) + if lockdata: + if verbose: + print 'struct.pack: ', repr(lockdata) + return lockdata + +lockdata = get_lockdata() + + +class TestFcntl(unittest.TestCase): + + def setUp(self): + self.f = None + + def tearDown(self): + if self.f and not self.f.closed: + self.f.close() + unlink(TESTFN) + + def test_fcntl_fileno(self): + # the example from the library docs + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK) + if verbose: + print 'Status from fcntl with O_NONBLOCK: ', rv + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETLKW, lockdata) + if verbose: + print 'String from fcntl with F_SETLKW: ', repr(rv) + self.f.close() + + def test_fcntl_file_descriptor(self): + # again, but pass the file rather than numeric descriptor + self.f = open(TESTFN, 'w') + rv = fcntl.fcntl(self.f, fcntl.F_SETFL, os.O_NONBLOCK) + if sys.platform not in ['os2emx']: + rv = fcntl.fcntl(self.f, fcntl.F_SETLKW, lockdata) + self.f.close() + + def test_fcntl_64_bit(self): + # Issue #1309352: fcntl shouldn't fail when the third arg fits in a + # C 'long' but not in a C 'int'. + try: + cmd = fcntl.F_NOTIFY + # This flag is larger than 2**31 in 64-bit builds + flags = fcntl.DN_MULTISHOT + except AttributeError: + self.skipTest("F_NOTIFY or DN_MULTISHOT unavailable") + fd = os.open(os.path.dirname(os.path.abspath(TESTFN)), os.O_RDONLY) + try: + fcntl.fcntl(fd, cmd, flags) + finally: + os.close(fd) + + +def test_main(): + run_unittest(TestFcntl) + +if __name__ == '__main__': + test_main() diff --git a/lib-python/modified-2.7/test/test_multibytecodec.py b/lib-python/modified-2.7/test/test_multibytecodec.py --- a/lib-python/modified-2.7/test/test_multibytecodec.py +++ b/lib-python/modified-2.7/test/test_multibytecodec.py @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -966,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -978,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/modified-2.7/test/test_tarfile.py copy from lib-python/2.7/test/test_tarfile.py copy to lib-python/modified-2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/modified-2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. @@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -859,7 +871,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, "rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -938,6 +952,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1030,6 +1045,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1058,6 +1074,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1077,6 +1094,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1120,6 +1138,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1174,6 +1193,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1186,6 +1206,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1213,6 +1234,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1231,7 +1253,9 @@ def test_fileobj(self): self._create_testtar() - data = open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1257,7 +1281,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/modified-2.7/test/test_tempfile.py b/lib-python/modified-2.7/test/test_tempfile.py --- a/lib-python/modified-2.7/test/test_tempfile.py +++ b/lib-python/modified-2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1440 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + try: + response = meth(req, response) + except: + response.close() + raise + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! + meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. + + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. + raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? + + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -54,7 +54,8 @@ def get_ffi_argtype(self): if self._ffiargtype: return self._ffiargtype - return _shape_to_ffi_type(self._ffiargshape) + self._ffiargtype = _shape_to_ffi_type(self._ffiargshape) + return self._ffiargtype def _CData_output(self, resbuffer, base=None, index=-1): #assert isinstance(resbuffer, _rawffi.ArrayInstance) @@ -166,7 +167,8 @@ return tp._alignmentofinstances() def byref(cdata): - from ctypes import pointer + # "pointer" is imported at the end of this module to avoid circular + # imports return pointer(cdata) def cdata_from_address(self, address): @@ -224,5 +226,9 @@ 'Z' : _ffi.types.void_p, 'X' : _ffi.types.void_p, 'v' : _ffi.types.sshort, + '?' : _ffi.types.ubyte, } + +# used by "byref" +from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/function.py b/lib_pypy/_ctypes/function.py --- a/lib_pypy/_ctypes/function.py +++ b/lib_pypy/_ctypes/function.py @@ -78,8 +78,6 @@ _com_iid = None _is_fastpath = False - __restype_set = False - def _getargtypes(self): return self._argtypes_ @@ -93,13 +91,15 @@ raise TypeError( "item %d in _argtypes_ has no from_param method" % ( i + 1,)) - # - if all([hasattr(argtype, '_ffiargshape') for argtype in argtypes]): - fastpath_cls = make_fastpath_subclass(self.__class__) - fastpath_cls.enable_fastpath_maybe(self) self._argtypes_ = list(argtypes) + self._check_argtypes_for_fastpath() argtypes = property(_getargtypes, _setargtypes) + def _check_argtypes_for_fastpath(self): + if all([hasattr(argtype, '_ffiargshape') for argtype in self._argtypes_]): + fastpath_cls = make_fastpath_subclass(self.__class__) + fastpath_cls.enable_fastpath_maybe(self) + def _getparamflags(self): return self._paramflags @@ -149,7 +149,6 @@ return self._restype_ def _setrestype(self, restype): - self.__restype_set = True self._ptr = None if restype is int: from ctypes import c_int @@ -219,6 +218,7 @@ import ctypes restype = ctypes.c_int self._ptr = self._getfuncptr_fromaddress(self._argtypes_, restype) + self._check_argtypes_for_fastpath() return @@ -296,13 +296,12 @@ "This function takes %d argument%s (%s given)" % (len(self._argtypes_), plural, len(args))) - # check that arguments are convertible - ## XXX Not as long as ctypes.cast is a callback function with - ## py_object arguments... - ## self._convert_args(self._argtypes_, args, {}) - try: - res = self.callable(*args) + newargs = self._convert_args_for_callback(argtypes, args) + except (UnicodeError, TypeError, ValueError), e: + raise ArgumentError(str(e)) + try: + res = self.callable(*newargs) except: exc_info = sys.exc_info() traceback.print_tb(exc_info[2], file=sys.stderr) @@ -316,10 +315,6 @@ warnings.warn('C function without declared arguments called', RuntimeWarning, stacklevel=2) argtypes = [] - - if not self.__restype_set: - warnings.warn('C function without declared return type called', - RuntimeWarning, stacklevel=2) if self._com_index: from ctypes import cast, c_void_p, POINTER @@ -366,7 +361,10 @@ if self._flags_ & _rawffi.FUNCFLAG_USE_LASTERROR: set_last_error(_rawffi.get_last_error()) # - return self._build_result(self._restype_, result, newargs) + try: + return self._build_result(self._restype_, result, newargs) + finally: + funcptr.free_temp_buffers() def _do_errcheck(self, result, args): # The 'errcheck' protocol @@ -466,6 +464,19 @@ return cobj, cobj._to_ffi_param(), type(cobj) + def _convert_args_for_callback(self, argtypes, args): + assert len(argtypes) == len(args) + newargs = [] + for argtype, arg in zip(argtypes, args): + param = argtype.from_param(arg) + _type_ = getattr(argtype, '_type_', None) + if _type_ == 'P': # special-case for c_void_p + param = param._get_buffer_value() + elif self._is_primitive(argtype): + param = param.value + newargs.append(param) + return newargs + def _convert_args(self, argtypes, args, kwargs, marker=object()): newargs = [] outargs = [] @@ -556,6 +567,9 @@ newargtypes.append(newargtype) return keepalives, newargs, newargtypes, outargs + @staticmethod + def _is_primitive(argtype): + return argtype.__bases__[0] is _SimpleCData def _wrap_result(self, restype, result): """ @@ -564,7 +578,7 @@ """ # hack for performance: if restype is a "simple" primitive type, don't # allocate the buffer because it's going to be thrown away immediately - if restype.__bases__[0] is _SimpleCData and not restype._is_pointer_like(): + if self._is_primitive(restype) and not restype._is_pointer_like(): return result # shape = restype._ffishape @@ -680,7 +694,7 @@ try: result = self._call_funcptr(funcptr, *args) result = self._do_errcheck(result, args) - except (TypeError, ArgumentError): # XXX, should be FFITypeError + except (TypeError, ArgumentError, UnicodeDecodeError): assert self._slowpath_allowed return CFuncPtr.__call__(self, *args) return result diff --git a/lib_pypy/_ctypes/primitive.py b/lib_pypy/_ctypes/primitive.py --- a/lib_pypy/_ctypes/primitive.py +++ b/lib_pypy/_ctypes/primitive.py @@ -10,6 +10,8 @@ from _ctypes.builtin import ConvMode from _ctypes.array import Array from _ctypes.pointer import _Pointer, as_ffi_pointer +#from _ctypes.function import CFuncPtr # this import is moved at the bottom + # because else it's circular class NULL(object): pass @@ -86,7 +88,7 @@ return res if isinstance(value, Array): return value - if isinstance(value, _Pointer): + if isinstance(value, (_Pointer, CFuncPtr)): return cls.from_address(value._buffer.buffer) if isinstance(value, (int, long)): return cls(value) @@ -338,3 +340,5 @@ def __nonzero__(self): return self._buffer[0] not in (0, '\x00') + +from _ctypes.function import CFuncPtr diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -14,6 +14,15 @@ raise TypeError("Expected CData subclass, got %s" % (tp,)) if isinstance(tp, StructOrUnionMeta): tp._make_final() + if len(f) == 3: + if (not hasattr(tp, '_type_') + or not isinstance(tp._type_, str) + or tp._type_ not in "iIhHbBlL"): + #XXX: are those all types? + # we just dont get the type name + # in the interp levle thrown TypeError + # from rawffi if there are more + raise TypeError('bit fields not allowed for type ' + tp.__name__) all_fields = [] for cls in reversed(inspect.getmro(superclass)): @@ -34,34 +43,37 @@ for i, field in enumerate(all_fields): name = field[0] value = field[1] + is_bitfield = (len(field) == 3) fields[name] = Field(name, self._ffistruct.fieldoffset(name), self._ffistruct.fieldsize(name), - value, i) + value, i, is_bitfield) if anonymous_fields: resnames = [] for i, field in enumerate(all_fields): name = field[0] value = field[1] + is_bitfield = (len(field) == 3) startpos = self._ffistruct.fieldoffset(name) if name in anonymous_fields: for subname in value._names: resnames.append(subname) - relpos = startpos + value._fieldtypes[subname].offset - subvalue = value._fieldtypes[subname].ctype + subfield = getattr(value, subname) + relpos = startpos + subfield.offset + subvalue = subfield.ctype fields[subname] = Field(subname, relpos, subvalue._sizeofinstances(), - subvalue, i) + subvalue, i, is_bitfield) else: resnames.append(name) names = resnames self._names = names - self._fieldtypes = fields + self.__dict__.update(fields) class Field(object): - def __init__(self, name, offset, size, ctype, num): - for k in ('name', 'offset', 'size', 'ctype', 'num'): + def __init__(self, name, offset, size, ctype, num, is_bitfield): + for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): self.__dict__[k] = locals()[k] def __setattr__(self, name, value): @@ -71,6 +83,35 @@ return "" % (self.name, self.offset, self.size) + def __get__(self, obj, cls=None): + if obj is None: + return self + if self.is_bitfield: + # bitfield member, use direct access + return obj._buffer.__getattr__(self.name) + else: + fieldtype = self.ctype + offset = self.num + suba = obj._subarray(fieldtype, self.name) + return fieldtype._CData_output(suba, obj, offset) + + + def __set__(self, obj, value): + fieldtype = self.ctype + cobj = fieldtype.from_param(value) + if ensure_objects(cobj) is not None: + key = keepalive_key(self.num) + store_reference(obj, key, cobj._objects) + arg = cobj._get_buffer_value() + if fieldtype._fficompositesize is not None: + from ctypes import memmove + dest = obj._buffer.fieldaddress(self.name) + memmove(dest, arg, fieldtype._fficompositesize) + else: + obj._buffer.__setattr__(self.name, arg) + + + # ________________________________________________________________ def _set_shape(tp, rawfields, is_union=False): @@ -79,17 +120,12 @@ tp._ffiargshape = tp._ffishape = (tp._ffistruct, 1) tp._fficompositesize = tp._ffistruct.size -def struct_getattr(self, name): - if name not in ('_fields_', '_fieldtypes'): - if hasattr(self, '_fieldtypes') and name in self._fieldtypes: - return self._fieldtypes[name] - return _CDataMeta.__getattribute__(self, name) def struct_setattr(self, name, value): if name == '_fields_': if self.__dict__.get('_fields_', None) is not None: raise AttributeError("_fields_ is final") - if self in [v for k, v in value]: + if self in [f[1] for f in value]: raise AttributeError("Structure or union cannot contain itself") names_and_fields( self, @@ -127,14 +163,14 @@ if '_fields_' not in self.__dict__: self._fields_ = [] self._names = [] - self._fieldtypes = {} _set_shape(self, [], self._is_union) - __getattr__ = struct_getattr __setattr__ = struct_setattr def from_address(self, address): instance = StructOrUnion.__new__(self) + if isinstance(address, _rawffi.StructureInstance): + address = address.buffer instance.__dict__['_buffer'] = self._ffistruct.fromaddress(address) return instance @@ -200,40 +236,6 @@ A = _rawffi.Array(fieldtype._ffishape) return A.fromaddress(address, 1) - def __setattr__(self, name, value): - try: - field = self._fieldtypes[name] - except KeyError: - return _CData.__setattr__(self, name, value) - fieldtype = field.ctype - cobj = fieldtype.from_param(value) - if ensure_objects(cobj) is not None: - key = keepalive_key(field.num) - store_reference(self, key, cobj._objects) - arg = cobj._get_buffer_value() - if fieldtype._fficompositesize is not None: - from ctypes import memmove - dest = self._buffer.fieldaddress(name) - memmove(dest, arg, fieldtype._fficompositesize) - else: - self._buffer.__setattr__(name, arg) - - def __getattribute__(self, name): - if name == '_fieldtypes': - return _CData.__getattribute__(self, '_fieldtypes') - try: - field = self._fieldtypes[name] - except KeyError: - return _CData.__getattribute__(self, name) - if field.size >> 16: - # bitfield member, use direct access - return self._buffer.__getattr__(name) - else: - fieldtype = field.ctype - offset = field.num - suba = self._subarray(fieldtype, name) - return fieldtype._CData_output(suba, self, offset) - def _get_buffer_for_param(self): return self diff --git a/lib_pypy/_elementtree.py b/lib_pypy/_elementtree.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_elementtree.py @@ -0,0 +1,6 @@ +# Just use ElementTree. + +from xml.etree import ElementTree + +globals().update(ElementTree.__dict__) +del __all__ diff --git a/lib_pypy/_functools.py b/lib_pypy/_functools.py --- a/lib_pypy/_functools.py +++ b/lib_pypy/_functools.py @@ -14,10 +14,9 @@ raise TypeError("the first argument must be callable") self.func = func self.args = args - self.keywords = keywords + self.keywords = keywords or None def __call__(self, *fargs, **fkeywords): - newkeywords = self.keywords.copy() - newkeywords.update(fkeywords) - return self.func(*(self.args + fargs), **newkeywords) - + if self.keywords is not None: + fkeywords = dict(self.keywords, **fkeywords) + return self.func(*(self.args + fargs), **fkeywords) diff --git a/lib_pypy/_pypy_interact.py b/lib_pypy/_pypy_interact.py --- a/lib_pypy/_pypy_interact.py +++ b/lib_pypy/_pypy_interact.py @@ -56,6 +56,10 @@ prompt = getattr(sys, 'ps1', '>>> ') try: line = raw_input(prompt) + # Can be None if sys.stdin was redefined + encoding = getattr(sys.stdin, 'encoding', None) + if encoding and not isinstance(line, unicode): + line = line.decode(encoding) except EOFError: console.write("\n") break diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -24,6 +24,7 @@ from ctypes import c_void_p, c_int, c_double, c_int64, c_char_p, cdll from ctypes import POINTER, byref, string_at, CFUNCTYPE, cast from ctypes import sizeof, c_ssize_t +from collections import OrderedDict import datetime import sys import time @@ -274,6 +275,28 @@ def unicode_text_factory(x): return unicode(x, 'utf-8') + +class StatementCache(object): + def __init__(self, connection, maxcount): + self.connection = connection + self.maxcount = maxcount + self.cache = OrderedDict() + + def get(self, sql, cursor, row_factory): + try: + stat = self.cache[sql] + except KeyError: + stat = Statement(self.connection, sql) + self.cache[sql] = stat + if len(self.cache) > self.maxcount: + self.cache.popitem(0) + # + if stat.in_use: + stat = Statement(self.connection, sql) + stat.set_row_factory(row_factory) + return stat + + class Connection(object): def __init__(self, database, timeout=5.0, detect_types=0, isolation_level="", check_same_thread=True, factory=None, cached_statements=100): @@ -291,6 +314,7 @@ self.row_factory = None self._isolation_level = isolation_level self.detect_types = detect_types + self.statement_cache = StatementCache(self, cached_statements) self.cursors = [] @@ -399,7 +423,7 @@ cur = Cursor(self) if not isinstance(sql, (str, unicode)): raise Warning("SQL is of wrong type. Must be string or unicode.") - statement = Statement(cur, sql, self.row_factory) + statement = self.statement_cache.get(sql, cur, self.row_factory) return statement def _get_isolation_level(self): @@ -681,6 +705,8 @@ from sqlite3.dump import _iterdump return _iterdump(self) +DML, DQL, DDL = range(3) + class Cursor(object): def __init__(self, con): if not isinstance(con, Connection): @@ -708,12 +734,12 @@ if type(sql) is unicode: sql = sql.encode("utf-8") self._check_closed() - self.statement = Statement(self, sql, self.row_factory) + self.statement = self.connection.statement_cache.get(sql, self, self.row_factory) if self.connection._isolation_level is not None: - if self.statement.kind == "DDL": + if self.statement.kind == DDL: self.connection.commit() - elif self.statement.kind == "DML": + elif self.statement.kind == DML: self.connection._begin() self.statement.set_params(params) @@ -724,19 +750,18 @@ self.statement.reset() raise self.connection._get_exception(ret) - if self.statement.kind == "DQL": - if ret == SQLITE_ROW: - self.statement._build_row_cast_map() - self.statement._readahead() - else: - self.statement.item = None - self.statement.exhausted = True + if self.statement.kind == DQL and ret == SQLITE_ROW: + self.statement._build_row_cast_map() + self.statement._readahead(self) + else: + self.statement.item = None + self.statement.exhausted = True - if self.statement.kind in ("DML", "DDL"): + if self.statement.kind == DML or self.statement.kind == DDL: self.statement.reset() self.rowcount = -1 - if self.statement.kind == "DML": + if self.statement.kind == DML: self.rowcount = sqlite.sqlite3_changes(self.connection.db) return self @@ -747,8 +772,9 @@ if type(sql) is unicode: sql = sql.encode("utf-8") self._check_closed() - self.statement = Statement(self, sql, self.row_factory) - if self.statement.kind == "DML": + self.statement = self.connection.statement_cache.get(sql, self, self.row_factory) + + if self.statement.kind == DML: self.connection._begin() else: raise ProgrammingError, "executemany is only for DML statements" @@ -800,7 +826,7 @@ return self def __iter__(self): - return self.statement + return iter(self.fetchone, None) def _check_reset(self): if self.reset: @@ -817,7 +843,7 @@ return None try: - return self.statement.next() + return self.statement.next(self) except StopIteration: return None @@ -831,7 +857,7 @@ if size is None: size = self.arraysize lst = [] - for row in self.statement: + for row in self: lst.append(row) if len(lst) == size: break @@ -842,7 +868,7 @@ self._check_reset() if self.statement is None: return [] - return list(self.statement) + return list(self) def _getdescription(self): if self._description is None: @@ -872,39 +898,47 @@ lastrowid = property(_getlastrowid) class Statement(object): - def __init__(self, cur, sql, row_factory): + def __init__(self, connection, sql): self.statement = None if not isinstance(sql, str): raise ValueError, "sql must be a string" - self.con = cur.connection - self.cur = weakref.ref(cur) + self.con = connection self.sql = sql # DEBUG ONLY - self.row_factory = row_factory first_word = self._statement_kind = sql.lstrip().split(" ")[0].upper() if first_word in ("INSERT", "UPDATE", "DELETE", "REPLACE"): - self.kind = "DML" + self.kind = DML elif first_word in ("SELECT", "PRAGMA"): - self.kind = "DQL" + self.kind = DQL else: - self.kind = "DDL" + self.kind = DDL self.exhausted = False + self.in_use = False + # + # set by set_row_factory + self.row_factory = None self.statement = c_void_p() next_char = c_char_p() - ret = sqlite.sqlite3_prepare_v2(self.con.db, sql, -1, byref(self.statement), byref(next_char)) + sql_char = c_char_p(sql) + ret = sqlite.sqlite3_prepare_v2(self.con.db, sql_char, -1, byref(self.statement), byref(next_char)) if ret == SQLITE_OK and self.statement.value is None: # an empty statement, we work around that, as it's the least trouble ret = sqlite.sqlite3_prepare_v2(self.con.db, "select 42", -1, byref(self.statement), byref(next_char)) - self.kind = "DQL" + self.kind = DQL if ret != SQLITE_OK: raise self.con._get_exception(ret) self.con._remember_statement(self) if _check_remaining_sql(next_char.value): - raise Warning, "One and only one statement required" + raise Warning, "One and only one statement required: %r" % ( + next_char.value,) + # sql_char should remain alive until here self._build_row_cast_map() + def set_row_factory(self, row_factory): + self.row_factory = row_factory + def _build_row_cast_map(self): self.row_cast_map = [] for i in xrange(sqlite.sqlite3_column_count(self.statement)): @@ -974,6 +1008,7 @@ ret = sqlite.sqlite3_reset(self.statement) if ret != SQLITE_OK: raise self.con._get_exception(ret) + self.mark_dirty() if params is None: if sqlite.sqlite3_bind_parameter_count(self.statement) != 0: @@ -1004,10 +1039,7 @@ raise ProgrammingError("missing parameter '%s'" %param) self.set_param(idx, param) - def __iter__(self): - return self - - def next(self): + def next(self, cursor): self.con._check_closed() self.con._check_thread() if self.exhausted: @@ -1023,10 +1055,10 @@ sqlite.sqlite3_reset(self.statement) raise exc - self._readahead() + self._readahead(cursor) return item - def _readahead(self): + def _readahead(self, cursor): self.column_count = sqlite.sqlite3_column_count(self.statement) row = [] for i in xrange(self.column_count): @@ -1061,23 +1093,30 @@ row = tuple(row) if self.row_factory is not None: - row = self.row_factory(self.cur(), row) + row = self.row_factory(cursor, row) self.item = row def reset(self): self.row_cast_map = None - return sqlite.sqlite3_reset(self.statement) + ret = sqlite.sqlite3_reset(self.statement) + self.in_use = False + self.exhausted = False + return ret def finalize(self): sqlite.sqlite3_finalize(self.statement) self.statement = None + self.in_use = False + + def mark_dirty(self): + self.in_use = True def __del__(self): sqlite.sqlite3_finalize(self.statement) self.statement = None def _get_description(self): - if self.kind == "DML": + if self.kind == DML: return None desc = [] for i in xrange(sqlite.sqlite3_column_count(self.statement)): diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -35,7 +35,7 @@ _DuplicateHandle.restype = ctypes.c_int _WaitForSingleObject = _kernel32.WaitForSingleObject -_WaitForSingleObject.argtypes = [ctypes.c_int, ctypes.c_int] +_WaitForSingleObject.argtypes = [ctypes.c_int, ctypes.c_uint] _WaitForSingleObject.restype = ctypes.c_int _GetExitCodeProcess = _kernel32.GetExitCodeProcess diff --git a/lib_pypy/distributed/test/test_distributed.py b/lib_pypy/distributed/test/test_distributed.py --- a/lib_pypy/distributed/test/test_distributed.py +++ b/lib_pypy/distributed/test/test_distributed.py @@ -9,7 +9,7 @@ class AppTestDistributed(object): def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless",)}) + "usemodules":("_continuation",)}) def test_init(self): import distributed @@ -91,10 +91,8 @@ class AppTestDistributedTasklets(object): spaceconfig = {"objspace.std.withtproxy": True, - "objspace.usemodules._stackless": True} + "objspace.usemodules._continuation": True} def setup_class(cls): - #cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - # "usemodules":("_stackless",)}) cls.w_test_env = cls.space.appexec([], """(): from distributed import test_env return test_env diff --git a/lib_pypy/distributed/test/test_greensock.py b/lib_pypy/distributed/test/test_greensock.py --- a/lib_pypy/distributed/test/test_greensock.py +++ b/lib_pypy/distributed/test/test_greensock.py @@ -10,7 +10,7 @@ if not option.runappdirect: py.test.skip("Cannot run this on top of py.py because of PopenGateway") cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless",)}) + "usemodules":("_continuation",)}) cls.w_remote_side_code = cls.space.appexec([], """(): import sys sys.path.insert(0, '%s') diff --git a/lib_pypy/distributed/test/test_socklayer.py b/lib_pypy/distributed/test/test_socklayer.py --- a/lib_pypy/distributed/test/test_socklayer.py +++ b/lib_pypy/distributed/test/test_socklayer.py @@ -9,7 +9,8 @@ class AppTestSocklayer: def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withtproxy": True, - "usemodules":("_stackless","_socket", "select")}) + "usemodules":("_continuation", + "_socket", "select")}) def test_socklayer(self): class X(object): diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -1,1 +1,144 @@ -from _stackless import greenlet +import _continuation, sys + + +# ____________________________________________________________ +# Exceptions + +class GreenletExit(Exception): + """This special exception does not propagate to the parent greenlet; it +can be used to kill a single greenlet.""" + +error = _continuation.error + +# ____________________________________________________________ +# Helper function + +def getcurrent(): + "Returns the current greenlet (i.e. the one which called this function)." + try: + return _tls.current + except AttributeError: + # first call in this thread: current == main + _green_create_main() + return _tls.current + +# ____________________________________________________________ +# The 'greenlet' class + +_continulet = _continuation.continulet + +class greenlet(_continulet): + getcurrent = staticmethod(getcurrent) + error = error + GreenletExit = GreenletExit + __main = False + __started = False + + def __new__(cls, *args, **kwds): + self = _continulet.__new__(cls) + self.parent = getcurrent() + return self + + def __init__(self, run=None, parent=None): + if run is not None: + self.run = run + if parent is not None: + self.parent = parent + + def switch(self, *args): + "Switch execution to this greenlet, optionally passing the values " + "given as argument(s). Returns the value passed when switching back." + return self.__switch('switch', args) + + def throw(self, typ=GreenletExit, val=None, tb=None): + "raise exception in greenlet, return value passed when switching back" + return self.__switch('throw', typ, val, tb) + + def __switch(target, methodname, *args): + current = getcurrent() + # + while not target: + if not target.__started: + if methodname == 'switch': + greenlet_func = _greenlet_start + else: + greenlet_func = _greenlet_throw + _continulet.__init__(target, greenlet_func, *args) + methodname = 'switch' + args = () + target.__started = True + break + # already done, go to the parent instead + # (NB. infinite loop possible, but unlikely, unless you mess + # up the 'parent' explicitly. Good enough, because a Ctrl-C + # will show that the program is caught in this loop here.) + target = target.parent + # + try: + unbound_method = getattr(_continulet, methodname) + args = unbound_method(current, *args, to=target) + except GreenletExit, e: + args = (e,) + finally: + _tls.current = current + # + if len(args) == 1: + return args[0] + else: + return args + + def __nonzero__(self): + return self.__main or _continulet.is_pending(self) + + @property + def dead(self): + return self.__started and not self + + @property + def gr_frame(self): + # xxx this doesn't work when called on either the current or + # the main greenlet of another thread + if self is getcurrent(): + return None + if self.__main: + self = getcurrent() + f = _continulet.__reduce__(self)[2][0] + if not f: + return None + return f.f_back.f_back.f_back # go past start(), __switch(), switch() + +# ____________________________________________________________ +# Internal stuff + +try: + from thread import _local +except ImportError: + class _local(object): # assume no threads + pass + +_tls = _local() + +def _green_create_main(): + # create the main greenlet for this thread + _tls.current = None + gmain = greenlet.__new__(greenlet) + gmain._greenlet__main = True + gmain._greenlet__started = True + assert gmain.parent is None + _tls.main = gmain + _tls.current = gmain + +def _greenlet_start(greenlet, args): + _tls.current = greenlet + try: + res = greenlet.run(*args) + finally: + _continuation.permute(greenlet, greenlet.parent) + return (res,) + +def _greenlet_throw(greenlet, exc, value, tb): + _tls.current = greenlet + try: + raise exc, value, tb + finally: + _continuation.permute(greenlet, greenlet.parent) diff --git a/lib_pypy/pypy_test/test_coroutine.py b/lib_pypy/pypy_test/test_coroutine.py --- a/lib_pypy/pypy_test/test_coroutine.py +++ b/lib_pypy/pypy_test/test_coroutine.py @@ -2,7 +2,7 @@ from py.test import skip, raises try: - from lib_pypy.stackless import coroutine, CoroutineExit + from stackless import coroutine, CoroutineExit except ImportError, e: skip('cannot import stackless: %s' % (e,)) @@ -20,10 +20,6 @@ assert not co.is_zombie def test_is_zombie_del_without_frame(self): - try: - import _stackless # are we on pypy with a stackless build? - except ImportError: - skip("only works on pypy-c-stackless") import gc res = [] class MyCoroutine(coroutine): @@ -45,10 +41,6 @@ assert res[0], "is_zombie was False in __del__" def test_is_zombie_del_with_frame(self): - try: - import _stackless # are we on pypy with a stackless build? - except ImportError: - skip("only works on pypy-c-stackless") import gc res = [] class MyCoroutine(coroutine): diff --git a/lib_pypy/pypy_test/test_stackless_pickling.py b/lib_pypy/pypy_test/test_stackless_pickling.py --- a/lib_pypy/pypy_test/test_stackless_pickling.py +++ b/lib_pypy/pypy_test/test_stackless_pickling.py @@ -1,7 +1,3 @@ -""" -this test should probably not run from CPython or py.py. -I'm not entirely sure, how to do that. -""" from __future__ import absolute_import from py.test import skip try: @@ -16,11 +12,15 @@ class Test_StacklessPickling: + def test_pickle_main_coroutine(self): + import stackless, pickle + s = pickle.dumps(stackless.coroutine.getcurrent()) + print s + c = pickle.loads(s) + assert c is stackless.coroutine.getcurrent() + def test_basic_tasklet_pickling(self): - try: - import stackless - except ImportError: - skip("can't load stackless and don't know why!!!") + import stackless from stackless import run, schedule, tasklet import pickle diff --git a/lib_pypy/pyrepl/completing_reader.py b/lib_pypy/pyrepl/completing_reader.py --- a/lib_pypy/pyrepl/completing_reader.py +++ b/lib_pypy/pyrepl/completing_reader.py @@ -229,7 +229,8 @@ def after_command(self, cmd): super(CompletingReader, self).after_command(cmd) - if not isinstance(cmd, complete) and not isinstance(cmd, self_insert): + if not isinstance(cmd, self.commands['complete']) \ + and not isinstance(cmd, self.commands['self_insert']): self.cmpltn_reset() def calc_screen(self): diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -401,13 +401,19 @@ return "(arg: %s) "%self.arg if "\n" in self.buffer: if lineno == 0: - return self._ps2 + res = self.ps2 elif lineno == self.buffer.count("\n"): - return self._ps4 + res = self.ps4 else: - return self._ps3 + res = self.ps3 else: - return self._ps1 + res = self.ps1 + # Lazily call str() on self.psN, and cache the results using as key + # the object on which str() was called. This ensures that even if the + # same object is used e.g. for ps1 and ps2, str() is called only once. + if res not in self._pscache: + self._pscache[res] = str(res) + return self._pscache[res] def push_input_trans(self, itrans): self.input_trans_stack.append(self.input_trans) @@ -473,8 +479,7 @@ self.pos = 0 self.dirty = 1 self.last_command = None - self._ps1, self._ps2, self._ps3, self._ps4 = \ - map(str, [self.ps1, self.ps2, self.ps3, self.ps4]) + self._pscache = {} except: self.restore() raise @@ -571,7 +576,7 @@ self.console.push_char(char) self.handle1(0) - def readline(self): + def readline(self, returns_unicode=False): """Read a line. The implementation of this method also shows how to drive Reader if you want more control over the event loop.""" @@ -580,6 +585,8 @@ self.refresh() while not self.finished: self.handle1() + if returns_unicode: + return self.get_unicode() return self.get_buffer() finally: self.restore() diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -33,7 +33,7 @@ from pyrepl.unix_console import UnixConsole, _error -ENCODING = 'latin1' # XXX hard-coded +ENCODING = sys.getfilesystemencoding() or 'latin1' # XXX review __all__ = ['add_history', 'clear_history', @@ -198,7 +198,7 @@ reader.ps1 = prompt return reader.readline() - def multiline_input(self, more_lines, ps1, ps2): + def multiline_input(self, more_lines, ps1, ps2, returns_unicode=False): """Read an input on possibly multiple lines, asking for more lines as long as 'more_lines(unicodetext)' returns an object whose boolean value is true. @@ -209,7 +209,7 @@ reader.more_lines = more_lines reader.ps1 = reader.ps2 = ps1 reader.ps3 = reader.ps4 = ps2 - return reader.readline() + return reader.readline(returns_unicode=returns_unicode) finally: reader.more_lines = saved diff --git a/lib_pypy/pyrepl/simple_interact.py b/lib_pypy/pyrepl/simple_interact.py --- a/lib_pypy/pyrepl/simple_interact.py +++ b/lib_pypy/pyrepl/simple_interact.py @@ -54,7 +54,8 @@ ps1 = getattr(sys, 'ps1', '>>> ') ps2 = getattr(sys, 'ps2', '... ') try: - statement = multiline_input(more_lines, ps1, ps2) + statement = multiline_input(more_lines, ps1, ps2, + returns_unicode=True) except EOFError: break more = console.push(statement) diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -384,15 +384,19 @@ self.__maybe_write_code(self._smkx) - self.old_sigwinch = signal.signal( - signal.SIGWINCH, self.__sigwinch) + try: + self.old_sigwinch = signal.signal( + signal.SIGWINCH, self.__sigwinch) + except ValueError: + pass def restore(self): self.__maybe_write_code(self._rmkx) self.flushoutput() tcsetattr(self.input_fd, termios.TCSADRAIN, self.__svtermstate) - signal.signal(signal.SIGWINCH, self.old_sigwinch) + if hasattr(self, 'old_sigwinch'): + signal.signal(signal.SIGWINCH, self.old_sigwinch) def __sigwinch(self, signum, frame): self.height, self.width = self.getheightwidth() diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/lib_pypy/stackless.py b/lib_pypy/stackless.py --- a/lib_pypy/stackless.py +++ b/lib_pypy/stackless.py @@ -4,121 +4,110 @@ Please refer to their documentation. """ -DEBUG = True -def dprint(*args): - for arg in args: - print arg, - print +import _continuation -import traceback -import sys +class TaskletExit(Exception): + pass + +CoroutineExit = TaskletExit + + +def _coroutine_getcurrent(): + "Returns the current coroutine (i.e. the one which called this function)." + try: + return _tls.current_coroutine + except AttributeError: + # first call in this thread: current == main + return _coroutine_getmain() + +def _coroutine_getmain(): + try: + return _tls.main_coroutine + except AttributeError: + # create the main coroutine for this thread + continulet = _continuation.continulet + main = coroutine() + main._frame = continulet.__new__(continulet) + main._is_started = -1 + _tls.current_coroutine = _tls.main_coroutine = main + return _tls.main_coroutine + + +class coroutine(object): + _is_started = 0 # 0=no, 1=yes, -1=main + + def __init__(self): + self._frame = None + + def bind(self, func, *argl, **argd): + """coro.bind(f, *argl, **argd) -> None. + binds function f to coro. f will be called with + arguments *argl, **argd + """ + if self.is_alive: + raise ValueError("cannot bind a bound coroutine") + def run(c): + _tls.current_coroutine = self + self._is_started = 1 + return func(*argl, **argd) + self._is_started = 0 + self._frame = _continuation.continulet(run) + + def switch(self): + """coro.switch() -> returnvalue + switches to coroutine coro. If the bound function + f finishes, the returnvalue is that of f, otherwise + None is returned + """ + current = _coroutine_getcurrent() + try: + current._frame.switch(to=self._frame) + finally: + _tls.current_coroutine = current + + def kill(self): + """coro.kill() : kill coroutine coro""" + current = _coroutine_getcurrent() + try: + current._frame.throw(CoroutineExit, to=self._frame) + finally: + _tls.current_coroutine = current + + @property + def is_alive(self): + return self._is_started < 0 or ( + self._frame is not None and self._frame.is_pending()) + + @property + def is_zombie(self): + return self._is_started > 0 and not self._frame.is_pending() + + getcurrent = staticmethod(_coroutine_getcurrent) + + def __reduce__(self): + if self._is_started < 0: + return _coroutine_getmain, () + else: + return type(self), (), self.__dict__ + + try: - # If _stackless can be imported then TaskletExit and CoroutineExit are - # automatically added to the builtins. - from _stackless import coroutine, greenlet -except ImportError: # we are running from CPython - from greenlet import greenlet, GreenletExit - TaskletExit = CoroutineExit = GreenletExit - del GreenletExit - try: - from functools import partial - except ImportError: # we are not running python 2.5 - class partial(object): - # just enough of 'partial' to be usefull - def __init__(self, func, *argl, **argd): - self.func = func - self.argl = argl - self.argd = argd + from thread import _local +except ImportError: + class _local(object): # assume no threads + pass - def __call__(self): - return self.func(*self.argl, **self.argd) +_tls = _local() - class GWrap(greenlet): - """This is just a wrapper around greenlets to allow - to stick additional attributes to a greenlet. - To be more concrete, we need a backreference to - the coroutine object""" - class MWrap(object): - def __init__(self,something): - self.something = something +# ____________________________________________________________ - def __getattr__(self, attr): - return getattr(self.something, attr) - - class coroutine(object): - "we can't have greenlet as a base, because greenlets can't be rebound" - - def __init__(self): - self._frame = None - self.is_zombie = False - - def __getattr__(self, attr): - return getattr(self._frame, attr) - - def __del__(self): - self.is_zombie = True - del self._frame - self._frame = None - - def bind(self, func, *argl, **argd): - """coro.bind(f, *argl, **argd) -> None. - binds function f to coro. f will be called with - arguments *argl, **argd - """ - if self._frame is None or self._frame.dead: - self._frame = frame = GWrap() - frame.coro = self - if hasattr(self._frame, 'run') and self._frame.run: - raise ValueError("cannot bind a bound coroutine") - self._frame.run = partial(func, *argl, **argd) - - def switch(self): - """coro.switch() -> returnvalue - switches to coroutine coro. If the bound function - f finishes, the returnvalue is that of f, otherwise - None is returned - """ - try: - return greenlet.switch(self._frame) - except TypeError, exp: # self._frame is the main coroutine - return greenlet.switch(self._frame.something) - - def kill(self): - """coro.kill() : kill coroutine coro""" - self._frame.throw() - - def _is_alive(self): - if self._frame is None: - return False - return not self._frame.dead - is_alive = property(_is_alive) - del _is_alive - - def getcurrent(): - """coroutine.getcurrent() -> the currently running coroutine""" - try: - return greenlet.getcurrent().coro - except AttributeError: - return _maincoro - getcurrent = staticmethod(getcurrent) - - def __reduce__(self): - raise TypeError, 'pickling is not possible based upon greenlets' - - _maincoro = coroutine() - maingreenlet = greenlet.getcurrent() - _maincoro._frame = frame = MWrap(maingreenlet) - frame.coro = _maincoro - del frame - del maingreenlet from collections import deque import operator -__all__ = 'run getcurrent getmain schedule tasklet channel coroutine \ - greenlet'.split() +__all__ = 'run getcurrent getmain schedule tasklet channel coroutine'.split() _global_task_id = 0 _squeue = None @@ -131,7 +120,8 @@ def _scheduler_remove(value): try: del _squeue[operator.indexOf(_squeue, value)] - except ValueError:pass + except ValueError: + pass def _scheduler_append(value, normal=True): if normal: @@ -157,10 +147,7 @@ _last_task = next assert not next.blocked if next is not current: - try: - next.switch() - except CoroutineExit: - raise TaskletExit + next.switch() return current def set_schedule_callback(callback): @@ -184,34 +171,6 @@ raise self.type, self.value, self.traceback # -# helpers for pickling -# - -_stackless_primitive_registry = {} - -def register_stackless_primitive(thang, retval_expr='None'): - import types - func = thang - if isinstance(thang, types.MethodType): - func = thang.im_func - code = func.func_code - _stackless_primitive_registry[code] = retval_expr - # It is not too nice to attach info via the code object, but - # I can't think of a better solution without a real transform. - -def rewrite_stackless_primitive(coro_state, alive, tempval): - flags, frame, thunk, parent = coro_state - while frame is not None: - retval_expr = _stackless_primitive_registry.get(frame.f_code) - if retval_expr: - # this tasklet needs to stop pickling here and return its value. - tempval = eval(retval_expr, globals(), frame.f_locals) - coro_state = flags, frame, thunk, parent - break - frame = frame.f_back - return coro_state, alive, tempval - -# # class channel(object): @@ -363,8 +322,6 @@ """ return self._channel_action(None, -1) - register_stackless_primitive(receive, retval_expr='receiver.tempval') - def send_exception(self, exp_type, msg): self.send(bomb(exp_type, exp_type(msg))) @@ -381,9 +338,8 @@ the runnables list. """ return self._channel_action(msg, 1) - - register_stackless_primitive(send) - + + class tasklet(coroutine): """ A tasklet object represents a tiny task in a Python thread. @@ -455,6 +411,7 @@ def _func(): try: try: + coroutine.switch(back) func(*argl, **argd) except TaskletExit: pass @@ -464,6 +421,8 @@ self.func = None coroutine.bind(self, _func) + back = _coroutine_getcurrent() + coroutine.switch(self) self.alive = True _scheduler_append(self) return self @@ -486,39 +445,6 @@ raise RuntimeError, "The current tasklet cannot be removed." # not sure if I will revive this " Use t=tasklet().capture()" _scheduler_remove(self) - - def __reduce__(self): - one, two, coro_state = coroutine.__reduce__(self) - assert one is coroutine - assert two == () - # we want to get rid of the parent thing. - # for now, we just drop it - a, frame, c, d = coro_state - - # Removing all frames related to stackless.py. - # They point to stuff we don't want to be pickled. - - pickleframe = frame - while frame is not None: - if frame.f_code == schedule.func_code: - # Removing everything including and after the - # call to stackless.schedule() - pickleframe = frame.f_back - break - frame = frame.f_back - if d: - assert isinstance(d, coroutine) - coro_state = a, pickleframe, c, None - coro_state, alive, tempval = rewrite_stackless_primitive(coro_state, self.alive, self.tempval) - inst_dict = self.__dict__.copy() - inst_dict.pop('tempval', None) - return self.__class__, (), (coro_state, alive, tempval, inst_dict) - - def __setstate__(self, (coro_state, alive, tempval, inst_dict)): - coroutine.__setstate__(self, coro_state) - self.__dict__.update(inst_dict) - self.alive = alive - self.tempval = tempval def getmain(): """ @@ -607,30 +533,7 @@ global _last_task _global_task_id = 0 _main_tasklet = coroutine.getcurrent() - try: - _main_tasklet.__class__ = tasklet - except TypeError: # we are running pypy-c - class TaskletProxy(object): - """TaskletProxy is needed to give the _main_coroutine tasklet behaviour""" - def __init__(self, coro): - self._coro = coro - - def __getattr__(self,attr): - return getattr(self._coro,attr) - - def __str__(self): - return '' % (self._task_id, self.is_alive) - - def __reduce__(self): - return getmain, () - - __repr__ = __str__ - - - global _main_coroutine - _main_coroutine = _main_tasklet - _main_tasklet = TaskletProxy(_main_tasklet) - assert _main_tasklet.is_alive and not _main_tasklet.is_zombie + _main_tasklet.__class__ = tasklet # XXX HAAAAAAAAAAAAAAAAAAAAACK _last_task = _main_tasklet tasklet._init.im_func(_main_tasklet, label='main') _squeue = deque() diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -139,7 +139,7 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, end + return start, len(self) def getblockend(self, lineno): # XXX diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -149,7 +149,7 @@ desc = olddesc.bind_self(classdef) args = self.bookkeeper.build_args("simple_call", args_s[:]) desc.consider_call_site(self.bookkeeper, desc.getcallfamily(), [desc], - args, annmodel.s_ImpossibleValue) + args, annmodel.s_ImpossibleValue, None) result = [] def schedule(graph, inputcells): result.append((graph, inputcells)) diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -209,8 +209,8 @@ self.consider_call_site(call_op) for pbc, args_s in self.emulated_pbc_calls.itervalues(): - self.consider_call_site_for_pbc(pbc, 'simple_call', - args_s, s_ImpossibleValue) + self.consider_call_site_for_pbc(pbc, 'simple_call', + args_s, s_ImpossibleValue, None) self.emulated_pbc_calls = {} finally: self.leave() @@ -257,18 +257,18 @@ args_s = [lltype_to_annotation(adtmeth.ll_ptrtype)] + args_s if isinstance(s_callable, SomePBC): s_result = binding(call_op.result, s_ImpossibleValue) - self.consider_call_site_for_pbc(s_callable, - call_op.opname, - args_s, s_result) + self.consider_call_site_for_pbc(s_callable, call_op.opname, args_s, + s_result, call_op) - def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result): + def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result, + call_op): descs = list(s_callable.descriptions) if not descs: return family = descs[0].getcallfamily() args = self.build_args(opname, args_s) s_callable.getKind().consider_call_site(self, family, descs, args, - s_result) + s_result, call_op) def getuniqueclassdef(self, cls): """Get the ClassDef associated with the given user cls. @@ -656,6 +656,7 @@ whence = None else: whence = emulated # callback case + op = None s_previous_result = s_ImpossibleValue def schedule(graph, inputcells): @@ -663,7 +664,7 @@ results = [] for desc in descs: - results.append(desc.pycall(schedule, args, s_previous_result)) + results.append(desc.pycall(schedule, args, s_previous_result, op)) s_result = unionof(*results) return s_result diff --git a/pypy/annotation/builtin.py b/pypy/annotation/builtin.py --- a/pypy/annotation/builtin.py +++ b/pypy/annotation/builtin.py @@ -308,9 +308,6 @@ clsdef = clsdef.commonbase(cdef) return SomeInstance(clsdef) -def robjmodel_we_are_translated(): - return immutablevalue(True) - def robjmodel_r_dict(s_eqfn, s_hashfn, s_force_non_null=None): if s_force_non_null is None: force_non_null = False @@ -376,8 +373,6 @@ BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.intmask] = rarith_intmask BUILTIN_ANALYZERS[pypy.rlib.objectmodel.instantiate] = robjmodel_instantiate -BUILTIN_ANALYZERS[pypy.rlib.objectmodel.we_are_translated] = ( - robjmodel_we_are_translated) BUILTIN_ANALYZERS[pypy.rlib.objectmodel.r_dict] = robjmodel_r_dict BUILTIN_ANALYZERS[pypy.rlib.objectmodel.hlinvoke] = robjmodel_hlinvoke BUILTIN_ANALYZERS[pypy.rlib.objectmodel.keepalive_until_here] = robjmodel_keepalive_until_here @@ -416,7 +411,8 @@ from pypy.annotation.model import SomePtr from pypy.rpython.lltypesystem import lltype -def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None): +def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None, + s_add_memory_pressure=None): assert (s_n is None or s_n.knowntype == int or issubclass(s_n.knowntype, pypy.rlib.rarithmetic.base_int)) assert s_T.is_constant() @@ -432,6 +428,8 @@ else: assert s_flavor.is_constant() assert s_track_allocation is None or s_track_allocation.is_constant() + assert (s_add_memory_pressure is None or + s_add_memory_pressure.is_constant()) # not sure how to call malloc() for the example 'p' in the # presence of s_extraargs r = SomePtr(lltype.Ptr(s_T.const)) diff --git a/pypy/annotation/classdef.py b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -255,7 +255,11 @@ raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) return inputcells - def specialize(self, inputcells): + def specialize(self, inputcells, op=None): + if (op is None and + getattr(self.bookkeeper, "position_key", None) is not None): + _, block, i = self.bookkeeper.position_key + op = block.operations[i] if self.specializer is None: # get the specializer based on the tag of the 'pyobj' # (if any), according to the current policy @@ -269,11 +273,14 @@ enforceargs = Sig(*enforceargs) self.pyobj._annenforceargs_ = enforceargs enforceargs(self, inputcells) # can modify inputcells in-place - return self.specializer(self, inputcells) + if getattr(self.pyobj, '_annspecialcase_', '').endswith("call_location"): + return self.specializer(self, inputcells, op) + else: + return self.specializer(self, inputcells) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): inputcells = self.parse_arguments(args) - result = self.specialize(inputcells) + result = self.specialize(inputcells, op) if isinstance(result, FunctionGraph): graph = result # common case # if that graph has a different signature, we need to re-parse @@ -296,17 +303,17 @@ None, # selfclassdef name) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args) - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) - def variant_for_call_site(bookkeeper, family, descs, args): + def variant_for_call_site(bookkeeper, family, descs, args, op): shape = rawshape(args) bookkeeper.enter(None) try: - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) finally: bookkeeper.leave() index = family.calltable_lookup_row(shape, row) @@ -316,7 +323,7 @@ def rowkey(self): return self - def row_to_consider(descs, args): + def row_to_consider(descs, args, op): # see comments in CallFamily from pypy.annotation.model import s_ImpossibleValue row = {} @@ -324,7 +331,7 @@ def enlist(graph, ignore): row[desc.rowkey()] = graph return s_ImpossibleValue # meaningless - desc.pycall(enlist, args, s_ImpossibleValue) + desc.pycall(enlist, args, s_ImpossibleValue, op) return row row_to_consider = staticmethod(row_to_consider) @@ -399,9 +406,7 @@ if b1 is object: continue if b1.__dict__.get('_mixin_', False): - assert b1.__bases__ == () or b1.__bases__ == (object,), ( - "mixin class %r should have no base" % (b1,)) - self.add_sources_for_class(b1, mixin=True) + self.add_mixin(b1) else: assert base is object, ("multiple inheritance only supported " "with _mixin_: %r" % (cls,)) @@ -469,6 +474,15 @@ return self.classdict[name] = Constant(value) + def add_mixin(self, base): + for subbase in base.__bases__: + if subbase is object: + continue + assert subbase.__dict__.get("_mixin_", False), ("Mixin class %r has non" + "mixin base class %r" % (base, subbase)) + self.add_mixin(subbase) + self.add_sources_for_class(base, mixin=True) + def add_sources_for_class(self, cls, mixin=False): for name, value in cls.__dict__.items(): self.add_source_attribute(name, value, mixin) @@ -514,7 +528,7 @@ "specialization" % (self.name,)) return self.getclassdef(None) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance, SomeImpossibleValue if self.specialize: if self.specialize == 'specialize:ctr_location': @@ -657,7 +671,7 @@ cdesc = cdesc.basedesc return s_result # common case - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): from pypy.annotation.model import SomeInstance, SomePBC, s_None if len(descs) == 1: # call to a single class, look at the result annotation @@ -702,7 +716,7 @@ initdescs[0].mergecallfamilies(*initdescs[1:]) initfamily = initdescs[0].getcallfamily() MethodDesc.consider_call_site(bookkeeper, initfamily, initdescs, - args, s_None) + args, s_None, op) consider_call_site = staticmethod(consider_call_site) def getallbases(self): @@ -775,13 +789,13 @@ def getuniquegraph(self): return self.funcdesc.getuniquegraph() - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance if self.selfclassdef is None: raise Exception("calling %r" % (self,)) s_instance = SomeInstance(self.selfclassdef, flags = self.flags) args = args.prepend(s_instance) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) def bind_under(self, classdef, name): self.bookkeeper.warning("rebinding an already bound %r" % (self,)) @@ -794,10 +808,10 @@ self.name, flags) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [methoddesc.funcdesc for methoddesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) @@ -949,16 +963,16 @@ return '' % (self.funcdesc, self.frozendesc) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomePBC s_self = SomePBC([self.frozendesc]) args = args.prepend(s_self) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [mofdesc.funcdesc for mofdesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,7 +1,7 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype -from pypy.annotation.specialize import memo +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var +from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. from pypy.annotation import model as annmodel @@ -73,8 +73,10 @@ default_specialize = staticmethod(default) specialize__memo = staticmethod(memo) specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) + specialize__arg_or_var = staticmethod(specialize_arg_or_var) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) + specialize__call_location = staticmethod(specialize_call_location) def specialize__ll(pol, *args): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -353,6 +353,16 @@ key = tuple(key) return maybe_star_args(funcdesc, key, args_s) +def specialize_arg_or_var(funcdesc, args_s, *argindices): + for argno in argindices: + if not args_s[argno].is_constant(): + break + else: + # all constant + return specialize_argvalue(funcdesc, args_s, *argindices) + # some not constant + return maybe_star_args(funcdesc, None, args_s) + def specialize_argtype(funcdesc, args_s, *argindices): key = tuple([args_s[i].knowntype for i in argindices]) for cls in key: @@ -370,3 +380,7 @@ else: key = s.listdef.listitem.s_value.knowntype return maybe_star_args(funcdesc, key, args_s) + +def specialize_call_location(funcdesc, args_s, op): + assert op is not None + return maybe_star_args(funcdesc, op, args_s) diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -1099,8 +1099,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1]) - graph2 = allocdesc.specialize([s_C2]) + graph1 = allocdesc.specialize([s_C1], None) + graph2 = allocdesc.specialize([s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1135,8 +1135,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1, s_C2]) - graph2 = allocdesc.specialize([s_C2, s_C2]) + graph1 = allocdesc.specialize([s_C1, s_C2], None) + graph2 = allocdesc.specialize([s_C2, s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1194,6 +1194,33 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_arg_or_var(self): + def f(a): + return 1 + f._annspecialcase_ = 'specialize:arg_or_var(0)' + + def fn(a): + return f(3) + f(a) + + a = self.RPythonAnnotator() + a.build_types(fn, [int]) + executedesc = a.bookkeeper.getdesc(f) + assert sorted(executedesc._cache.keys()) == [None, (3,)] + # we got two different special + + def test_specialize_call_location(self): + def g(a): + return a + g._annspecialcase_ = "specialize:call_location" + def f(x): + return g(x) + f._annspecialcase_ = "specialize:argtype(0)" + def h(y): + w = f(y) + return int(f(str(y))) + w + a = self.RPythonAnnotator() + assert a.build_types(h, [int]) == annmodel.SomeInteger() + def test_assert_list_doesnt_lose_info(self): class T(object): pass @@ -3177,6 +3204,8 @@ s = a.build_types(f, []) assert isinstance(s, annmodel.SomeList) assert not s.listdef.listitem.resized + assert not s.listdef.listitem.immutable + assert s.listdef.listitem.mutated def test_delslice(self): def f(): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -352,6 +352,7 @@ check_negative_slice(s_start, s_stop) if not isinstance(s_iterable, SomeList): raise Exception("list[start:stop] = x: x must be a list") + lst.listdef.mutate() lst.listdef.agree(s_iterable.listdef) # note that setslice is not allowed to resize a list in RPython diff --git a/pypy/config/makerestdoc.py b/pypy/config/makerestdoc.py --- a/pypy/config/makerestdoc.py +++ b/pypy/config/makerestdoc.py @@ -134,7 +134,7 @@ for child in self._children: subpath = fullpath + "." + child._name toctree.append(subpath) - content.add(Directive("toctree", *toctree, maxdepth=4)) + content.add(Directive("toctree", *toctree, **{'maxdepth': 4})) content.join( ListItem(Strong("name:"), self._name), ListItem(Strong("description:"), self.doc)) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -27,13 +27,14 @@ # --allworkingmodules working_modules = default_modules.copy() working_modules.update(dict.fromkeys( - ["_socket", "unicodedata", "mmap", "fcntl", "_locale", + ["_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "rctime" , "select", "zipimport", "_lsprof", "crypt", "signal", "_rawffi", "termios", "zlib", "bz2", "struct", "_hashlib", "_md5", "_sha", "_minimal_curses", "cStringIO", "thread", "itertools", "pyexpat", "_ssl", "cpyext", "array", "_bisect", "binascii", "_multiprocessing", '_warnings', - "_collections", "_multibytecodec", "micronumpy", "_ffi"] + "_collections", "_multibytecodec", "micronumpy", "_ffi", + "_continuation"] )) translation_modules = default_modules.copy() @@ -57,6 +58,7 @@ # unix only modules del working_modules["crypt"] del working_modules["fcntl"] + del working_modules["pwd"] del working_modules["termios"] del working_modules["_minimal_curses"] @@ -99,6 +101,7 @@ "_ssl" : ["pypy.module._ssl.interp_ssl"], "_hashlib" : ["pypy.module._ssl.interp_ssl"], "_minimal_curses": ["pypy.module._minimal_curses.fficurses"], + "_continuation": ["pypy.rlib.rstacklet"], } def get_module_validator(modname): @@ -124,7 +127,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), @@ -327,6 +330,9 @@ BoolOption("mutable_builtintypes", "Allow the changing of builtin types", default=False, requires=[("objspace.std.builtinshortcut", True)]), + BoolOption("withidentitydict", + "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", + default=True), ]), ]) diff --git a/pypy/config/support.py b/pypy/config/support.py --- a/pypy/config/support.py +++ b/pypy/config/support.py @@ -9,7 +9,7 @@ return 1 # don't override MAKEFLAGS. This will call 'make' without any '-j' option if sys.platform == 'darwin': return darwin_get_cpu_count() - elif sys.platform != 'linux2': + elif not sys.platform.startswith('linux'): return 1 # implement me try: if isinstance(filename_or_file, str): diff --git a/pypy/config/test/test_config.py b/pypy/config/test/test_config.py --- a/pypy/config/test/test_config.py +++ b/pypy/config/test/test_config.py @@ -1,5 +1,5 @@ from pypy.config.config import * -import py +import py, sys def make_description(): gcoption = ChoiceOption('name', 'GC name', ['ref', 'framework'], 'ref') @@ -69,13 +69,15 @@ attrs = dir(config) assert '__repr__' in attrs # from the type assert '_cfgimpl_values' in attrs # from self - assert 'gc' in attrs # custom attribute - assert 'objspace' in attrs # custom attribute + if sys.version_info >= (2, 6): + assert 'gc' in attrs # custom attribute + assert 'objspace' in attrs # custom attribute # attrs = dir(config.gc) - assert 'name' in attrs - assert 'dummy' in attrs - assert 'float' in attrs + if sys.version_info >= (2, 6): + assert 'name' in attrs + assert 'dummy' in attrs + assert 'float' in attrs def test_arbitrary_option(): descr = OptionDescription("top", "", [ @@ -279,11 +281,11 @@ def test_underscore_in_option_name(): descr = OptionDescription("opt", "", [ - BoolOption("_stackless", "", default=False), + BoolOption("_foobar", "", default=False), ]) config = Config(descr) parser = to_optparse(config) - assert parser.has_option("--_stackless") + assert parser.has_option("--_foobar") def test_none(): dummy1 = BoolOption('dummy1', 'doc dummy', default=False, cmdline=None) diff --git a/pypy/config/test/test_support.py b/pypy/config/test/test_support.py --- a/pypy/config/test/test_support.py +++ b/pypy/config/test/test_support.py @@ -40,7 +40,7 @@ return self._value def test_cpuinfo_linux(): - if sys.platform != 'linux2': + if not sys.platform.startswith('linux'): py.test.skip("linux only") saved = os.environ try: diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -13,6 +13,10 @@ DEFL_LOW_INLINE_THRESHOLD = DEFL_INLINE_THRESHOLD / 2.0 DEFL_GC = "minimark" +if sys.platform.startswith("linux"): + DEFL_ROOTFINDER_WITHJIT = "asmgcc" +else: + DEFL_ROOTFINDER_WITHJIT = "shadowstack" IS_64_BITS = sys.maxint > 2147483647 @@ -24,10 +28,9 @@ translation_optiondescription = OptionDescription( "translation", "Translation Options", [ - BoolOption("stackless", "enable stackless features during compilation", - default=False, cmdline="--stackless", - requires=[("translation.type_system", "lltype"), - ("translation.gcremovetypeptr", False)]), # XXX? + BoolOption("continuation", "enable single-shot continuations", + default=False, cmdline="--continuation", + requires=[("translation.type_system", "lltype")]), ChoiceOption("type_system", "Type system to use when RTyping", ["lltype", "ootype"], cmdline=None, default="lltype", requires={ @@ -66,7 +69,8 @@ "statistics": [("translation.gctransformer", "framework")], "generation": [("translation.gctransformer", "framework")], "hybrid": [("translation.gctransformer", "framework")], - "boehm": [("translation.gctransformer", "boehm")], + "boehm": [("translation.gctransformer", "boehm"), + ("translation.continuation", False)], # breaks "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], }, @@ -109,7 +113,7 @@ BoolOption("jit", "generate a JIT", default=False, suggests=[("translation.gc", DEFL_GC), - ("translation.gcrootfinder", "asmgcc"), + ("translation.gcrootfinder", DEFL_ROOTFINDER_WITHJIT), ("translation.list_comprehension_operations", True)]), ChoiceOption("jit_backend", "choose the backend for the JIT", ["auto", "x86", "x86-without-sse2", "llvm"], @@ -385,8 +389,6 @@ config.translation.suggest(withsmallfuncsets=5) elif word == 'jit': config.translation.suggest(jit=True) - if config.translation.stackless: - raise NotImplementedError("JIT conflicts with stackless for now") elif word == 'removetypeptr': config.translation.suggest(gcremovetypeptr=True) else: diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/_ref.txt b/pypy/doc/_ref.txt --- a/pypy/doc/_ref.txt +++ b/pypy/doc/_ref.txt @@ -1,11 +1,10 @@ .. _`ctypes_configure/doc/sample.py`: https://bitbucket.org/pypy/pypy/src/default/ctypes_configure/doc/sample.py .. _`demo/`: https://bitbucket.org/pypy/pypy/src/default/demo/ -.. _`demo/pickle_coroutine.py`: https://bitbucket.org/pypy/pypy/src/default/demo/pickle_coroutine.py .. _`lib-python/`: https://bitbucket.org/pypy/pypy/src/default/lib-python/ .. _`lib-python/2.7/dis.py`: https://bitbucket.org/pypy/pypy/src/default/lib-python/2.7/dis.py .. _`lib_pypy/`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/ +.. _`lib_pypy/greenlet.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/greenlet.py .. _`lib_pypy/pypy_test/`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/pypy_test/ -.. _`lib_pypy/stackless.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/stackless.py .. _`lib_pypy/tputil.py`: https://bitbucket.org/pypy/pypy/src/default/lib_pypy/tputil.py .. _`pypy/annotation`: .. _`pypy/annotation/`: https://bitbucket.org/pypy/pypy/src/default/pypy/annotation/ @@ -55,7 +54,6 @@ .. _`pypy/module`: .. _`pypy/module/`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/ .. _`pypy/module/__builtin__/__init__.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/__builtin__/__init__.py -.. _`pypy/module/_stackless/test/test_composable_coroutine.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/module/_stackless/test/test_composable_coroutine.py .. _`pypy/objspace`: .. _`pypy/objspace/`: https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/ .. _`pypy/objspace/dump.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/dump.py @@ -117,6 +115,7 @@ .. _`pypy/translator/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/ .. _`pypy/translator/backendopt/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/backendopt/ .. _`pypy/translator/c/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/ +.. _`pypy/translator/c/src/stacklet/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/src/stacklet/ .. _`pypy/translator/cli/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/cli/ .. _`pypy/translator/goal/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/goal/ .. _`pypy/translator/jvm/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/jvm/ diff --git a/pypy/doc/architecture.rst b/pypy/doc/architecture.rst --- a/pypy/doc/architecture.rst +++ b/pypy/doc/architecture.rst @@ -153,7 +153,7 @@ * Optionally, `various transformations`_ can then be applied which, for example, perform optimizations such as inlining, add capabilities - such as stackless_-style concurrency, or insert code for the + such as stackless-style concurrency (deprecated), or insert code for the `garbage collector`_. * Then, the graphs are converted to source code for the target platform @@ -255,7 +255,6 @@ .. _Python: http://docs.python.org/reference/ .. _Psyco: http://psyco.sourceforge.net -.. _stackless: stackless.html .. _`generate Just-In-Time Compilers`: jit/index.html .. _`JIT Generation in PyPy`: jit/index.html .. _`implement your own interpreter`: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -929,6 +929,19 @@ located in the ``py/bin/`` directory. For switches to modify test execution pass the ``-h`` option. +Coverage reports +---------------- + +In order to get coverage reports the `pytest-cov`_ plugin is included. +it adds some extra requirements ( coverage_ and `cov-core`_ ) +and can once they are installed coverage testing can be invoked via:: + + python test_all.py --cov file_or_direcory_to_cover file_or_directory + +.. _`pytest-cov`: http://pypi.python.org/pypi/pytest-cov +.. _`coverage`: http://pypi.python.org/pypi/coverage +.. _`cov-core`: http://pypi.python.org/pypi/cov-core + Test conventions ---------------- diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.5' +version = '1.6' # The full version, including alpha/beta/rc tags. -release = '1.5' +release = '1.6' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. _taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/config/objspace.std.withidentitydict.txt b/pypy/doc/config/objspace.std.withidentitydict.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withidentitydict.txt @@ -0,0 +1,21 @@ +============================= +objspace.std.withidentitydict +============================= + +* **name:** withidentitydict + +* **description:** enable a dictionary strategy for "by identity" comparisons + +* **command-line:** --objspace-std-withidentitydict + +* **command-line for negation:** --no-objspace-std-withidentitydict + +* **option type:** boolean option + +* **default:** True + + +Enable a dictionary strategy specialized for instances of classes which +compares "by identity", which is the default unless you override ``__hash__``, +``__eq__`` or ``__cmp__``. This strategy will be used only with new-style +classes. diff --git a/pypy/doc/config/objspace.usemodules._stackless.txt b/pypy/doc/config/objspace.usemodules._continuation.txt rename from pypy/doc/config/objspace.usemodules._stackless.txt rename to pypy/doc/config/objspace.usemodules._continuation.txt --- a/pypy/doc/config/objspace.usemodules._stackless.txt +++ b/pypy/doc/config/objspace.usemodules._continuation.txt @@ -1,6 +1,4 @@ -Use the '_stackless' module. +Use the '_continuation' module. -Exposes the `stackless` primitives, and also implies a stackless build. -See also :config:`translation.stackless`. - -.. _`stackless`: ../stackless.html +Exposes the `continulet` app-level primitives. +See also :config:`translation.continuation`. diff --git a/pypy/doc/config/objspace.usemodules.pwd.txt b/pypy/doc/config/objspace.usemodules.pwd.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.usemodules.pwd.txt @@ -0,0 +1,2 @@ +Use the 'pwd' module. +This module is expected to be fully working. diff --git a/pypy/doc/config/translation.stackless.txt b/pypy/doc/config/translation.continuation.txt rename from pypy/doc/config/translation.stackless.txt rename to pypy/doc/config/translation.continuation.txt --- a/pypy/doc/config/translation.stackless.txt +++ b/pypy/doc/config/translation.continuation.txt @@ -1,5 +1,2 @@ -Run the `stackless transform`_ on each generated graph, which enables the use -of coroutines at RPython level and the "stackless" module when translating -PyPy. - -.. _`stackless transform`: ../stackless.html +Enable the use of a stackless-like primitive called "stacklet". +In PyPy, this is exposed at app-level by the "_continuation" module. diff --git a/pypy/doc/config/translation.dont_write_c_files.txt b/pypy/doc/config/translation.dont_write_c_files.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.dont_write_c_files.txt @@ -0,0 +1,4 @@ +write the generated C files to ``/dev/null`` instead of to the disk. Useful if +you want to use translate.py as a benchmark and don't want to access the disk. + +.. _`translation documentation`: ../translation.html diff --git a/pypy/doc/config/translation.gc.txt b/pypy/doc/config/translation.gc.txt --- a/pypy/doc/config/translation.gc.txt +++ b/pypy/doc/config/translation.gc.txt @@ -1,4 +1,6 @@ -Choose the Garbage Collector used by the translated program: +Choose the Garbage Collector used by the translated program. +The good performing collectors are "hybrid" and "minimark". +The default is "minimark". - "ref": reference counting. Takes very long to translate and the result is slow. @@ -11,3 +13,12 @@ older generation. - "boehm": use the Boehm conservative GC. + + - "hybrid": a hybrid collector of "generation" together with a + mark-n-sweep old space + + - "markcompact": a slow, but memory-efficient collector, + influenced e.g. by Smalltalk systems. + + - "minimark": a generational mark-n-sweep collector with good + performance. Includes page marking for large arrays. diff --git a/pypy/doc/contributor.rst b/pypy/doc/contributor.rst --- a/pypy/doc/contributor.rst +++ b/pypy/doc/contributor.rst @@ -9,22 +9,22 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Antonio Cuni Amaury Forgeot d'Arc - Antonio Cuni Samuele Pedroni Michael Hudson Holger Krekel + Benjamin Peterson Christian Tismer - Benjamin Peterson + Hakan Ardo + Alex Gaynor Eric van Riet Paap - Anders Chrigström - Håkan Ardö + Anders Chrigstrom + David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer - Alex Gaynor - David Schneider - Aurelién Campeas + Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann @@ -35,16 +35,17 @@ Bartosz Skowron Jakub Gustak Guido Wesdorp + Daniel Roberts Adrien Di Mascio Laura Creighton Ludovic Aubry Niko Matsakis - Daniel Roberts Jason Creighton - Jacob Hallén + Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij + Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -55,9 +56,13 @@ Alexandre Fayolle Marius Gedminas Simon Burton + Justin Peel Jean-Paul Calderone John Witulski + Lukas Diekmann + holger krekel Wim Lavrijsen + Dario Bertini Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum @@ -69,15 +74,16 @@ Georg Brandl Gerald Klix Wanja Saatkamp + Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz - Dario Bertini David Malcolm Eugene Oden Henry Mason + Sven Hager Lukas Renggli + Ilya Osadchiy Guenter Jantzen - Ronny Pfannschmidt Bert Freudenberg Amit Regmi Ben Young @@ -94,8 +100,8 @@ Jared Grubb Karl Bartel Gabriel Lavoie + Victor Stinner Brian Dorsey - Victor Stinner Stuart Williams Toby Watson Antoine Pitrou @@ -106,19 +112,23 @@ Jonathan David Riehl Elmo Mäntynen Anders Qvist - Beatrice Düring + Beatrice During Alexander Sedov + Timo Paulssen + Corbin Simpson Vincent Legoll + Romain Guillebert Alan McIntyre - Romain Guillebert Alex Perry Jens-Uwe Mager + Simon Cross Dan Stromberg - Lukas Diekmann + Guillebert Romain Carl Meyer Pieter Zieschang Alejandro J. Cura Sylvain Thenault + Christoph Gerum Travis Francis Athougies Henrik Vendelbo Lutz Paelike @@ -129,6 +139,7 @@ Miguel de Val Borro Ignas Mikalajunas Artur Lisiecki + Philip Jenvey Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -137,24 +148,29 @@ Gustavo Niemeyer William Leslie Akira Li - Kristján Valur Jónsson + Kristjan Valur Jonsson Bobby Impollonia + Michael Hudson-Doyle Andrew Thompson Anders Sigfridsson + Floris Bruynooghe Jacek Generowicz Dan Colish - Sven Hager Zooko Wilcox-O Hearn + Dan Villiom Podlaski Christiansen Anders Hammarquist + Chris Lambacher Dinu Gherman Dan Colish + Brett Cannon Daniel Neuhäuser Michael Chermside Konrad Delong Anna Ravencroft Greg Price Armin Ronacher + Christian Muirhead Jim Baker - Philip Jenvey Rodrigo Araújo + Romain Guillebert diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -24,6 +24,7 @@ _bisect _codecs _collections + `_continuation`_ `_ffi`_ _hashlib _io @@ -84,9 +85,12 @@ _winreg - Extra module with Stackless_ only: - - _stackless + Note that only some of these modules are built-in in a typical + CPython installation, and the rest is from non built-in extension + modules. This means that e.g. ``import parser`` will, on CPython, + find a local file ``parser.py``, while ``import sys`` will not find a + local file ``sys.py``. In PyPy the difference does not exist: all + these modules are built-in. * Supported by being rewritten in pure Python (possibly using ``ctypes``): see the `lib_pypy/`_ directory. Examples of modules that we @@ -101,11 +105,11 @@ .. the nonstandard modules are listed below... .. _`__pypy__`: __pypy__-module.html +.. _`_continuation`: stackless.html .. _`_ffi`: ctypes-implementation.html .. _`_rawffi`: ctypes-implementation.html .. _`_minimal_curses`: config/objspace.usemodules._minimal_curses.html .. _`cpyext`: http://morepypy.blogspot.com/2010/04/using-cpython-extension-modules-with.html -.. _Stackless: stackless.html Differences related to garbage collection strategies @@ -211,6 +215,38 @@ >>>> print d1['a'] 42 +Mutating classes of objects which are already used as dictionary keys +--------------------------------------------------------------------- + +Consider the following snippet of code:: + + class X(object): + pass + + def __evil_eq__(self, other): + print 'hello world' + return False + + def evil(y): + d = {x(): 1} + X.__eq__ = __evil_eq__ + d[y] # might trigger a call to __eq__? + +In CPython, __evil_eq__ **might** be called, although there is no way to write +a test which reliably calls it. It happens if ``y is not x`` and ``hash(y) == +hash(x)``, where ``hash(x)`` is computed when ``x`` is inserted into the +dictionary. If **by chance** the condition is satisfied, then ``__evil_eq__`` +is called. + +PyPy uses a special strategy to optimize dictionaries whose keys are instances +of user-defined classes which do not override the default ``__hash__``, +``__eq__`` and ``__cmp__``: when using this strategy, ``__eq__`` and +``__cmp__`` are never called, but instead the lookup is done by identity, so +in the case above it is guaranteed that ``__eq__`` won't be called. + +Note that in all other cases (e.g., if you have a custom ``__hash__`` and +``__eq__`` in ``y``) the behavior is exactly the same as CPython. + Ignored exceptions ----------------------- @@ -248,7 +284,14 @@ never a dictionary as it sometimes is in CPython. Assigning to ``__builtins__`` has no effect. -* object identity of immutable keys in dictionaries is not necessarily preserved. - Never compare immutable objects with ``is``. +* Do not compare immutable objects with ``is``. For example on CPython + it is true that ``x is 0`` works, i.e. does the same as ``type(x) is + int and x == 0``, but it is so by accident. If you do instead + ``x is 1000``, then it stops working, because 1000 is too large and + doesn't come from the internal cache. In PyPy it fails to work in + both cases, because we have no need for a cache at all. + +* Also, object identity of immutable keys in dictionaries is not necessarily + preserved. .. include:: _ref.txt diff --git a/pypy/doc/faq.rst b/pypy/doc/faq.rst --- a/pypy/doc/faq.rst +++ b/pypy/doc/faq.rst @@ -315,6 +315,28 @@ .. _`Andrew Brown's tutorial`: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html +--------------------------------------------------------- +Can RPython modules for PyPy be translated independently? +--------------------------------------------------------- + +No, you have to rebuild the entire interpreter. This means two things: + +* It is imperative to use test-driven development. You have to test + exhaustively your module in pure Python, before even attempting to + translate it. Once you translate it, you should have only a few typing + issues left to fix, but otherwise the result should work out of the box. + +* Second, and perhaps most important: do you have a really good reason + for writing the module in RPython in the first place? Nowadays you + should really look at alternatives, like writing it in pure Python, + using ctypes if it needs to call C code. Other alternatives are being + developed too (as of summer 2011), like a Cython binding. + +In this context it is not that important to be able to translate +RPython modules independently of translating the complete interpreter. +(It could be done given enough efforts, but it's a really serious +undertaking. Consider it as quite unlikely for now.) + ---------------------------------------------------------- Why does PyPy draw a Mandelbrot fractal while translating? ---------------------------------------------------------- diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -147,7 +147,7 @@ You can read more about them at the start of `pypy/rpython/memory/gc/minimark.py`_. -In more details: +In more detail: - The small newly malloced objects are allocated in the nursery (case 1). All objects living in the nursery are "young". diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -32,7 +32,10 @@ .. _`windows document`: windows.html You can translate the whole of PyPy's Python interpreter to low level C code, -or `CLI code`_. +or `CLI code`_. If you intend to build using gcc, check to make sure that +the version you have is not 4.2 or you will run into `this bug`_. + +.. _`this bug`: https://bugs.launchpad.net/ubuntu/+source/gcc-4.2/+bug/187391 1. First `download a pre-built PyPy`_ for your architecture which you will use to translate your Python interpreter. It is, of course, possible to @@ -64,7 +67,6 @@ * ``libssl-dev`` (for the optional ``_ssl`` module) * ``libgc-dev`` (for the Boehm garbage collector: only needed when translating with `--opt=0, 1` or `size`) * ``python-sphinx`` (for the optional documentation build. You need version 1.0.7 or later) - * ``python-greenlet`` (for the optional stackless support in interpreted mode/testing) 3. Translation is time-consuming -- 45 minutes on a very fast machine -- @@ -102,7 +104,7 @@ $ ./pypy-c Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2 + [PyPy 1.6.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 @@ -117,19 +119,8 @@ Installation_ below. The ``translate.py`` script takes a very large number of options controlling -what to translate and how. See ``translate.py -h``. Some of the more -interesting options (but for now incompatible with the JIT) are: - - * ``--stackless``: this produces a pypy-c that includes features - inspired by `Stackless Python `__. - - * ``--gc=boehm|ref|marknsweep|semispace|generation|hybrid|minimark``: - choose between using - the `Boehm-Demers-Weiser garbage collector`_, our reference - counting implementation or one of own collector implementations - (the default depends on the optimization level but is usually - ``minimark``). - +what to translate and how. See ``translate.py -h``. The default options +should be suitable for mostly everybody by now. Find a more detailed description of the various options in our `configuration sections`_. @@ -162,7 +153,7 @@ $ ./pypy-cli Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0] on linux2 + [PyPy 1.6.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``distopian and utopian chairs'' >>>> @@ -199,7 +190,7 @@ $ ./pypy-jvm Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.5.0-alpha0] on linux2 + [PyPy 1.6.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``# assert did not crash'' >>>> @@ -238,7 +229,7 @@ the ``bin/pypy`` executable. To install PyPy system wide on unix-like systems, it is recommended to put the -whole hierarchy alone (e.g. in ``/opt/pypy1.5``) and put a symlink to the +whole hierarchy alone (e.g. in ``/opt/pypy1.6``) and put a symlink to the ``pypy`` executable into ``/usr/bin`` or ``/usr/local/bin`` If the executable fails to find suitable libraries, it will report diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,11 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.5-linux.tar.bz2 + $ tar xf pypy-1.6-linux.tar.bz2 - $ ./pypy-1.5-linux/bin/pypy + $ ./pypy-1.6/bin/pypy Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.5.0-alpha0 with GCC 4.4.3] on linux2 + [PyPy 1.6.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -73,16 +73,16 @@ $ curl -O http://python-distribute.org/distribute_setup.py - $ curl -O https://github.com/pypa/pip/raw/master/contrib/get-pip.py + $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.5-linux/bin/pypy distribute_setup.py + $ ./pypy-1.6/bin/pypy distribute_setup.py - $ ./pypy-1.5-linux/bin/pypy get-pip.py + $ ./pypy-1.6/bin/pypy get-pip.py - $ ./pypy-1.5-linux/bin/pip install pygments # for example + $ ./pypy-1.6/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.5-linux/site-packages``, and -the scripts in ``pypy-1.5-linux/bin``. +3rd party libraries will be installed in ``pypy-1.6/site-packages``, and +the scripts in ``pypy-1.6/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -21,8 +21,8 @@ Release Steps ---------------- -* at code freeze make a release branch under - http://codepeak.net/svn/pypy/release/x.y(.z). IMPORTANT: bump the +* at code freeze make a release branch using release-x.x.x in mercurial + IMPORTANT: bump the pypy version number in module/sys/version.py and in module/cpyext/include/patchlevel.h, notice that the branch will capture the revision number of this change for the release; @@ -42,18 +42,11 @@ JIT: windows, linux, os/x no JIT: windows, linux, os/x sandbox: linux, os/x - stackless: windows, linux, os/x * write release announcement pypy/doc/release-x.y(.z).txt the release announcement should contain a direct link to the download page * update pypy.org (under extradoc/pypy.org), rebuild and commit -* update http://codespeak.net/pypy/trunk: - code0> + chmod -R yourname:users /www/codespeak.net/htdocs/pypy/trunk - local> cd ..../pypy/doc && py.test - local> cd ..../pypy - local> rsync -az doc codespeak.net:/www/codespeak.net/htdocs/pypy/trunk/pypy/ - * post announcement on morepypy.blogspot.com * send announcements to pypy-dev, python-list, python-announce, python-dev ... diff --git a/pypy/doc/index-of-release-notes.rst b/pypy/doc/index-of-release-notes.rst --- a/pypy/doc/index-of-release-notes.rst +++ b/pypy/doc/index-of-release-notes.rst @@ -16,3 +16,4 @@ release-1.4.0beta.rst release-1.4.1.rst release-1.5.0.rst + release-1.6.0.rst diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,14 +15,12 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.5`_: the latest official release +* `Release 1.6`_: the latest official release * `PyPy Blog`_: news and status info about PyPy * `Papers`_: Academic papers, talks, and related projects -* `Videos`_: Videos of PyPy talks and presentations - * `speed.pypy.org`_: Daily benchmarks of how fast PyPy is * `potential project ideas`_: In case you want to get your feet wet... @@ -35,7 +33,7 @@ * `Differences between PyPy and CPython`_ * `What PyPy can do for your objects`_ - * `Stackless and coroutines`_ + * `Continulets and greenlets`_ * `JIT Generation in PyPy`_ * `Sandboxing Python code`_ @@ -77,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.5`: http://pypy.org/download.html +.. _`Release 1.6`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -122,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.5`__. +instead of the latest release, which is `1.6`__. -.. __: release-1.5.0.html +.. __: release-1.6.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -292,8 +290,6 @@ `pypy/translator/jvm/`_ the Java backend -`pypy/translator/stackless/`_ the `Stackless Transform`_ - `pypy/translator/tool/`_ helper tools for translation, including the Pygame `graph viewer`_ @@ -313,12 +309,11 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. _`Differences between PyPy and CPython`: cpython_differences.html .. _`What PyPy can do for your objects`: objspace-proxies.html -.. _`Stackless and coroutines`: stackless.html +.. _`Continulets and greenlets`: stackless.html .. _StdObjSpace: objspace.html#the-standard-object-space .. _`abstract interpretation`: http://en.wikipedia.org/wiki/Abstract_interpretation .. _`rpython`: coding-guide.html#rpython @@ -337,7 +332,6 @@ .. _`low-level type system`: rtyper.html#low-level-type .. _`object-oriented type system`: rtyper.html#oo-type .. _`garbage collector`: garbage_collection.html -.. _`Stackless Transform`: translation.html#the-stackless-transform .. _`main PyPy-translation scripts`: getting-started-python.html#translating-the-pypy-python-interpreter .. _`.NET`: http://www.microsoft.com/net/ .. _Mono: http://www.mono-project.com/ diff --git a/pypy/doc/jit/pyjitpl5.rst b/pypy/doc/jit/pyjitpl5.rst --- a/pypy/doc/jit/pyjitpl5.rst +++ b/pypy/doc/jit/pyjitpl5.rst @@ -103,7 +103,7 @@ The meta-interpreter starts interpreting the JIT bytecode. Each operation is executed and then recorded in a list of operations, called the trace. -Operations can have a list of boxes that operate on, arguments. Some operations +Operations can have a list of boxes they operate on, arguments. Some operations (like GETFIELD and GETARRAYITEM) also have special objects that describe how their arguments are laid out in memory. All possible operations generated by tracing are listed in metainterp/resoperation.py. When a (interpreter-level) diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -48,17 +48,23 @@ .. image:: image/jitviewer.png -We would like to add one level to this hierarchy, by showing the generated -machine code for each jit operation. The necessary information is already in -the log file produced by the JIT, so it is "only" a matter of teaching the -jitviewer to display it. Ideally, the machine code should be hidden by -default and viewable on request. - The jitviewer is a web application based on flask and jinja2 (and jQuery on the client): if you have great web developing skills and want to help PyPy, this is an ideal task to get started, because it does not require any deep knowledge of the internals. +Optimized Unicode Representation +-------------------------------- + +CPython 3.3 will use an `optimized unicode representation`_ which switches between +different ways to represent a unicode string, depending on whether the string +fits into ASCII, has only two-byte characters or needs four-byte characters. + +The actual details would be rather differen in PyPy, but we would like to have +the same optimization implemented. + +.. _`optimized unicode representation`: http://www.python.org/dev/peps/pep-0393/ + Translation Toolchain --------------------- diff --git a/pypy/doc/release-1.6.0.rst b/pypy/doc/release-1.6.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.6.0.rst @@ -0,0 +1,95 @@ +======================== +PyPy 1.6 - kickass panda +======================== + +We're pleased to announce the 1.6 release of PyPy. This release brings a lot +of bugfixes and performance improvements over 1.5, and improves support for +Windows 32bit and OS X 64bit. This version fully implements Python 2.7.1 and +has beta level support for loading CPython C extensions. You can download it +here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7.1. It's fast (`pypy 1.5 and cpython 2.6.2`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64 or Mac OS X. Windows 32 +is beta (it roughly works but a lot of small issues have not been fixed so +far). Windows 64 is not yet supported. + +The main topics of this release are speed and stability: on average on +our benchmark suite, PyPy 1.6 is between **20% and 30%** faster than PyPy 1.5, +which was already much faster than CPython on our set of benchmarks. + +The speed improvements have been made possible by optimizing many of the +layers which compose PyPy. In particular, we improved: the Garbage Collector, +the JIT warmup time, the optimizations performed by the JIT, the quality of +the generated machine code and the implementation of our Python interpreter. + +.. _`pypy 1.5 and cpython 2.6.2`: http://speed.pypy.org + + +Highlights +========== + +* Numerous performance improvements, overall giving considerable speedups: + + - better GC behavior when dealing with very large objects and arrays + + - **fast ctypes:** now calls to ctypes functions are seen and optimized + by the JIT, and they are up to 60 times faster than PyPy 1.5 and 10 times + faster than CPython + + - improved generators(1): simple generators now are inlined into the caller + loop, making performance up to 3.5 times faster than PyPy 1.5. + + - improved generators(2): thanks to other optimizations, even generators + that are not inlined are between 10% and 20% faster than PyPy 1.5. + + - faster warmup time for the JIT + + - JIT support for single floats (e.g., for ``array('f')``) + + - optimized dictionaries: the internal representation of dictionaries is now + dynamically selected depending on the type of stored objects, resulting in + faster code and smaller memory footprint. For example, dictionaries whose + keys are all strings, or all integers. Other dictionaries are also smaller + due to bugfixes. + +* JitViewer: this is the first official release which includes the JitViewer, + a web-based tool which helps you to see which parts of your Python code have + been compiled by the JIT, down until the assembler. The `jitviewer`_ 0.1 has + already been release and works well with PyPy 1.6. + +* The CPython extension module API has been improved and now supports many + more extensions. For information on which one are supported, please refer to + our `compatibility wiki`_. + +* Multibyte encoding support: this was of of the last areas in which we were + still behind CPython, but now we fully support them. + +* Preliminary support for NumPy: this release includes a preview of a very + fast NumPy module integrated with the PyPy JIT. Unfortunately, this does + not mean that you can expect to take an existing NumPy program and run it on + PyPy, because the module is still unfinished and supports only some of the + numpy API. However, barring some details, what works should be + blazingly fast :-) + +* Bugfixes: since the 1.5 release we fixed 53 bugs in our `bug tracker`_, not + counting the numerous bugs that were found and reported through other + channels than the bug tracker. + +Cheers, + +Hakan Ardo, Carl Friedrich Bolz, Laura Creighton, Antonio Cuni, +Maciej Fijalkowski, Amaury Forgeot d'Arc, Alex Gaynor, +Armin Rigo and the PyPy team + +.. _`jitviewer`: http://morepypy.blogspot.com/2011/08/visualization-of-jitted-code.html +.. _`bug tracker`: https://bugs.pypy.org +.. _`compatibility wiki`: https://bitbucket.org/pypy/compatibility/wiki/Home + diff --git a/pypy/doc/rlib.rst b/pypy/doc/rlib.rst --- a/pypy/doc/rlib.rst +++ b/pypy/doc/rlib.rst @@ -134,69 +134,6 @@ a hierarchy of Address classes, in a typical static-OO-programming style. -``rstack`` -========== - -The `pypy/rlib/rstack.py`_ module allows an RPython program to control its own execution stack. -This is only useful if the program is translated using stackless. An old -description of the exposed functions is below. - -We introduce an RPython type ``frame_stack_top`` and a built-in function -``yield_current_frame_to_caller()`` that work as follows (see example below): - -* The built-in function ``yield_current_frame_to_caller()`` causes the current - function's state to be captured in a new ``frame_stack_top`` object that is - returned to the parent. Only one frame, the current one, is captured this - way. The current frame is suspended and the caller continues to run. Note - that the caller is only resumed once: when - ``yield_current_frame_to_caller()`` is called. See below. - -* A ``frame_stack_top`` object can be jumped to by calling its ``switch()`` - method with no argument. - -* ``yield_current_frame_to_caller()`` and ``switch()`` themselves return a new - ``frame_stack_top`` object: the freshly captured state of the caller of the - source ``switch()`` that was just executed, or None in the case described - below. - -* the function that called ``yield_current_frame_to_caller()`` also has a - normal return statement, like all functions. This statement must return - another ``frame_stack_top`` object. The latter is *not* returned to the - original caller; there is no way to return several times to the caller. - Instead, it designates the place to which the execution must jump, as if by - a ``switch()``. The place to which we jump this way will see a None as the - source frame stack top. - -* every frame stack top must be resumed once and only once. Not resuming - it at all causes a leak. Resuming it several times causes a crash. - -* a function that called ``yield_current_frame_to_caller()`` should not raise. - It would have no implicit parent frame to propagate the exception to. That - would be a crashingly bad idea. - -The following example would print the numbers from 1 to 7 in order:: - - def g(): - print 2 - frametop_before_5 = yield_current_frame_to_caller() - print 4 - frametop_before_7 = frametop_before_5.switch() - print 6 - return frametop_before_7 - - def f(): - print 1 - frametop_before_4 = g() - print 3 - frametop_before_6 = frametop_before_4.switch() - print 5 - frametop_after_return = frametop_before_6.switch() - print 7 - assert frametop_after_return is None - - f() - - ``streamio`` ============ diff --git a/pypy/doc/stackless.rst b/pypy/doc/stackless.rst --- a/pypy/doc/stackless.rst +++ b/pypy/doc/stackless.rst @@ -8,446 +8,312 @@ ================ PyPy can expose to its user language features similar to the ones -present in `Stackless Python`_: **no recursion depth limit**, and the -ability to write code in a **massively concurrent style**. It actually -exposes three different paradigms to choose from: +present in `Stackless Python`_: the ability to write code in a +**massively concurrent style**. (It does not (any more) offer the +ability to run with no `recursion depth limit`_, but the same effect +can be achieved indirectly.) -* `Tasklets and channels`_; +This feature is based on a custom primitive called a continulet_. +Continulets can be directly used by application code, or it is possible +to write (entirely at app-level) more user-friendly interfaces. -* Greenlets_; +Currently PyPy implements greenlets_ on top of continulets. It would be +easy to implement tasklets and channels as well, emulating the model +of `Stackless Python`_. -* Plain coroutines_. +Continulets are extremely light-weight, which means that PyPy should be +able to handle programs containing large amounts of them. However, due +to an implementation restriction, a PyPy compiled with +``--gcrootfinder=shadowstack`` consumes at least one page of physical +memory (4KB) per live continulet, and half a megabyte of virtual memory +on 32-bit or a complete megabyte on 64-bit. Moreover, the feature is +only available (so far) on x86 and x86-64 CPUs; for other CPUs you need +to add a short page of custom assembler to +`pypy/translator/c/src/stacklet/`_. -All of them are extremely light-weight, which means that PyPy should be -able to handle programs containing large amounts of coroutines, tasklets -and greenlets. +Theory +====== -Requirements -++++++++++++++++ +The fundamental idea is that, at any point in time, the program happens +to run one stack of frames (or one per thread, in case of +multi-threading). To see the stack, start at the top frame and follow +the chain of ``f_back`` until you reach the bottom frame. From the +point of view of one of these frames, it has a ``f_back`` pointing to +another frame (unless it is the bottom frame), and it is itself being +pointed to by another frame (unless it is the top frame). -If you are running py.py on top of CPython, then you need to enable -the _stackless module by running it as follows:: +The theory behind continulets is to literally take the previous sentence +as definition of "an O.K. situation". The trick is that there are +O.K. situations that are more complex than just one stack: you will +always have one stack, but you can also have in addition one or more +detached *cycles* of frames, such that by following the ``f_back`` chain +you run in a circle. But note that these cycles are indeed completely +detached: the top frame (the currently running one) is always the one +which is not the ``f_back`` of anybody else, and it is always the top of +a stack that ends with the bottom frame, never a part of these extra +cycles. - py.py --withmod-_stackless +How do you create such cycles? The fundamental operation to do so is to +take two frames and *permute* their ``f_back`` --- i.e. exchange them. +You can permute any two ``f_back`` without breaking the rule of "an O.K. +situation". Say for example that ``f`` is some frame halfway down the +stack, and you permute its ``f_back`` with the ``f_back`` of the top +frame. Then you have removed from the normal stack all intermediate +frames, and turned them into one stand-alone cycle. By doing the same +permutation again you restore the original situation. -This is implemented internally using greenlets, so it only works on a -platform where `greenlets`_ are supported. A few features do -not work this way, though, and really require a translated -``pypy-c``. +In practice, in PyPy, you cannot change the ``f_back`` of an abitrary +frame, but only of frames stored in ``continulets``. -To obtain a translated version of ``pypy-c`` that includes Stackless -support, run translate.py as follows:: - - cd pypy/translator/goal - python translate.py --stackless +Continulets are internally implemented using stacklets_. Stacklets are a +bit more primitive (they are really one-shot continuations), but that +idea only works in C, not in Python. The basic idea of continulets is +to have at any point in time a complete valid stack; this is important +e.g. to correctly propagate exceptions (and it seems to give meaningful +tracebacks too). Application level interface ============================= -A stackless PyPy contains a module called ``stackless``. The interface -exposed by this module have not been refined much, so it should be -considered in-flux (as of 2007). -So far, PyPy does not provide support for ``stackless`` in a threaded -environment. This limitation is not fundamental, as previous experience -has shown, so supporting this would probably be reasonably easy. +.. _continulet: -An interesting point is that the same ``stackless`` module can provide -a number of different concurrency paradigms at the same time. From a -theoretical point of view, none of above-mentioned existing three -paradigms considered on its own is new: two of them are from previous -Python work, and the third one is a variant of the classical coroutine. -The new part is that the PyPy implementation manages to provide all of -them and let the user implement more. Moreover - and this might be an -important theoretical contribution of this work - we manage to provide -these concurrency concepts in a "composable" way. In other words, it -is possible to naturally mix in a single application multiple -concurrency paradigms, and multiple unrelated usages of the same -paradigm. This is discussed in the Composability_ section below. +Continulets ++++++++++++ +A translated PyPy contains by default a module called ``_continuation`` +exporting the type ``continulet``. A ``continulet`` object from this +module is a container that stores a "one-shot continuation". It plays +the role of an extra frame you can insert in the stack, and whose +``f_back`` can be changed. -Infinite recursion -++++++++++++++++++ +To make a continulet object, call ``continulet()`` with a callable and +optional extra arguments. -Any stackless PyPy executable natively supports recursion that is only -limited by the available memory. As in normal Python, though, there is -an initial recursion limit (which is 5000 in all pypy-c's, and 1000 in -CPython). It can be changed with ``sys.setrecursionlimit()``. With a -stackless PyPy, any value is acceptable - use ``sys.maxint`` for -unlimited. +Later, the first time you ``switch()`` to the continulet, the callable +is invoked with the same continulet object as the extra first argument. +At that point, the one-shot continuation stored in the continulet points +to the caller of ``switch()``. In other words you have a perfectly +normal-looking stack of frames. But when ``switch()`` is called again, +this stored one-shot continuation is exchanged with the current one; it +means that the caller of ``switch()`` is suspended with its continuation +stored in the container, and the old continuation from the continulet +object is resumed. -In some cases, you can write Python code that causes interpreter-level -infinite recursion -- i.e. infinite recursion without going via -application-level function calls. It is possible to limit that too, -with ``_stackless.set_stack_depth_limit()``, or to unlimit it completely -by setting it to ``sys.maxint``. +The most primitive API is actually 'permute()', which just permutes the +one-shot continuation stored in two (or more) continulets. +In more details: -Coroutines -++++++++++ +* ``continulet(callable, *args, **kwds)``: make a new continulet. + Like a generator, this only creates it; the ``callable`` is only + actually called the first time it is switched to. It will be + called as follows:: -A Coroutine is similar to a very small thread, with no preemptive scheduling. -Within a family of coroutines, the flow of execution is explicitly -transferred from one to another by the programmer. When execution is -transferred to a coroutine, it begins to execute some Python code. When -it transfers execution away from itself it is temporarily suspended, and -when execution returns to it it resumes its execution from the -point where it was suspended. Conceptually, only one coroutine is -actively running at any given time (but see Composability_ below). + callable(cont, *args, **kwds) -The ``stackless.coroutine`` class is instantiated with no argument. -It provides the following methods and attributes: + where ``cont`` is the same continulet object. -* ``stackless.coroutine.getcurrent()`` + Note that it is actually ``cont.__init__()`` that binds + the continulet. It is also possible to create a not-bound-yet + continulet by calling explicitly ``continulet.__new__()``, and + only bind it later by calling explicitly ``cont.__init__()``. - Static method returning the currently running coroutine. There is a - so-called "main" coroutine object that represents the "outer" - execution context, where your main program started and where it runs - as long as it does not switch to another coroutine. +* ``cont.switch(value=None, to=None)``: start the continulet if + it was not started yet. Otherwise, store the current continuation + in ``cont``, and activate the target continuation, which is the + one that was previously stored in ``cont``. Note that the target + continuation was itself previously suspended by another call to + ``switch()``; this older ``switch()`` will now appear to return. + The ``value`` argument is any object that is carried to the target + and returned by the target's ``switch()``. -* ``coro.bind(callable, *args, **kwds)`` + If ``to`` is given, it must be another continulet object. In + that case, performs a "double switch": it switches as described + above to ``cont``, and then immediately switches again to ``to``. + This is different from switching directly to ``to``: the current + continuation gets stored in ``cont``, the old continuation from + ``cont`` gets stored in ``to``, and only then we resume the + execution from the old continuation out of ``to``. - Bind the coroutine so that it will execute ``callable(*args, - **kwds)``. The call is not performed immediately, but only the - first time we call the ``coro.switch()`` method. A coroutine must - be bound before it is switched to. When the coroutine finishes - (because the call to the callable returns), the coroutine exits and - implicitly switches back to another coroutine (its "parent"); after - this point, it is possible to bind it again and switch to it again. - (Which coroutine is the parent of which is not documented, as it is - likely to change when the interface is refined.) +* ``cont.throw(type, value=None, tb=None, to=None)``: similar to + ``switch()``, except that immediately after the switch is done, raise + the given exception in the target. -* ``coro.switch()`` +* ``cont.is_pending()``: return True if the continulet is pending. + This is False when it is not initialized (because we called + ``__new__`` and not ``__init__``) or when it is finished (because + the ``callable()`` returned). When it is False, the continulet + object is empty and cannot be ``switch()``-ed to. - Suspend the current (caller) coroutine, and resume execution in the - target coroutine ``coro``. +* ``permute(*continulets)``: a global function that permutes the + continuations stored in the given continulets arguments. Mostly + theoretical. In practice, using ``cont.switch()`` is easier and + more efficient than using ``permute()``; the latter does not on + its own change the currently running frame. -* ``coro.kill()`` - Kill ``coro`` by sending a CoroutineExit exception and switching - execution immediately to it. This exception can be caught in the - coroutine itself and can be raised from any call to ``coro.switch()``. - This exception isn't propagated to the parent coroutine. +Genlets ++++++++ -* ``coro.throw(type, value)`` +The ``_continuation`` module also exposes the ``generator`` decorator:: - Insert an exception in ``coro`` an resume switches execution - immediately to it. In the coroutine itself, this exception - will come from any call to ``coro.switch()`` and can be caught. If the - exception isn't caught, it will be propagated to the parent coroutine. + @generator + def f(cont, a, b): + cont.switch(a + b) + cont.switch(a + b + 1) -When a coroutine is garbage-collected, it gets the ``.kill()`` method sent to -it. This happens at the point the next ``.switch`` method is called, so the -target coroutine of this call will be executed only after the ``.kill`` has -finished. + for i in f(10, 20): + print i -Example -~~~~~~~ +This example prints 30 and 31. The only advantage over using regular +generators is that the generator itself is not limited to ``yield`` +statements that must all occur syntactically in the same function. +Instead, we can pass around ``cont``, e.g. to nested sub-functions, and +call ``cont.switch(x)`` from there. -Here is a classical producer/consumer example: an algorithm computes a -sequence of values, while another consumes them. For our purposes we -assume that the producer can generate several values at once, and the -consumer can process up to 3 values in a batch - it can also process -batches with fewer than 3 values without waiting for the producer (which -would be messy to express with a classical Python generator). :: +The ``generator`` decorator can also be applied to methods:: - def producer(lst): - while True: - ...compute some more values... - lst.extend(new_values) - coro_consumer.switch() - - def consumer(lst): - while True: - # First ask the producer for more values if needed - while len(lst) == 0: - coro_producer.switch() - # Process the available values in a batch, but at most 3 - batch = lst[:3] - del lst[:3] - ...process batch... - - # Initialize two coroutines with a shared list as argument - exchangelst = [] - coro_producer = coroutine() - coro_producer.bind(producer, exchangelst) - coro_consumer = coroutine() - coro_consumer.bind(consumer, exchangelst) - - # Start running the consumer coroutine - coro_consumer.switch() - - -Tasklets and channels -+++++++++++++++++++++ - -The ``stackless`` module also provides an interface that is roughly -compatible with the interface of the ``stackless`` module in `Stackless -Python`_: it contains ``stackless.tasklet`` and ``stackless.channel`` -classes. Tasklets are also similar to microthreads, but (like coroutines) -they don't actually run in parallel with other microthreads; instead, -they synchronize and exchange data with each other over Channels, and -these exchanges determine which Tasklet runs next. - -For usage reference, see the documentation on the `Stackless Python`_ -website. - -Note that Tasklets and Channels are implemented at application-level in -`lib_pypy/stackless.py`_ on top of coroutines_. You can refer to this -module for more details and API documentation. - -The stackless.py code tries to resemble the stackless C code as much -as possible. This makes the code somewhat unpythonic. - -Bird's eye view of tasklets and channels -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Tasklets are a bit like threads: they encapsulate a function in such a way that -they can be suspended/restarted any time. Unlike threads, they won't -run concurrently, but must be cooperative. When using stackless -features, it is vitally important that no action is performed that blocks -everything else. In particular, blocking input/output should be centralized -to a single tasklet. - -Communication between tasklets is done via channels. -There are three ways for a tasklet to give up control: - -1. call ``stackless.schedule()`` -2. send something over a channel -3. receive something from a channel - -A (live) tasklet can either be running, waiting to get scheduled, or be -blocked by a channel. - -Scheduling is done in strictly round-robin manner. A blocked tasklet -is removed from the scheduling queue and will be reinserted when it -becomes unblocked. - -Example -~~~~~~~ - -Here is a many-producers many-consumers example, where any consumer can -process the result of any producer. For this situation we set up a -single channel where all producer send, and on which all consumers -wait:: - - def producer(chan): - while True: - chan.send(...next value...) - - def consumer(chan): - while True: - x = chan.receive() - ...do something with x... - - # Set up the N producer and M consumer tasklets - common_channel = stackless.channel() - for i in range(N): - stackless.tasklet(producer, common_channel)() - for i in range(M): - stackless.tasklet(consumer, common_channel)() - - # Run it all - stackless.run() - -Each item sent over the channel is received by one of the waiting -consumers; which one is not specified. The producers block until their -item is consumed: the channel is not a queue, but rather a meeting point -which causes tasklets to block until both a consumer and a producer are -ready. In practice, the reason for having several consumers receiving -on a single channel is that some of the consumers can be busy in other -ways part of the time. For example, each consumer might receive a -database request, process it, and send the result to a further channel -before it asks for the next request. In this situation, further -requests can still be received by other consumers. + class X: + @generator + def f(self, cont, a, b): + ... Greenlets +++++++++ -A Greenlet is a kind of primitive Tasklet with a lower-level interface -and with exact control over the execution order. Greenlets are similar -to Coroutines, with a slightly different interface: greenlets put more -emphasis on a tree structure. The various greenlets of a program form a -precise tree, which fully determines their order of execution. +Greenlets are implemented on top of continulets in `lib_pypy/greenlet.py`_. +See the official `documentation of the greenlets`_. -For usage reference, see the `documentation of the greenlets`_. -The PyPy interface is identical. You should use ``greenlet.greenlet`` -instead of ``stackless.greenlet`` directly, because the greenlet library -can give you the latter when you ask for the former on top of PyPy. +Note that unlike the CPython greenlets, this version does not suffer +from GC issues: if the program "forgets" an unfinished greenlet, it will +always be collected at the next garbage collection. -PyPy's greenlets do not suffer from the cyclic GC limitation that the -CPython greenlets have: greenlets referencing each other via local -variables tend to leak on top of CPython (where it is mostly impossible -to do the right thing). It works correctly on top of PyPy. +Unimplemented features +++++++++++++++++++++++ -Coroutine Pickling -++++++++++++++++++ +The following features (present in some past Stackless version of PyPy) +are for the time being not supported any more: -Coroutines and tasklets can be pickled and unpickled, i.e. serialized to -a string of bytes for the purpose of storage or transmission. This -allows "live" coroutines or tasklets to be made persistent, moved to -other machines, or cloned in any way. The standard ``pickle`` module -works with coroutines and tasklets (at least in a translated ``pypy-c``; -unpickling live coroutines or tasklets cannot be easily implemented on -top of CPython). +* Tasklets and channels (currently ``stackless.py`` seems to import, + but you have tasklets on top of coroutines on top of greenlets on + top of continulets on top of stacklets, and it's probably not too + hard to cut two of these levels by adapting ``stackless.py`` to + use directly continulets) -To be able to achieve this result, we have to consider many objects that -are not normally pickleable in CPython. Here again, the `Stackless -Python`_ implementation has paved the way, and we follow the same -general design decisions: simple internal objects like bound method -objects and various kinds of iterators are supported; frame objects can -be fully pickled and unpickled -(by serializing a reference to the bytecode they are -running in addition to all the local variables). References to globals -and modules are pickled by name, similarly to references to functions -and classes in the traditional CPython ``pickle``. +* Coroutines (could be rewritten at app-level) -The "magic" part of this process is the implementation of the unpickling -of a chain of frames. The Python interpreter of PyPy uses -interpreter-level recursion to represent application-level calls. The -reason for this is that it tremendously simplifies the implementation of -the interpreter itself. Indeed, in Python, almost any operation can -potentially result in a non-tail-recursive call to another Python -function. This makes writing a non-recursive interpreter extremely -tedious; instead, we rely on lower-level transformations during the -translation process to control this recursion. This is the `Stackless -Transform`_, which is at the heart of PyPy's support for stackless-style -concurrency. +* Pickling and unpickling continulets (*) -At any point in time, a chain of Python-level frames corresponds to a -chain of interpreter-level frames (e.g. C frames in pypy-c), where each -single Python-level frame corresponds to one or a few interpreter-level -frames - depending on the length of the interpreter-level call chain -from one bytecode evaluation loop to the next (recursively invoked) one. +* Continuing execution of a continulet in a different thread (*) -This means that it is not sufficient to simply create a chain of Python -frame objects in the heap of a process before we can resume execution of -these newly built frames. We must recreate a corresponding chain of -interpreter-level frames. To this end, we have inserted a few *named -resume points* (see 3.2.4, in `D07.1 Massive Parallelism and Translation Aspects`_) in the Python interpreter of PyPy. This is the -motivation for implementing the interpreter-level primitives -``resume_state_create()`` and ``resume_state_invoke()``, the powerful -interface that allows an RPython program to artificially rebuild a chain -of calls in a reflective way, completely from scratch, and jump to it. +* Automatic unlimited stack (must be emulated__ so far) -.. _`D07.1 Massive Parallelism and Translation Aspects`: http://codespeak.net/pypy/extradoc/eu-report/D07.1_Massive_Parallelism_and_Translation_Aspects-2007-02-28.pdf +* Support for other CPUs than x86 and x86-64 -Example -~~~~~~~ +.. __: `recursion depth limit`_ -(See `demo/pickle_coroutine.py`_ for the complete source of this demo.) +(*) Pickling, as well as changing threads, could be implemented by using +a "soft" stack switching mode again. We would get either "hard" or +"soft" switches, similarly to Stackless Python 3rd version: you get a +"hard" switch (like now) when the C stack contains non-trivial C frames +to save, and a "soft" switch (like previously) when it contains only +simple calls from Python to Python. Soft-switched continulets would +also consume a bit less RAM, and the switch might be a bit faster too +(unsure about that; what is the Stackless Python experience?). -Consider a program which contains a part performing a long-running -computation:: - def ackermann(x, y): - if x == 0: - return y + 1 - if y == 0: - return ackermann(x - 1, 1) - return ackermann(x - 1, ackermann(x, y - 1)) +Recursion depth limit ++++++++++++++++++++++ -By using pickling, we can save the state of the computation while it is -running, for the purpose of restoring it later and continuing the -computation at another time or on a different machine. However, -pickling does not produce a whole-program dump: it can only pickle -individual coroutines. This means that the computation should be -started in its own coroutine:: +You can use continulets to emulate the infinite recursion depth present +in Stackless Python and in stackless-enabled older versions of PyPy. - # Make a coroutine that will run 'ackermann(3, 8)' - coro = coroutine() - coro.bind(ackermann, 3, 8) +The trick is to start a continulet "early", i.e. when the recursion +depth is very low, and switch to it "later", i.e. when the recursion +depth is high. Example:: - # Now start running the coroutine - result = coro.switch() + from _continuation import continulet -The coroutine itself must switch back to the main program when it needs -to be interrupted (we can only pickle suspended coroutines). Due to -current limitations this requires an explicit check in the -``ackermann()`` function:: + def invoke(_, callable, arg): + return callable(arg) - def ackermann(x, y): - if interrupt_flag: # test a global flag - main.switch() # and switch back to 'main' if it is set - if x == 0: - return y + 1 - if y == 0: - return ackermann(x - 1, 1) - return ackermann(x - 1, ackermann(x, y - 1)) + def bootstrap(c): + # this loop runs forever, at a very low recursion depth + callable, arg = c.switch() + while True: + # start a new continulet from here, and switch to + # it using an "exchange", i.e. a switch with to=. + to = continulet(invoke, callable, arg) + callable, arg = c.switch(to=to) -The global ``interrupt_flag`` would be set for example by a timeout, or -by a signal handler reacting to Ctrl-C, etc. It causes the coroutine to -transfer control back to the main program. The execution comes back -just after the line ``coro.switch()``, where we can pickle the coroutine -if necessary:: + c = continulet(bootstrap) + c.switch() - if not coro.is_alive: - print "finished; the result is:", result - else: - # save the state of the suspended coroutine - f = open('demo.pickle', 'w') - pickle.dump(coro, f) - f.close() -The process can then stop. At any later time, or on another machine, -we can reload the file and restart the coroutine with:: + def recursive(n): + if n == 0: + return ("ok", n) + if n % 200 == 0: + prev = c.switch((recursive, n - 1)) + else: + prev = recursive(n - 1) + return (prev[0], prev[1] + 1) - f = open('demo.pickle', 'r') - coro = pickle.load(f) - f.close() - result = coro.switch() + print recursive(999999) # prints ('ok', 999999) -Limitations -~~~~~~~~~~~ +Note that if you press Ctrl-C while running this example, the traceback +will be built with *all* recursive() calls so far, even if this is more +than the number that can possibly fit in the C stack. These frames are +"overlapping" each other in the sense of the C stack; more precisely, +they are copied out of and into the C stack as needed. -Coroutine pickling is subject to some limitations. First of all, it is -not a whole-program "memory dump". It means that only the "local" state -of a coroutine is saved. The local state is defined to include the -chain of calls and the local variables, but not for example the value of -any global variable. +(The example above also makes use of the following general "guideline" +to help newcomers write continulets: in ``bootstrap(c)``, only call +methods on ``c``, not on another continulet object. That's why we wrote +``c.switch(to=to)`` and not ``to.switch()``, which would mess up the +state. This is however just a guideline; in general we would recommend +to use other interfaces like genlets and greenlets.) -As in normal Python, the pickle will not include any function object's -code, any class definition, etc., but only references to functions and -classes. Unlike normal Python, the pickle contains frames. A pickled -frame stores a bytecode index, representing the current execution -position. This means that the user program cannot be modified *at all* -between pickling and unpickling! -On the other hand, the pickled data is fairly independent from the -platform and from the PyPy version. +Stacklets ++++++++++ -Pickling/unpickling fails if the coroutine is suspended in a state that -involves Python frames which were *indirectly* called. To define this -more precisely, a Python function can issue a regular function or method -call to invoke another Python function - this is a *direct* call and can -be pickled and unpickled. But there are many ways to invoke a Python -function indirectly. For example, most operators can invoke a special -method ``__xyz__()`` on a class, various built-in functions can call -back Python functions, signals can invoke signal handlers, and so on. -These cases are not supported yet. +Continulets are internally implemented using stacklets, which is the +generic RPython-level building block for "one-shot continuations". For +more information about them please see the documentation in the C source +at `pypy/translator/c/src/stacklet/stacklet.h`_. +The module ``pypy.rlib.rstacklet`` is a thin wrapper around the above +functions. The key point is that new() and switch() always return a +fresh stacklet handle (or an empty one), and switch() additionally +consumes one. It makes no sense to have code in which the returned +handle is ignored, or used more than once. Note that ``stacklet.c`` is +written assuming that the user knows that, and so no additional checking +occurs; this can easily lead to obscure crashes if you don't use a +wrapper like PyPy's '_continuation' module. -Composability -+++++++++++++ + +Theory of composability ++++++++++++++++++++++++ Although the concept of coroutines is far from new, they have not been generally integrated into mainstream languages, or only in limited form (like generators in Python and iterators in C#). We can argue that a possible reason for that is that they do not scale well when a program's complexity increases: they look attractive in small examples, but the -models that require explicit switching, by naming the target coroutine, -do not compose naturally. This means that a program that uses -coroutines for two unrelated purposes may run into conflicts caused by -unexpected interactions. +models that require explicit switching, for example by naming the target +coroutine, do not compose naturally. This means that a program that +uses coroutines for two unrelated purposes may run into conflicts caused +by unexpected interactions. To illustrate the problem, consider the following example (simplified -code; see the full source in -`pypy/module/_stackless/test/test_composable_coroutine.py`_). First, a -simple usage of coroutine:: +code using a theorical ``coroutine`` class). First, a simple usage of +coroutine:: main_coro = coroutine.getcurrent() # the main (outer) coroutine data = [] @@ -530,74 +396,35 @@ main coroutine, which confuses the ``generator_iterator.next()`` method (it gets resumed, but not as a result of a call to ``Yield()``). -As part of trying to combine multiple different paradigms into a single -application-level module, we have built a way to solve this problem. -The idea is to avoid the notion of a single, global "main" coroutine (or -a single main greenlet, or a single main tasklet). Instead, each -conceptually separated user of one of these concurrency interfaces can -create its own "view" on what the main coroutine/greenlet/tasklet is, -which other coroutine/greenlet/tasklets there are, and which of these is -the currently running one. Each "view" is orthogonal to the others. In -particular, each view has one (and exactly one) "current" -coroutine/greenlet/tasklet at any point in time. When the user switches -to a coroutine/greenlet/tasklet, it implicitly means that he wants to -switch away from the current coroutine/greenlet/tasklet *that belongs to -the same view as the target*. +Thus the notion of coroutine is *not composable*. By opposition, the +primitive notion of continulets is composable: if you build two +different interfaces on top of it, or have a program that uses twice the +same interface in two parts, then assuming that both parts independently +work, the composition of the two parts still works. -The precise application-level interface has not been fixed yet; so far, -"views" in the above sense are objects of the type -``stackless.usercostate``. The above two examples can be rewritten in -the following way:: +A full proof of that claim would require careful definitions, but let us +just claim that this fact is true because of the following observation: +the API of continulets is such that, when doing a ``switch()``, it +requires the program to have some continulet to explicitly operate on. +It shuffles the current continuation with the continuation stored in +that continulet, but has no effect outside. So if a part of a program +has a continulet object, and does not expose it as a global, then the +rest of the program cannot accidentally influence the continuation +stored in that continulet object. - producer_view = stackless.usercostate() # a local view - main_coro = producer_view.getcurrent() # the main (outer) coroutine - ... - producer_coro = producer_view.newcoroutine() - ... - -and:: - - generators_view = stackless.usercostate() - - def generator(f): - def wrappedfunc(*args, **kwds): - g = generators_view.newcoroutine(generator_iterator) - ... - - ...generators_view.getcurrent()... - -Then the composition ``grab_values()`` works as expected, because the -two views are independent. The coroutine captured as ``self.caller`` in -the ``generator_iterator.next()`` method is the main coroutine of the -``generators_view``. It is no longer the same object as the main -coroutine of the ``producer_view``, so when ``data_producer()`` issues -the following command:: - - main_coro.switch() - -the control flow cannot accidentally jump back to -``generator_iterator.next()``. In other words, from the point of view -of ``producer_view``, the function ``grab_next_value()`` always runs in -its main coroutine ``main_coro`` and the function ``data_producer`` in -its coroutine ``producer_coro``. This is the case independently of -which ``generators_view``-based coroutine is the current one when -``grab_next_value()`` is called. - -Only code that has explicit access to the ``producer_view`` or its -coroutine objects can perform switches that are relevant for the -generator code. If the view object and the coroutine objects that share -this view are all properly encapsulated inside the generator logic, no -external code can accidentally temper with the expected control flow any -longer. - -In conclusion: we will probably change the app-level interface of PyPy's -stackless module in the future to not expose coroutines and greenlets at -all, but only views. They are not much more difficult to use, and they -scale automatically to larger programs. +In other words, if we regard the continulet object as being essentially +a modifiable ``f_back``, then it is just a link between the frame of +``callable()`` and the parent frame --- and it cannot be arbitrarily +changed by unrelated code, as long as they don't explicitly manipulate +the continulet object. Typically, both the frame of ``callable()`` +(commonly a local function) and its parent frame (which is the frame +that switched to it) belong to the same class or module; so from that +point of view the continulet is a purely local link between two local +frames. It doesn't make sense to have a concept that allows this link +to be manipulated from outside. .. _`Stackless Python`: http://www.stackless.com .. _`documentation of the greenlets`: http://packages.python.org/greenlet/ -.. _`Stackless Transform`: translation.html#the-stackless-transform .. include:: _ref.txt diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -552,14 +552,15 @@ The stackless transform converts functions into a form that knows how to save the execution point and active variables into a heap structure -and resume execution at that point. This is used to implement +and resume execution at that point. This was used to implement coroutines as an RPython-level feature, which in turn are used to -implement `coroutines, greenlets and tasklets`_ as an application +implement coroutines, greenlets and tasklets as an application level feature for the Standard Interpreter. -Enable the stackless transformation with :config:`translation.stackless`. +The stackless transformation has been deprecated and is no longer +available in trunk. It has been replaced with continulets_. -.. _`coroutines, greenlets and tasklets`: stackless.html +.. _continulets: stackless.html .. _`preparing the graphs for source generation`: diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst --- a/pypy/doc/windows.rst +++ b/pypy/doc/windows.rst @@ -32,6 +32,24 @@ modules that relies on third-party libraries. See below how to get and build them. +Preping Windows for the Large Build +----------------------------------- + +Normally 32bit programs are limited to 2GB of memory on Windows. It is +possible to raise this limit, to 3GB on Windows 32bit, and almost 4GB +on Windows 64bit. + +On Windows 32bit, it is necessary to modify the system: follow +http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=9583842&linkID=9240617 +to enable the "3GB" feature, and reboot. This step is not necessary on +Windows 64bit. + +Then you need to execute:: + + editbin /largeaddressaware pypy.exe + +on the pypy.exe file you compiled. + Installing external packages ---------------------------- diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -125,6 +125,7 @@ ### Manipulation ### + @jit.look_inside_iff(lambda self: not self._dont_jit) def unpack(self): # slowish "Return a ([w1,w2...], {'kw':w3...}) pair." kwds_w = {} @@ -245,6 +246,8 @@ ### Parsing for function calls ### + # XXX: this should be @jit.look_inside_iff, but we need key word arguments, + # and it doesn't support them for now. def _match_signature(self, w_firstarg, scope_w, signature, defaults_w=None, blindargs=0): """Parse args and kwargs according to the signature of a code object, diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2541,8 +2541,9 @@ class ASTVisitor(object): def visit_sequence(self, seq): - for node in seq: - node.walkabout(self) + if seq is not None: + for node in seq: + node.walkabout(self) def default_visitor(self, node): raise NodeVisitorNotImplemented @@ -2673,46 +2674,36 @@ class GenericASTVisitor(ASTVisitor): def visit_Module(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Interactive(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Expression(self, node): node.body.walkabout(self) def visit_Suite(self, node): - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_FunctionDef(self, node): node.args.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.decorator_list: - self.visit_sequence(node.decorator_list) + self.visit_sequence(node.body) + self.visit_sequence(node.decorator_list) def visit_ClassDef(self, node): - if node.bases: - self.visit_sequence(node.bases) - if node.body: - self.visit_sequence(node.body) - if node.decorator_list: - self.visit_sequence(node.decorator_list) + self.visit_sequence(node.bases) + self.visit_sequence(node.body) + self.visit_sequence(node.decorator_list) def visit_Return(self, node): if node.value: node.value.walkabout(self) def visit_Delete(self, node): - if node.targets: - self.visit_sequence(node.targets) + self.visit_sequence(node.targets) def visit_Assign(self, node): - if node.targets: - self.visit_sequence(node.targets) + self.visit_sequence(node.targets) node.value.walkabout(self) def visit_AugAssign(self, node): @@ -2722,37 +2713,29 @@ def visit_Print(self, node): if node.dest: node.dest.walkabout(self) - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.values) def visit_For(self, node): node.target.walkabout(self) node.iter.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_While(self, node): node.test.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_If(self, node): node.test.walkabout(self) - if node.body: - self.visit_sequence(node.body) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.orelse) def visit_With(self, node): node.context_expr.walkabout(self) if node.optional_vars: node.optional_vars.walkabout(self) - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_Raise(self, node): if node.type: @@ -2763,18 +2746,13 @@ node.tback.walkabout(self) def visit_TryExcept(self, node): - if node.body: - self.visit_sequence(node.body) - if node.handlers: - self.visit_sequence(node.handlers) - if node.orelse: - self.visit_sequence(node.orelse) + self.visit_sequence(node.body) + self.visit_sequence(node.handlers) + self.visit_sequence(node.orelse) def visit_TryFinally(self, node): - if node.body: - self.visit_sequence(node.body) - if node.finalbody: - self.visit_sequence(node.finalbody) + self.visit_sequence(node.body) + self.visit_sequence(node.finalbody) def visit_Assert(self, node): node.test.walkabout(self) @@ -2782,12 +2760,10 @@ node.msg.walkabout(self) def visit_Import(self, node): - if node.names: - self.visit_sequence(node.names) + self.visit_sequence(node.names) def visit_ImportFrom(self, node): - if node.names: - self.visit_sequence(node.names) + self.visit_sequence(node.names) def visit_Exec(self, node): node.body.walkabout(self) @@ -2812,8 +2788,7 @@ pass def visit_BoolOp(self, node): - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.values) def visit_BinOp(self, node): node.left.walkabout(self) @@ -2832,35 +2807,28 @@ node.orelse.walkabout(self) def visit_Dict(self, node): - if node.keys: - self.visit_sequence(node.keys) - if node.values: - self.visit_sequence(node.values) + self.visit_sequence(node.keys) + self.visit_sequence(node.values) def visit_Set(self, node): - if node.elts: - self.visit_sequence(node.elts) + self.visit_sequence(node.elts) def visit_ListComp(self, node): node.elt.walkabout(self) - if node.generators: - self.visit_sequence(node.generators) + self.visit_sequence(node.generators) def visit_SetComp(self, node): node.elt.walkabout(self) - if node.generators: - self.visit_sequence(node.generators) + self.visit_sequence(node.generators) def visit_DictComp(self, node): node.key.walkabout(self) node.value.walkabout(self) - if node.generators: - self.visit_sequence(node.generators) + self.visit_sequence(node.generators) def visit_GeneratorExp(self, node): node.elt.walkabout(self) - if node.generators: - self.visit_sequence(node.generators) + self.visit_sequence(node.generators) def visit_Yield(self, node): if node.value: @@ -2868,15 +2836,12 @@ def visit_Compare(self, node): node.left.walkabout(self) - if node.comparators: - self.visit_sequence(node.comparators) + self.visit_sequence(node.comparators) def visit_Call(self, node): node.func.walkabout(self) - if node.args: - self.visit_sequence(node.args) - if node.keywords: - self.visit_sequence(node.keywords) + self.visit_sequence(node.args) + self.visit_sequence(node.keywords) if node.starargs: node.starargs.walkabout(self) if node.kwargs: @@ -2902,12 +2867,10 @@ pass def visit_List(self, node): - if node.elts: - self.visit_sequence(node.elts) + self.visit_sequence(node.elts) def visit_Tuple(self, node): - if node.elts: - self.visit_sequence(node.elts) + self.visit_sequence(node.elts) def visit_Const(self, node): pass @@ -2924,8 +2887,7 @@ node.step.walkabout(self) def visit_ExtSlice(self, node): - if node.dims: - self.visit_sequence(node.dims) + self.visit_sequence(node.dims) def visit_Index(self, node): node.value.walkabout(self) @@ -2933,22 +2895,18 @@ def visit_comprehension(self, node): node.target.walkabout(self) node.iter.walkabout(self) - if node.ifs: - self.visit_sequence(node.ifs) + self.visit_sequence(node.ifs) def visit_ExceptHandler(self, node): if node.type: node.type.walkabout(self) if node.name: node.name.walkabout(self) - if node.body: - self.visit_sequence(node.body) + self.visit_sequence(node.body) def visit_arguments(self, node): - if node.args: - self.visit_sequence(node.args) - if node.defaults: - self.visit_sequence(node.defaults) + self.visit_sequence(node.args) + self.visit_sequence(node.defaults) def visit_keyword(self, node): node.value.walkabout(self) @@ -3069,6 +3027,7 @@ raise w_self.setdictvalue(space, 'body', w_new_value) return + w_self.deldictvalue(space, 'body') w_self.initialization_state |= 1 _Expression_field_unroller = unrolling_iterable(['body']) @@ -3157,6 +3116,7 @@ raise w_self.setdictvalue(space, 'lineno', w_new_value) return + w_self.deldictvalue(space, 'lineno') w_self.initialization_state |= w_self._lineno_mask def stmt_get_col_offset(space, w_self): @@ -3178,6 +3138,7 @@ raise w_self.setdictvalue(space, 'col_offset', w_new_value) return + w_self.deldictvalue(space, 'col_offset') w_self.initialization_state |= w_self._col_offset_mask stmt.typedef = typedef.TypeDef("stmt", @@ -3208,6 +3169,7 @@ raise w_self.setdictvalue(space, 'name', w_new_value) return + w_self.deldictvalue(space, 'name') w_self.initialization_state |= 1 def FunctionDef_get_args(space, w_self): @@ -3229,6 +3191,7 @@ raise w_self.setdictvalue(space, 'args', w_new_value) return + w_self.deldictvalue(space, 'args') w_self.initialization_state |= 2 def FunctionDef_get_body(space, w_self): @@ -3315,6 +3278,7 @@ raise w_self.setdictvalue(space, 'name', w_new_value) return + w_self.deldictvalue(space, 'name') w_self.initialization_state |= 1 def ClassDef_get_bases(space, w_self): @@ -3420,6 +3384,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Return_field_unroller = unrolling_iterable(['value']) @@ -3526,6 +3491,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 2 _Assign_field_unroller = unrolling_iterable(['targets', 'value']) @@ -3573,6 +3539,7 @@ raise w_self.setdictvalue(space, 'target', w_new_value) return + w_self.deldictvalue(space, 'target') w_self.initialization_state |= 1 def AugAssign_get_op(space, w_self): @@ -3590,13 +3557,13 @@ try: obj = space.interp_w(operator, w_new_value) w_self.op = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'op', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'op', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'op', w_new_value) w_self.initialization_state |= 2 def AugAssign_get_value(space, w_self): @@ -3618,6 +3585,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 4 _AugAssign_field_unroller = unrolling_iterable(['target', 'op', 'value']) @@ -3665,6 +3633,7 @@ raise w_self.setdictvalue(space, 'dest', w_new_value) return + w_self.deldictvalue(space, 'dest') w_self.initialization_state |= 1 def Print_get_values(space, w_self): @@ -3704,6 +3673,7 @@ raise w_self.setdictvalue(space, 'nl', w_new_value) return + w_self.deldictvalue(space, 'nl') w_self.initialization_state |= 4 _Print_field_unroller = unrolling_iterable(['dest', 'values', 'nl']) @@ -3752,6 +3722,7 @@ raise w_self.setdictvalue(space, 'target', w_new_value) return + w_self.deldictvalue(space, 'target') w_self.initialization_state |= 1 def For_get_iter(space, w_self): @@ -3773,6 +3744,7 @@ raise w_self.setdictvalue(space, 'iter', w_new_value) return + w_self.deldictvalue(space, 'iter') w_self.initialization_state |= 2 def For_get_body(space, w_self): @@ -3859,6 +3831,7 @@ raise w_self.setdictvalue(space, 'test', w_new_value) return + w_self.deldictvalue(space, 'test') w_self.initialization_state |= 1 def While_get_body(space, w_self): @@ -3944,6 +3917,7 @@ raise w_self.setdictvalue(space, 'test', w_new_value) return + w_self.deldictvalue(space, 'test') w_self.initialization_state |= 1 def If_get_body(space, w_self): @@ -4029,6 +4003,7 @@ raise w_self.setdictvalue(space, 'context_expr', w_new_value) return + w_self.deldictvalue(space, 'context_expr') w_self.initialization_state |= 1 def With_get_optional_vars(space, w_self): @@ -4050,6 +4025,7 @@ raise w_self.setdictvalue(space, 'optional_vars', w_new_value) return + w_self.deldictvalue(space, 'optional_vars') w_self.initialization_state |= 2 def With_get_body(space, w_self): @@ -4116,6 +4092,7 @@ raise w_self.setdictvalue(space, 'type', w_new_value) return + w_self.deldictvalue(space, 'type') w_self.initialization_state |= 1 def Raise_get_inst(space, w_self): @@ -4137,6 +4114,7 @@ raise w_self.setdictvalue(space, 'inst', w_new_value) return + w_self.deldictvalue(space, 'inst') w_self.initialization_state |= 2 def Raise_get_tback(space, w_self): @@ -4158,6 +4136,7 @@ raise w_self.setdictvalue(space, 'tback', w_new_value) return + w_self.deldictvalue(space, 'tback') w_self.initialization_state |= 4 _Raise_field_unroller = unrolling_iterable(['type', 'inst', 'tback']) @@ -4351,6 +4330,7 @@ raise w_self.setdictvalue(space, 'test', w_new_value) return + w_self.deldictvalue(space, 'test') w_self.initialization_state |= 1 def Assert_get_msg(space, w_self): @@ -4372,6 +4352,7 @@ raise w_self.setdictvalue(space, 'msg', w_new_value) return + w_self.deldictvalue(space, 'msg') w_self.initialization_state |= 2 _Assert_field_unroller = unrolling_iterable(['test', 'msg']) @@ -4464,6 +4445,7 @@ raise w_self.setdictvalue(space, 'module', w_new_value) return + w_self.deldictvalue(space, 'module') w_self.initialization_state |= 1 def ImportFrom_get_names(space, w_self): @@ -4503,6 +4485,7 @@ raise w_self.setdictvalue(space, 'level', w_new_value) return + w_self.deldictvalue(space, 'level') w_self.initialization_state |= 4 _ImportFrom_field_unroller = unrolling_iterable(['module', 'names', 'level']) @@ -4551,6 +4534,7 @@ raise w_self.setdictvalue(space, 'body', w_new_value) return + w_self.deldictvalue(space, 'body') w_self.initialization_state |= 1 def Exec_get_globals(space, w_self): @@ -4572,6 +4556,7 @@ raise w_self.setdictvalue(space, 'globals', w_new_value) return + w_self.deldictvalue(space, 'globals') w_self.initialization_state |= 2 def Exec_get_locals(space, w_self): @@ -4593,6 +4578,7 @@ raise w_self.setdictvalue(space, 'locals', w_new_value) return + w_self.deldictvalue(space, 'locals') w_self.initialization_state |= 4 _Exec_field_unroller = unrolling_iterable(['body', 'globals', 'locals']) @@ -4683,6 +4669,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Expr_field_unroller = unrolling_iterable(['value']) @@ -4779,6 +4766,7 @@ raise w_self.setdictvalue(space, 'lineno', w_new_value) return + w_self.deldictvalue(space, 'lineno') w_self.initialization_state |= w_self._lineno_mask def expr_get_col_offset(space, w_self): @@ -4800,6 +4788,7 @@ raise w_self.setdictvalue(space, 'col_offset', w_new_value) return + w_self.deldictvalue(space, 'col_offset') w_self.initialization_state |= w_self._col_offset_mask expr.typedef = typedef.TypeDef("expr", @@ -4826,13 +4815,13 @@ try: obj = space.interp_w(boolop, w_new_value) w_self.op = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'op', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'op', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'op', w_new_value) w_self.initialization_state |= 1 def BoolOp_get_values(space, w_self): @@ -4898,6 +4887,7 @@ raise w_self.setdictvalue(space, 'left', w_new_value) return + w_self.deldictvalue(space, 'left') w_self.initialization_state |= 1 def BinOp_get_op(space, w_self): @@ -4915,13 +4905,13 @@ try: obj = space.interp_w(operator, w_new_value) w_self.op = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'op', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'op', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'op', w_new_value) w_self.initialization_state |= 2 def BinOp_get_right(space, w_self): @@ -4943,6 +4933,7 @@ raise w_self.setdictvalue(space, 'right', w_new_value) return + w_self.deldictvalue(space, 'right') w_self.initialization_state |= 4 _BinOp_field_unroller = unrolling_iterable(['left', 'op', 'right']) @@ -4986,13 +4977,13 @@ try: obj = space.interp_w(unaryop, w_new_value) w_self.op = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'op', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'op', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'op', w_new_value) w_self.initialization_state |= 1 def UnaryOp_get_operand(space, w_self): @@ -5014,6 +5005,7 @@ raise w_self.setdictvalue(space, 'operand', w_new_value) return + w_self.deldictvalue(space, 'operand') w_self.initialization_state |= 2 _UnaryOp_field_unroller = unrolling_iterable(['op', 'operand']) @@ -5060,6 +5052,7 @@ raise w_self.setdictvalue(space, 'args', w_new_value) return + w_self.deldictvalue(space, 'args') w_self.initialization_state |= 1 def Lambda_get_body(space, w_self): @@ -5081,6 +5074,7 @@ raise w_self.setdictvalue(space, 'body', w_new_value) return + w_self.deldictvalue(space, 'body') w_self.initialization_state |= 2 _Lambda_field_unroller = unrolling_iterable(['args', 'body']) @@ -5127,6 +5121,7 @@ raise w_self.setdictvalue(space, 'test', w_new_value) return + w_self.deldictvalue(space, 'test') w_self.initialization_state |= 1 def IfExp_get_body(space, w_self): @@ -5148,6 +5143,7 @@ raise w_self.setdictvalue(space, 'body', w_new_value) return + w_self.deldictvalue(space, 'body') w_self.initialization_state |= 2 def IfExp_get_orelse(space, w_self): @@ -5169,6 +5165,7 @@ raise w_self.setdictvalue(space, 'orelse', w_new_value) return + w_self.deldictvalue(space, 'orelse') w_self.initialization_state |= 4 _IfExp_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) @@ -5322,6 +5319,7 @@ raise w_self.setdictvalue(space, 'elt', w_new_value) return + w_self.deldictvalue(space, 'elt') w_self.initialization_state |= 1 def ListComp_get_generators(space, w_self): @@ -5387,6 +5385,7 @@ raise w_self.setdictvalue(space, 'elt', w_new_value) return + w_self.deldictvalue(space, 'elt') w_self.initialization_state |= 1 def SetComp_get_generators(space, w_self): @@ -5452,6 +5451,7 @@ raise w_self.setdictvalue(space, 'key', w_new_value) return + w_self.deldictvalue(space, 'key') w_self.initialization_state |= 1 def DictComp_get_value(space, w_self): @@ -5473,6 +5473,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 2 def DictComp_get_generators(space, w_self): @@ -5539,6 +5540,7 @@ raise w_self.setdictvalue(space, 'elt', w_new_value) return + w_self.deldictvalue(space, 'elt') w_self.initialization_state |= 1 def GeneratorExp_get_generators(space, w_self): @@ -5604,6 +5606,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Yield_field_unroller = unrolling_iterable(['value']) @@ -5649,6 +5652,7 @@ raise w_self.setdictvalue(space, 'left', w_new_value) return + w_self.deldictvalue(space, 'left') w_self.initialization_state |= 1 def Compare_get_ops(space, w_self): @@ -5734,6 +5738,7 @@ raise w_self.setdictvalue(space, 'func', w_new_value) return + w_self.deldictvalue(space, 'func') w_self.initialization_state |= 1 def Call_get_args(space, w_self): @@ -5791,6 +5796,7 @@ raise w_self.setdictvalue(space, 'starargs', w_new_value) return + w_self.deldictvalue(space, 'starargs') w_self.initialization_state |= 8 def Call_get_kwargs(space, w_self): @@ -5812,6 +5818,7 @@ raise w_self.setdictvalue(space, 'kwargs', w_new_value) return + w_self.deldictvalue(space, 'kwargs') w_self.initialization_state |= 16 _Call_field_unroller = unrolling_iterable(['func', 'args', 'keywords', 'starargs', 'kwargs']) @@ -5863,6 +5870,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Repr_field_unroller = unrolling_iterable(['value']) @@ -5908,6 +5916,7 @@ raise w_self.setdictvalue(space, 'n', w_new_value) return + w_self.deldictvalue(space, 'n') w_self.initialization_state |= 1 _Num_field_unroller = unrolling_iterable(['n']) @@ -5953,6 +5962,7 @@ raise w_self.setdictvalue(space, 's', w_new_value) return + w_self.deldictvalue(space, 's') w_self.initialization_state |= 1 _Str_field_unroller = unrolling_iterable(['s']) @@ -5998,6 +6008,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 def Attribute_get_attr(space, w_self): @@ -6019,6 +6030,7 @@ raise w_self.setdictvalue(space, 'attr', w_new_value) return + w_self.deldictvalue(space, 'attr') w_self.initialization_state |= 2 def Attribute_get_ctx(space, w_self): @@ -6036,13 +6048,13 @@ try: obj = space.interp_w(expr_context, w_new_value) w_self.ctx = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'ctx', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'ctx', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'ctx', w_new_value) w_self.initialization_state |= 4 _Attribute_field_unroller = unrolling_iterable(['value', 'attr', 'ctx']) @@ -6090,6 +6102,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 def Subscript_get_slice(space, w_self): @@ -6111,6 +6124,7 @@ raise w_self.setdictvalue(space, 'slice', w_new_value) return + w_self.deldictvalue(space, 'slice') w_self.initialization_state |= 2 def Subscript_get_ctx(space, w_self): @@ -6128,13 +6142,13 @@ try: obj = space.interp_w(expr_context, w_new_value) w_self.ctx = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'ctx', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'ctx', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'ctx', w_new_value) w_self.initialization_state |= 4 _Subscript_field_unroller = unrolling_iterable(['value', 'slice', 'ctx']) @@ -6182,6 +6196,7 @@ raise w_self.setdictvalue(space, 'id', w_new_value) return + w_self.deldictvalue(space, 'id') w_self.initialization_state |= 1 def Name_get_ctx(space, w_self): @@ -6199,13 +6214,13 @@ try: obj = space.interp_w(expr_context, w_new_value) w_self.ctx = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'ctx', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'ctx', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'ctx', w_new_value) w_self.initialization_state |= 2 _Name_field_unroller = unrolling_iterable(['id', 'ctx']) @@ -6266,13 +6281,13 @@ try: obj = space.interp_w(expr_context, w_new_value) w_self.ctx = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'ctx', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'ctx', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'ctx', w_new_value) w_self.initialization_state |= 2 _List_field_unroller = unrolling_iterable(['elts', 'ctx']) @@ -6334,13 +6349,13 @@ try: obj = space.interp_w(expr_context, w_new_value) w_self.ctx = obj.to_simple_int(space) - # need to save the original object too - w_self.setdictvalue(space, 'ctx', w_new_value) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'ctx', w_new_value) return + # need to save the original object too + w_self.setdictvalue(space, 'ctx', w_new_value) w_self.initialization_state |= 2 _Tuple_field_unroller = unrolling_iterable(['elts', 'ctx']) @@ -6388,6 +6403,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Const_field_unroller = unrolling_iterable(['value']) @@ -6506,6 +6522,7 @@ raise w_self.setdictvalue(space, 'lower', w_new_value) return + w_self.deldictvalue(space, 'lower') w_self.initialization_state |= 1 def Slice_get_upper(space, w_self): @@ -6527,6 +6544,7 @@ raise w_self.setdictvalue(space, 'upper', w_new_value) return + w_self.deldictvalue(space, 'upper') w_self.initialization_state |= 2 def Slice_get_step(space, w_self): @@ -6548,6 +6566,7 @@ raise w_self.setdictvalue(space, 'step', w_new_value) return + w_self.deldictvalue(space, 'step') w_self.initialization_state |= 4 _Slice_field_unroller = unrolling_iterable(['lower', 'upper', 'step']) @@ -6638,6 +6657,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 1 _Index_field_unroller = unrolling_iterable(['value']) @@ -6907,6 +6927,7 @@ raise w_self.setdictvalue(space, 'target', w_new_value) return + w_self.deldictvalue(space, 'target') w_self.initialization_state |= 1 def comprehension_get_iter(space, w_self): @@ -6928,6 +6949,7 @@ raise w_self.setdictvalue(space, 'iter', w_new_value) return + w_self.deldictvalue(space, 'iter') w_self.initialization_state |= 2 def comprehension_get_ifs(space, w_self): @@ -6994,6 +7016,7 @@ raise w_self.setdictvalue(space, 'lineno', w_new_value) return + w_self.deldictvalue(space, 'lineno') w_self.initialization_state |= w_self._lineno_mask def excepthandler_get_col_offset(space, w_self): @@ -7015,6 +7038,7 @@ raise w_self.setdictvalue(space, 'col_offset', w_new_value) return + w_self.deldictvalue(space, 'col_offset') w_self.initialization_state |= w_self._col_offset_mask excepthandler.typedef = typedef.TypeDef("excepthandler", @@ -7045,6 +7069,7 @@ raise w_self.setdictvalue(space, 'type', w_new_value) return + w_self.deldictvalue(space, 'type') w_self.initialization_state |= 1 def ExceptHandler_get_name(space, w_self): @@ -7066,6 +7091,7 @@ raise w_self.setdictvalue(space, 'name', w_new_value) return + w_self.deldictvalue(space, 'name') w_self.initialization_state |= 2 def ExceptHandler_get_body(space, w_self): @@ -7153,6 +7179,7 @@ raise w_self.setdictvalue(space, 'vararg', w_new_value) return + w_self.deldictvalue(space, 'vararg') w_self.initialization_state |= 2 def arguments_get_kwarg(space, w_self): @@ -7177,6 +7204,7 @@ raise w_self.setdictvalue(space, 'kwarg', w_new_value) return + w_self.deldictvalue(space, 'kwarg') w_self.initialization_state |= 4 def arguments_get_defaults(space, w_self): @@ -7245,6 +7273,7 @@ raise w_self.setdictvalue(space, 'arg', w_new_value) return + w_self.deldictvalue(space, 'arg') w_self.initialization_state |= 1 def keyword_get_value(space, w_self): @@ -7266,6 +7295,7 @@ raise w_self.setdictvalue(space, 'value', w_new_value) return + w_self.deldictvalue(space, 'value') w_self.initialization_state |= 2 _keyword_field_unroller = unrolling_iterable(['arg', 'value']) @@ -7312,6 +7342,7 @@ raise w_self.setdictvalue(space, 'name', w_new_value) return + w_self.deldictvalue(space, 'name') w_self.initialization_state |= 1 def alias_get_asname(space, w_self): @@ -7336,6 +7367,7 @@ raise w_self.setdictvalue(space, 'asname', w_new_value) return + w_self.deldictvalue(space, 'asname') w_self.initialization_state |= 2 _alias_field_unroller = unrolling_iterable(['name', 'asname']) diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -295,15 +295,11 @@ def visit_FunctionDef(self, func): self.update_position(func.lineno, True) # Load decorators first, but apply them after the function is created. - if func.decorator_list: - self.visit_sequence(func.decorator_list) + self.visit_sequence(func.decorator_list) args = func.args assert isinstance(args, ast.arguments) - if args.defaults: - self.visit_sequence(args.defaults) - num_defaults = len(args.defaults) - else: - num_defaults = 0 + self.visit_sequence(args.defaults) + num_defaults = len(args.defaults) if args.defaults is not None else 0 code = self.sub_scope(FunctionCodeGenerator, func.name, func, func.lineno) self._make_function(code, num_defaults) @@ -317,24 +313,17 @@ self.update_position(lam.lineno) args = lam.args assert isinstance(args, ast.arguments) - if args.defaults: - self.visit_sequence(args.defaults) - default_count = len(args.defaults) - else: - default_count = 0 + self.visit_sequence(args.defaults) + default_count = len(args.defaults) if args.defaults is not None else 0 code = self.sub_scope(LambdaCodeGenerator, "", lam, lam.lineno) self._make_function(code, default_count) def visit_ClassDef(self, cls): self.update_position(cls.lineno, True) - if cls.decorator_list: - self.visit_sequence(cls.decorator_list) + self.visit_sequence(cls.decorator_list) self.load_const(self.space.wrap(cls.name)) - if cls.bases: - bases_count = len(cls.bases) - self.visit_sequence(cls.bases) - else: - bases_count = 0 + self.visit_sequence(cls.bases) + bases_count = len(cls.bases) if cls.bases is not None else 0 self.emit_op_arg(ops.BUILD_TUPLE, bases_count) code = self.sub_scope(ClassCodeGenerator, cls.name, cls, cls.lineno) self._make_function(code, 0) @@ -446,8 +435,7 @@ end = self.new_block() test_constant = if_.test.as_constant_truth(self.space) if test_constant == optimize.CONST_FALSE: - if if_.orelse: - self.visit_sequence(if_.orelse) + self.visit_sequence(if_.orelse) elif test_constant == optimize.CONST_TRUE: self.visit_sequence(if_.body) else: @@ -515,16 +503,14 @@ self.use_next_block(cleanup) self.emit_op(ops.POP_BLOCK) self.pop_frame_block(F_BLOCK_LOOP, start) - if fr.orelse: - self.visit_sequence(fr.orelse) + self.visit_sequence(fr.orelse) self.use_next_block(end) def visit_While(self, wh): self.update_position(wh.lineno, True) test_constant = wh.test.as_constant_truth(self.space) if test_constant == optimize.CONST_FALSE: - if wh.orelse: - self.visit_sequence(wh.orelse) + self.visit_sequence(wh.orelse) else: end = self.new_block() anchor = None @@ -544,8 +530,7 @@ self.use_next_block(anchor) self.emit_op(ops.POP_BLOCK) self.pop_frame_block(F_BLOCK_LOOP, loop) - if wh.orelse: - self.visit_sequence(wh.orelse) + self.visit_sequence(wh.orelse) self.use_next_block(end) def visit_TryExcept(self, te): @@ -581,8 +566,7 @@ self.use_next_block(next_except) self.emit_op(ops.END_FINALLY) self.use_next_block(otherwise) - if te.orelse: - self.visit_sequence(te.orelse) + self.visit_sequence(te.orelse) self.use_next_block(end) def visit_TryFinally(self, tf): @@ -893,27 +877,19 @@ def visit_Tuple(self, tup): self.update_position(tup.lineno) - if tup.elts: - elt_count = len(tup.elts) - else: - elt_count = 0 + elt_count = len(tup.elts) if tup.elts is not None else 0 if tup.ctx == ast.Store: self.emit_op_arg(ops.UNPACK_SEQUENCE, elt_count) - if elt_count: - self.visit_sequence(tup.elts) + self.visit_sequence(tup.elts) if tup.ctx == ast.Load: self.emit_op_arg(ops.BUILD_TUPLE, elt_count) def visit_List(self, l): self.update_position(l.lineno) - if l.elts: - elt_count = len(l.elts) - else: - elt_count = 0 + elt_count = len(l.elts) if l.elts is not None else 0 if l.ctx == ast.Store: self.emit_op_arg(ops.UNPACK_SEQUENCE, elt_count) - if elt_count: - self.visit_sequence(l.elts) + self.visit_sequence(l.elts) if l.ctx == ast.Load: self.emit_op_arg(ops.BUILD_LIST, elt_count) @@ -944,11 +920,9 @@ if self._optimize_method_call(call): return call.func.walkabout(self) - arg = 0 + arg = len(call.args) if call.args is not None else 0 call_type = 0 - if call.args: - arg = len(call.args) - self.visit_sequence(call.args) + self.visit_sequence(call.args) if call.keywords: self.visit_sequence(call.keywords) arg |= len(call.keywords) << 8 @@ -984,16 +958,10 @@ assert isinstance(attr_lookup, ast.Attribute) attr_lookup.value.walkabout(self) self.emit_op_name(ops.LOOKUP_METHOD, self.names, attr_lookup.attr) - if call.args: - self.visit_sequence(call.args) - arg_count = len(call.args) - else: - arg_count = 0 - if call.keywords: - self.visit_sequence(call.keywords) - kwarg_count = len(call.keywords) - else: - kwarg_count = 0 + self.visit_sequence(call.args) + arg_count = len(call.args) if call.args is not None else 0 + self.visit_sequence(call.keywords) + kwarg_count = len(call.keywords) if call.keywords is not None else 0 self.emit_op_arg(ops.CALL_METHOD, (kwarg_count << 8) | arg_count) return True @@ -1251,7 +1219,10 @@ def _compile(self, func): assert isinstance(func, ast.FunctionDef) # If there's a docstring, store it as the first constant. - doc_expr = self.possible_docstring(func.body[0]) + if func.body: + doc_expr = self.possible_docstring(func.body[0]) + else: + doc_expr = None if doc_expr is not None: self.add_const(doc_expr.s) start = 1 @@ -1263,8 +1234,9 @@ if args.args: self._handle_nested_args(args.args) self.argcount = len(args.args) - for i in range(start, len(func.body)): - func.body[i].walkabout(self) + if func.body: + for i in range(start, len(func.body)): + func.body[i].walkabout(self) class LambdaCodeGenerator(AbstractFunctionCodeGenerator): diff --git a/pypy/interpreter/astcompiler/symtable.py b/pypy/interpreter/astcompiler/symtable.py --- a/pypy/interpreter/astcompiler/symtable.py +++ b/pypy/interpreter/astcompiler/symtable.py @@ -356,10 +356,8 @@ # Function defaults and decorators happen in the outer scope. args = func.args assert isinstance(args, ast.arguments) - if args.defaults: - self.visit_sequence(args.defaults) - if func.decorator_list: - self.visit_sequence(func.decorator_list) + self.visit_sequence(args.defaults) + self.visit_sequence(func.decorator_list) new_scope = FunctionScope(func.name, func.lineno, func.col_offset) self.push_scope(new_scope, func) func.args.walkabout(self) @@ -372,10 +370,8 @@ def visit_ClassDef(self, clsdef): self.note_symbol(clsdef.name, SYM_ASSIGNED) - if clsdef.bases: - self.visit_sequence(clsdef.bases) - if clsdef.decorator_list: - self.visit_sequence(clsdef.decorator_list) + self.visit_sequence(clsdef.bases) + self.visit_sequence(clsdef.decorator_list) self.push_scope(ClassScope(clsdef), clsdef) self.visit_sequence(clsdef.body) self.pop_scope() @@ -431,8 +427,7 @@ def visit_Lambda(self, lamb): args = lamb.args assert isinstance(args, ast.arguments) - if args.defaults: - self.visit_sequence(args.defaults) + self.visit_sequence(args.defaults) new_scope = FunctionScope("lambda", lamb.lineno, lamb.col_offset) self.push_scope(new_scope, lamb) lamb.args.walkabout(self) @@ -447,8 +442,7 @@ self.push_scope(new_scope, node) self.implicit_arg(0) outer.target.walkabout(self) - if outer.ifs: - self.visit_sequence(outer.ifs) + self.visit_sequence(outer.ifs) self.visit_sequence(comps[1:]) for item in list(consider): item.walkabout(self) diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -221,8 +221,9 @@ self.emit("class ASTVisitor(object):") self.emit("") self.emit("def visit_sequence(self, seq):", 1) - self.emit("for node in seq:", 2) - self.emit("node.walkabout(self)", 3) + self.emit("if seq is not None:", 2) + self.emit("for node in seq:", 3) + self.emit("node.walkabout(self)", 4) self.emit("") self.emit("def default_visitor(self, node):", 1) self.emit("raise NodeVisitorNotImplemented", 2) @@ -280,15 +281,13 @@ def visitField(self, field): if field.type.value not in asdl.builtin_types and \ field.type.value not in self.data.simple_types: - if field.seq or field.opt: + level = 2 + template = "node.%s.walkabout(self)" + if field.seq: + template = "self.visit_sequence(node.%s)" + elif field.opt: self.emit("if node.%s:" % (field.name,), 2) level = 3 - else: - level = 2 - if field.seq: - template = "self.visit_sequence(node.%s)" - else: - template = "node.%s.walkabout(self)" self.emit(template % (field.name,), level) return True return False @@ -446,6 +445,7 @@ if field.seq: self.emit("w_self.w_%s = w_new_value" % (field.name,), 1) else: + save_original_object = False self.emit("try:", 1) if field.type.value not in asdl.builtin_types: # These are always other AST nodes. @@ -454,9 +454,7 @@ (field.type,), 2) self.emit("w_self.%s = obj.to_simple_int(space)" % (field.name,), 2) - self.emit("# need to save the original object too", 2) - self.emit("w_self.setdictvalue(space, '%s', w_new_value)" - % (field.name,), 2) + save_original_object = True else: config = (field.name, field.type, repr(field.opt)) self.emit("w_self.%s = space.interp_w(%s, w_new_value, %s)" % @@ -480,6 +478,12 @@ self.emit(" w_self.setdictvalue(space, '%s', w_new_value)" % (field.name,), 1) self.emit(" return", 1) + if save_original_object: + self.emit("# need to save the original object too", 1) + self.emit("w_self.setdictvalue(space, '%s', w_new_value)" + % (field.name,), 1) + else: + self.emit("w_self.deldictvalue(space, '%s')" %(field.name,), 1) self.emit("w_self.initialization_state |= %s" % (flag,), 1) self.emit("") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -3,18 +3,18 @@ from pypy.interpreter.executioncontext import ExecutionContext, ActionFlag from pypy.interpreter.executioncontext import UserDelAction, FrameTraceAction from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.error import new_exception_class +from pypy.interpreter.error import new_exception_class, typed_unwrap_error_msg from pypy.interpreter.argument import Arguments from pypy.interpreter.miscutils import ThreadLocals from pypy.tool.cache import Cache from pypy.tool.uid import HUGEVAL_BYTES -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, newlist, compute_unique_id from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.timer import DummyTimer, Timer from pypy.rlib.rarithmetic import r_uint from pypy.rlib import jit from pypy.tool.sourcetools import func_with_new_name -import os, sys, py +import os, sys __all__ = ['ObjSpace', 'OperationError', 'Wrappable', 'W_Root'] @@ -44,11 +44,11 @@ return True return False - def deldictvalue(self, space, w_name): + def deldictvalue(self, space, attr): w_dict = self.getdict(space) if w_dict is not None: try: - space.delitem(w_dict, w_name) + space.delitem(w_dict, space.wrap(attr)) return True except OperationError, ex: if not ex.match(space, space.w_KeyError): @@ -111,6 +111,9 @@ def setslotvalue(self, index, w_val): raise NotImplementedError + def delslotvalue(self, index): + raise NotImplementedError + def descr_call_mismatch(self, space, opname, RequiredClass, args): if RequiredClass is None: classname = '?' @@ -183,6 +186,28 @@ def _set_mapdict_storage_and_map(self, storage, map): raise NotImplementedError + # ------------------------------------------------------------------- + + def str_w(self, space): + w_msg = typed_unwrap_error_msg(space, "string", self) + raise OperationError(space.w_TypeError, w_msg) + + def unicode_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "unicode", self)) + + def int_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def uint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + + def bigint_w(self, space): + raise OperationError(space.w_TypeError, + typed_unwrap_error_msg(space, "integer", self)) + class Wrappable(W_Root): """A subclass of Wrappable is an internal, interpreter-level class @@ -623,9 +648,9 @@ self.default_compiler = compiler return compiler - def createframe(self, code, w_globals, closure=None): + def createframe(self, code, w_globals, outer_func=None): "Create an empty PyFrame suitable for this code object." - return self.FrameClass(self, code, w_globals, closure) + return self.FrameClass(self, code, w_globals, outer_func) def allocate_lock(self): """Return an interp-level Lock object if threads are enabled, @@ -754,7 +779,18 @@ w_iterator = self.iter(w_iterable) # If we know the expected length we can preallocate. if expected_length == -1: - items = [] + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: + try: + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied else: items = [None] * expected_length idx = 0 @@ -887,7 +923,7 @@ ec.c_call_trace(frame, w_func, args) try: w_res = self.call_args(w_func, args) - except OperationError, e: + except OperationError: ec.c_exception_trace(frame, w_func) raise ec.c_return_trace(frame, w_func, args) @@ -933,6 +969,9 @@ def isinstance_w(self, w_obj, w_type): return self.is_true(self.isinstance(w_obj, w_type)) + def id(self, w_obj): + return self.wrap(compute_unique_id(w_obj)) + # The code below only works # for the simple case (new-style instance). # These methods are patched with the full logic by the __builtin__ @@ -985,8 +1024,6 @@ def eval(self, expression, w_globals, w_locals, hidden_applevel=False): "NOT_RPYTHON: For internal debugging." - import types - from pypy.interpreter.pycode import PyCode if isinstance(expression, str): compiler = self.createcompiler() expression = compiler.compile(expression, '?', 'eval', 0, @@ -998,7 +1035,6 @@ def exec_(self, statement, w_globals, w_locals, hidden_applevel=False, filename=None): "NOT_RPYTHON: For internal debugging." - import types if filename is None: filename = '?' from pypy.interpreter.pycode import PyCode @@ -1196,6 +1232,18 @@ return None return self.str_w(w_obj) + def str_w(self, w_obj): + return w_obj.str_w(self) + + def int_w(self, w_obj): + return w_obj.int_w(self) + + def uint_w(self, w_obj): + return w_obj.uint_w(self) + + def bigint_w(self, w_obj): + return w_obj.bigint_w(self) + def realstr_w(self, w_obj): # Like str_w, but only works if w_obj is really of type 'str'. if not self.is_true(self.isinstance(w_obj, self.w_str)): @@ -1203,6 +1251,9 @@ self.wrap('argument must be a string')) return self.str_w(w_obj) + def unicode_w(self, w_obj): + return w_obj.unicode_w(self) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1284,6 +1335,17 @@ self.wrap("expected a 32-bit integer")) return value + def truncatedint(self, w_obj): + # Like space.gateway_int_w(), but return the integer truncated + # instead of raising OverflowError. For obscure cases only. + try: + return self.int_w(w_obj) + except OperationError, e: + if not e.match(self, self.w_OverflowError): + raise + from pypy.rlib.rarithmetic import intmask + return intmask(self.bigint_w(w_obj).uintmask()) + def c_filedescriptor_w(self, w_fd): # This is only used sometimes in CPython, e.g. for os.fsync() but # not os.close(). It's likely designed for 'select'. It's irregular diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -189,7 +189,7 @@ if space.is_w(w_value, space.w_None): # raise Type: we assume we have to instantiate Type w_value = space.call_function(w_type) - w_type = space.exception_getclass(w_value) + w_type = self._exception_getclass(space, w_value) else: w_valuetype = space.exception_getclass(w_value) if space.exception_issubclass_w(w_valuetype, w_type): @@ -204,18 +204,12 @@ else: # raise Type, X: assume X is the constructor argument w_value = space.call_function(w_type, w_value) - w_type = space.exception_getclass(w_value) + w_type = self._exception_getclass(space, w_value) else: # the only case left here is (inst, None), from a 'raise inst'. w_inst = w_type - w_instclass = space.exception_getclass(w_inst) - if not space.exception_is_valid_class_w(w_instclass): - instclassname = w_instclass.getname(space) - msg = ("exceptions must be old-style classes or derived " - "from BaseException, not %s") - raise operationerrfmt(space.w_TypeError, msg, instclassname) - + w_instclass = self._exception_getclass(space, w_inst) if not space.is_w(w_value, space.w_None): raise OperationError(space.w_TypeError, space.wrap("instance exception may not " @@ -226,6 +220,15 @@ self.w_type = w_type self._w_value = w_value + def _exception_getclass(self, space, w_inst): + w_type = space.exception_getclass(w_inst) + if not space.exception_is_valid_class_w(w_type): + typename = w_type.getname(space) + msg = ("exceptions must be old-style classes or derived " + "from BaseException, not %s") + raise operationerrfmt(space.w_TypeError, msg, typename) + return w_type + def write_unraisable(self, space, where, w_object=None): if w_object is None: objrepr = '' @@ -455,3 +458,7 @@ if module: space.setattr(w_exc, space.wrap("__module__"), space.wrap(module)) return w_exc + +def typed_unwrap_error_msg(space, expected, w_obj): + type_name = space.type(w_obj).getname(space) + return space.wrap("expected %s, got %s object" % (expected, type_name)) diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -1,5 +1,4 @@ import sys -from pypy.interpreter.miscutils import Stack from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.unroll import unrolling_iterable @@ -48,6 +47,7 @@ return frame @staticmethod + @jit.unroll_safe # should usually loop 0 times, very rarely more than once def getnextframe_nohidden(frame): frame = frame.f_backref() while frame and frame.hide(): @@ -81,58 +81,6 @@ # ________________________________________________________________ - - class Subcontext(object): - # coroutine: subcontext support - - def __init__(self): - self.topframe = None - self.w_tracefunc = None - self.profilefunc = None - self.w_profilefuncarg = None - self.is_tracing = 0 - - def enter(self, ec): - ec.topframeref = jit.non_virtual_ref(self.topframe) - ec.w_tracefunc = self.w_tracefunc - ec.profilefunc = self.profilefunc - ec.w_profilefuncarg = self.w_profilefuncarg - ec.is_tracing = self.is_tracing - ec.space.frame_trace_action.fire() - - def leave(self, ec): - self.topframe = ec.gettopframe() - self.w_tracefunc = ec.w_tracefunc - self.profilefunc = ec.profilefunc - self.w_profilefuncarg = ec.w_profilefuncarg - self.is_tracing = ec.is_tracing - - def clear_framestack(self): - self.topframe = None - - # the following interface is for pickling and unpickling - def getstate(self, space): - if self.topframe is None: - return space.w_None - return self.topframe - - def setstate(self, space, w_state): - from pypy.interpreter.pyframe import PyFrame - if space.is_w(w_state, space.w_None): - self.topframe = None - else: - self.topframe = space.interp_w(PyFrame, w_state) - - def getframestack(self): - lst = [] - f = self.topframe - while f is not None: - lst.append(f) - f = f.f_backref() - lst.reverse() - return lst - # coroutine: I think this is all, folks! - def c_call_trace(self, frame, w_func, args=None): "Profile the call of a builtin function" self._c_call_return_trace(frame, w_func, args, 'c_call') @@ -227,6 +175,9 @@ self.w_tracefunc = w_func self.space.frame_trace_action.fire() + def gettrace(self): + return self.w_tracefunc + def setprofile(self, w_func): """Set the global trace function.""" if self.space.is_w(w_func, self.space.w_None): @@ -359,7 +310,11 @@ self._nonperiodic_actions = [] self.has_bytecode_counter = False self.fired_actions = None - self.checkinterval_scaled = 100 * TICK_COUNTER_STEP + # the default value is not 100, unlike CPython 2.7, but a much + # larger value, because we use a technique that not only allows + # but actually *forces* another thread to run whenever the counter + # reaches zero. + self.checkinterval_scaled = 10000 * TICK_COUNTER_STEP self._rebuild_action_dispatcher() def fire(self, action): @@ -398,6 +353,7 @@ elif interval > MAX: interval = MAX self.checkinterval_scaled = interval * TICK_COUNTER_STEP + self.reset_ticker(-1) def _rebuild_action_dispatcher(self): periodic_actions = unrolling_iterable(self._periodic_actions) diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -30,8 +30,9 @@ can_change_code = True _immutable_fields_ = ['code?', 'w_func_globals?', - 'closure?', - 'defs_w?[*]'] + 'closure?[*]', + 'defs_w?[*]', + 'name?'] def __init__(self, space, code, w_globals=None, defs_w=[], closure=None, forcename=None): @@ -95,7 +96,7 @@ assert isinstance(code, PyCode) if nargs < 5: new_frame = self.space.createframe(code, self.w_func_globals, - self.closure) + self) for i in funccallunrolling: if i < nargs: new_frame.locals_stack_w[i] = args_w[i] @@ -155,7 +156,7 @@ def _flat_pycall(self, code, nargs, frame): # code is a PyCode new_frame = self.space.createframe(code, self.w_func_globals, From noreply at buildbot.pypy.org Thu Nov 10 13:52:04 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:04 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: checking for string makes no sense here Message-ID: <20111110125204.7F2FA8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49239:ee4c088754f8 Date: 2011-10-14 11:54 +0200 http://bitbucket.org/pypy/pypy/changeset/ee4c088754f8/ Log: checking for string makes no sense here diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -22,7 +22,6 @@ # XXX there are many more types return (space.is_w(w_lookup_type, space.w_NoneType) or - space.is_w(w_lookup_type, space.w_str) or space.is_w(w_lookup_type, space.w_int) or space.is_w(w_lookup_type, space.w_bool) or space.is_w(w_lookup_type, space.w_float) From noreply at buildbot.pypy.org Thu Nov 10 13:52:05 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:05 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: skip currently not supported tests Message-ID: <20111110125205.A62178292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49240:76fde77d4ef0 Date: 2011-10-14 12:39 +0200 http://bitbucket.org/pypy/pypy/changeset/76fde77d4ef0/ Log: skip currently not supported tests diff --git a/pypy/objspace/std/test/test_setstrategies.py b/pypy/objspace/std/test/test_setstrategies.py --- a/pypy/objspace/std/test/test_setstrategies.py +++ b/pypy/objspace/std/test/test_setstrategies.py @@ -1,10 +1,11 @@ from pypy.objspace.std.setobject import W_SetObject from pypy.objspace.std.setobject import IntegerSetStrategy, ObjectSetStrategy, EmptySetStrategy +from pypy.objspace.std.listobject import W_ListObject class TestW_SetStrategies: def wrapped(self, l): - return [self.space.wrap(x) for x in l] + return W_ListObject([self.space.wrap(x) for x in l]) def test_from_list(self): s = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) @@ -39,6 +40,7 @@ s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) s2 = W_SetObject(self.space, self.wrapped([4,5, "six", "seven"])) s3 = s1.intersect(s2) + skip("for now intersection with ObjectStrategy always results in another ObjectStrategy") assert s3.strategy is self.space.fromcache(IntegerSetStrategy) def test_clear(self): @@ -77,6 +79,7 @@ s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) set_discard__Set_ANY(self.space, s1, self.space.wrap("five")) + skip("currently not supported") assert s1.strategy is self.space.fromcache(IntegerSetStrategy) set_discard__Set_ANY(self.space, s1, self.space.wrap(FakeInt(5))) @@ -97,6 +100,7 @@ s1 = W_SetObject(self.space, self.wrapped([1,2,3,4,5])) assert not s1.has_key(self.space.wrap("five")) + skip("currently not supported") assert s1.strategy is self.space.fromcache(IntegerSetStrategy) assert s1.has_key(self.space.wrap(FakeInt(2))) From noreply at buildbot.pypy.org Thu Nov 10 13:52:06 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:06 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fix needed for translation Message-ID: <20111110125206.CF1198292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49241:b1d40e572594 Date: 2011-10-14 12:57 +0200 http://bitbucket.org/pypy/pypy/changeset/b1d40e572594/ Log: fix needed for translation diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -57,8 +57,8 @@ self.sstorage = strategy.erase(d) def switch_to_empty_strategy(self): - self.strategy = self.space.fromcache(EmptySetStrategy) - self.sstorage = self.strategy.get_empty_storage() + self.strategy = strategy = self.space.fromcache(EmptySetStrategy) + self.sstorage = strategy.get_empty_storage() # _____________ strategy methods ________________ From noreply at buildbot.pypy.org Thu Nov 10 13:52:08 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:08 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: unnecessary code Message-ID: <20111110125208.094118292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49242:24ed09109359 Date: 2011-10-14 14:52 +0200 http://bitbucket.org/pypy/pypy/changeset/24ed09109359/ Log: unnecessary code diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -303,9 +303,8 @@ pass def copy(self, w_set): - strategy = w_set.strategy storage = self.erase(None) - clone = w_set.from_storage_and_strategy(storage, strategy) + clone = w_set.from_storage_and_strategy(storage, self) return clone def add(self, w_set, w_key): From noreply at buildbot.pypy.org Thu Nov 10 13:52:09 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:09 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fix needed for translation Message-ID: <20111110125209.37FD98292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49243:23d0550fda0a Date: 2011-10-14 15:02 +0200 http://bitbucket.org/pypy/pypy/changeset/23d0550fda0a/ Log: fix needed for translation diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -859,8 +859,8 @@ def set_strategy_and_setdata(space, w_set, w_iterable): from pypy.objspace.std.intobject import W_IntObject if w_iterable is None : - w_set.strategy = space.fromcache(EmptySetStrategy) - w_set.sstorage = w_set.strategy.get_empty_storage() + w_set.strategy = strategy = space.fromcache(EmptySetStrategy) + w_set.sstorage = strategy.get_empty_storage() return if isinstance(w_iterable, W_BaseSetObject): @@ -871,8 +871,8 @@ iterable_w = space.listview(w_iterable) if len(iterable_w) == 0: - w_set.strategy = space.fromcache(EmptySetStrategy) - w_set.sstorage = w_set.strategy.get_empty_storage() + w_set.strategy = strategy = space.fromcache(EmptySetStrategy) + w_set.sstorage = strategy.get_empty_storage() return # check for integers From noreply at buildbot.pypy.org Thu Nov 10 13:52:10 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:10 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: possible fix for translation Message-ID: <20111110125210.66C098292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49244:64942a5fcc0f Date: 2011-10-14 15:11 +0200 http://bitbucket.org/pypy/pypy/changeset/64942a5fcc0f/ Log: possible fix for translation diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -196,11 +196,11 @@ """ Returns an empty storage (erased) object. Used to initialize an empty set.""" raise NotImplementedError - def erase(self, storage): - raise NotImplementedError + #def erase(self, storage): + # raise NotImplementedError - def unerase(self, storage): - raise NotImplementedError + #def unerase(self, storage): + # raise NotImplementedError # __________________ methods called on W_SetObject _________________ From noreply at buildbot.pypy.org Thu Nov 10 13:52:11 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:11 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merge with default Message-ID: <20111110125211.92E868292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49245:a07d1bf6b358 Date: 2011-10-14 15:34 +0200 http://bitbucket.org/pypy/pypy/changeset/a07d1bf6b358/ Log: merge with default diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -756,7 +756,8 @@ input = w_self._value width = space.int_w(w_width) - if len(input) >= width: + num_zeros = width - len(input) + if num_zeros <= 0: # cannot return w_self, in case it is a subclass of str return space.wrap(input) @@ -764,13 +765,11 @@ if len(input) > 0 and (input[0] == '+' or input[0] == '-'): builder.append(input[0]) start = 1 - middle = width - len(input) + 1 else: start = 0 - middle = width - len(input) - builder.append_multiple_char('0', middle - start) - builder.append(input[start:start + (width - middle)]) + builder.append_multiple_char('0', num_zeros) + builder.append_slice(input, start, len(input)) return space.wrap(builder.build()) diff --git a/pypy/tool/logparser.py b/pypy/tool/logparser.py --- a/pypy/tool/logparser.py +++ b/pypy/tool/logparser.py @@ -298,6 +298,8 @@ image.paste(textpercent, (t1x, 5), textpercent) image.paste(textlabel, (t2x, 5), textlabel) images.append(image) + if not images: + return None return combine(images, spacing=0, border=1, horizontal=False) def get_timesummary_single_image(totaltimes, totaltime0, componentdict, @@ -333,6 +335,8 @@ del totaltimes[None] img2 = render_histogram(totaltimes, totaltime0, {}, width, summarybarheight) + if img2 is None: + return img1 return combine([img1, img2], spacing=spacing, horizontal=True) # ---------- From noreply at buildbot.pypy.org Thu Nov 10 13:52:12 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:12 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: forgot argument for abstract method copy Message-ID: <20111110125212.BF3798292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49246:82bf144b8c88 Date: 2011-10-14 15:44 +0200 http://bitbucket.org/pypy/pypy/changeset/82bf144b8c88/ Log: forgot argument for abstract method copy diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -207,7 +207,7 @@ def clear(self): raise NotImplementedError - def copy(self): + def copy(self, w_set): raise NotImplementedError def length(self, w_set): From noreply at buildbot.pypy.org Thu Nov 10 13:52:13 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:13 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: fixed copy and paste error. SetStrategy needs one more argument Message-ID: <20111110125213.E6E018292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49247:3f17a58f779e Date: 2011-10-14 15:53 +0200 http://bitbucket.org/pypy/pypy/changeset/3f17a58f779e/ Log: fixed copy and paste error. SetStrategy needs one more argument diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -213,64 +213,64 @@ def length(self, w_set): raise NotImplementedError - def add(self, w_key): + def add(self, w_set, w_key): raise NotImplementedError - def remove(self, w_item): + def remove(self, w_set, w_item): raise NotImplementedError - def getdict_w(self): + def getdict_w(self, w_set): raise NotImplementedError - def get_storage_copy(self): + def get_storage_copy(self, w_set): raise NotImplementedError - def getkeys(self): + def getkeys(self, w_set): raise NotImplementedError - def difference(self, w_other): + def difference(self, w_set, w_other): raise NotImplementedError - def difference_update(self, w_other): + def difference_update(self, w_set, w_other): raise NotImplementedError - def symmetric_difference(self, w_other): + def symmetric_difference(self, w_set, w_other): raise NotImplementedError - def symmetric_difference_update(self, w_other): + def symmetric_difference_update(self, w_set, w_other): raise NotImplementedError - def intersect(self, w_other): + def intersect(self, w_set, w_other): raise NotImplementedError - def intersect_update(self, w_other): + def intersect_update(self, w_set, w_other): raise NotImplementedError - def intersect_multiple(self, others_w): + def intersect_multiple(self, w_set, others_w): raise NotImplementedError - def intersect_multiple_update(self, others_w): + def intersect_multiple_update(self, w_set, others_w): raise NotImplementedError - def issubset(self, w_other): + def issubset(self, w_set, w_other): raise NotImplementedError - def isdisjoint(self, w_other): + def isdisjoint(self, w_set, w_other): raise NotImplementedError - def update(self, w_other): + def update(self, w_set, w_other): raise NotImplementedError - def has_key(self, w_key): + def has_key(self, w_set, w_key): raise NotImplementedError - def equals(self, w_other): + def equals(self, w_set, w_other): raise NotImplementedError def iter(self, w_set): raise NotImplementedError - def popitem(self): + def popitem(self, w_set): raise NotImplementedError class EmptySetStrategy(SetStrategy): From noreply at buildbot.pypy.org Thu Nov 10 13:52:15 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:15 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: one more abstract method fix Message-ID: <20111110125215.1A87E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49248:4b7161bb5ef7 Date: 2011-10-14 15:59 +0200 http://bitbucket.org/pypy/pypy/changeset/4b7161bb5ef7/ Log: one more abstract method fix diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -204,7 +204,7 @@ # __________________ methods called on W_SetObject _________________ - def clear(self): + def clear(self, w_set): raise NotImplementedError def copy(self, w_set): From noreply at buildbot.pypy.org Thu Nov 10 13:52:16 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:16 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: use the correct dict (here: r_dict for wrapped items) Message-ID: <20111110125216.453418292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49249:78ae9026b827 Date: 2011-10-14 16:08 +0200 http://bitbucket.org/pypy/pypy/changeset/78ae9026b827/ Log: use the correct dict (here: r_dict for wrapped items) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -481,13 +481,14 @@ return True def _difference_wrapped(self, w_set, w_other): - d_new = self.get_empty_dict() + strategy = self.space.fromcache(ObjectSetStrategy) + + d_new = strategy.get_empty_dict() for obj in self.unerase(w_set.sstorage): w_item = self.wrap(obj) if not w_other.has_key(w_item): d_new[w_item] = None - strategy = self.space.fromcache(ObjectSetStrategy) return strategy.erase(d_new) def _difference_unwrapped(self, w_set, w_other): From noreply at buildbot.pypy.org Thu Nov 10 13:52:17 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:17 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: forgot self in method _isdisjoint_wrapped Message-ID: <20111110125217.6CFC68292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49250:0dacc5b60316 Date: 2011-10-14 16:23 +0200 http://bitbucket.org/pypy/pypy/changeset/0dacc5b60316/ Log: forgot self in method _isdisjoint_wrapped diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -667,7 +667,7 @@ return False return True - def _isdisjoint_wrapped(w_set, w_other): + def _isdisjoint_wrapped(self, w_set, w_other): d = self.unerase(w_set.sstorage) for key in d: if w_other.has_key(self.wrap(key)): From noreply at buildbot.pypy.org Thu Nov 10 13:52:18 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:18 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: also copy storage of frozenset to avoid changing frozenset in methods like intersection, difference, etc Message-ID: <20111110125218.9636D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49251:ddc6e9d447f3 Date: 2011-10-18 12:12 +0200 http://bitbucket.org/pypy/pypy/changeset/ddc6e9d447f3/ Log: also copy storage of frozenset to avoid changing frozenset in methods like intersection, difference, etc diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -66,9 +66,9 @@ """ Removes all elements from the set. """ self.strategy.clear(self) - def copy(self): - """ Returns a clone of the set. """ - return self.strategy.copy(self) + def copy_real(self): + """ Returns a clone of the set. Frozensets storages are also copied.""" + return self.strategy.copy_real(self) def length(self): """ Returns the number of items inside the set. """ @@ -207,7 +207,7 @@ def clear(self, w_set): raise NotImplementedError - def copy(self, w_set): + def copy_real(self, w_set): raise NotImplementedError def length(self, w_set): @@ -302,7 +302,7 @@ def clear(self, w_set): pass - def copy(self, w_set): + def copy_real(self, w_set): storage = self.erase(None) clone = w_set.from_storage_and_strategy(storage, self) return clone @@ -340,22 +340,22 @@ return False def difference(self, w_set, w_other): - return w_set.copy() + return w_set.copy_real() def difference_update(self, w_set, w_other): self.check_for_unhashable_objects(w_other) def intersect(self, w_set, w_other): self.check_for_unhashable_objects(w_other) - return w_set.copy() + return w_set.copy_real() def intersect_update(self, w_set, w_other): self.check_for_unhashable_objects(w_other) - return w_set.copy() + return w_set.copy_real() def intersect_multiple(self, w_set, others_w): self.intersect_multiple_update(w_set, others_w) - return w_set.copy() + return w_set.copy_real() def intersect_multiple_update(self, w_set, others_w): for w_other in others_w: @@ -368,7 +368,7 @@ return True def symmetric_difference(self, w_set, w_other): - return w_other.copy() + return w_other.copy_real() def symmetric_difference_update(self, w_set, w_other): w_set.strategy = w_other.strategy @@ -412,10 +412,14 @@ def clear(self, w_set): w_set.switch_to_empty_strategy() - def copy(self, w_set): + def copy_real(self, w_set): strategy = w_set.strategy if isinstance(w_set, W_FrozensetObject): - storage = w_set.sstorage + # only used internally since frozenset().copy() + # returns self in frozenset_copy__Frozenset + d = self.unerase(w_set.sstorage) + storage = self.erase(d.copy()) + #storage = w_set.sstorage else: d = self.unerase(w_set.sstorage) storage = self.erase(d.copy()) @@ -621,7 +625,7 @@ def intersect_multiple(self, w_set, others_w): #XXX find smarter implementations - result = w_set.copy() + result = w_set.copy_real() for w_other in others_w: if isinstance(w_other, W_BaseSetObject): # optimization only @@ -927,7 +931,7 @@ w_left.add(w_other) def set_copy__Set(space, w_set): - return w_set.copy() + return w_set.copy_real() def frozenset_copy__Frozenset(space, w_left): if type(w_left) is W_FrozensetObject: @@ -947,8 +951,8 @@ def set_difference__Set(space, w_left, others_w): if len(others_w) == 0: - return w_left.copy() - result = w_left.copy() + return w_left.copy_real() + result = w_left.copy_real() set_difference_update__Set(space, result, others_w) return result @@ -1176,7 +1180,7 @@ def set_intersection__Set(space, w_left, others_w): if len(others_w) == 0: - return w_left.copy() + return w_left.copy_real() else: return _intersection_multiple(space, w_left, others_w) @@ -1250,7 +1254,7 @@ inplace_xor__Set_Frozenset = inplace_xor__Set_Set def or__Set_Set(space, w_left, w_other): - w_copy = w_left.copy() + w_copy = w_left.copy_real() w_copy.update(w_other) return w_copy @@ -1259,7 +1263,7 @@ or__Frozenset_Frozenset = or__Set_Set def set_union__Set(space, w_left, others_w): - result = w_left.copy() + result = w_left.copy_real() for w_other in others_w: if isinstance(w_other, W_BaseSetObject): result.update(w_other) # optimization only diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -717,3 +717,53 @@ x.pop() assert x == set([2,3]) assert y == set([1,2,3]) + + def test_never_change_frozenset(self): + a = frozenset([1,2]) + b = a.copy() + assert a is b + + a = frozenset([1,2]) + b = a.union(set([3,4])) + assert b == set([1,2,3,4]) + assert a == set([1,2]) + + a = frozenset() + b = a.union(set([3,4])) + assert b == set([3,4]) + assert a == set() + + a = frozenset([1,2])#multiple + b = a.union(set([3,4]),[5,6]) + assert b == set([1,2,3,4,5,6]) + assert a == set([1,2]) + + a = frozenset([1,2,3]) + b = a.difference(set([3,4,5])) + assert b == set([1,2]) + assert a == set([1,2,3]) + + a = frozenset([1,2,3])#multiple + b = a.difference(set([3]), [2]) + assert b == set([1]) + assert a == set([1,2,3]) + + a = frozenset([1,2,3]) + b = a.symmetric_difference(set([3,4,5])) + assert b == set([1,2,4,5]) + assert a == set([1,2,3]) + + a = frozenset([1,2,3]) + b = a.intersection(set([3,4,5])) + assert b == set([3]) + assert a == set([1,2,3]) + + a = frozenset([1,2,3])#multiple + b = a.intersection(set([2,3,4]), [2]) + assert b == set([2]) + assert a == set([1,2,3]) + + raises(AttributeError, "frozenset().update()") + raises(AttributeError, "frozenset().difference_update()") + raises(AttributeError, "frozenset().symmetric_difference_update()") + raises(AttributeError, "frozenset().intersection_update()") From noreply at buildbot.pypy.org Thu Nov 10 13:52:19 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:19 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: just check for unhashable objects here Message-ID: <20111110125219.C0C108292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49252:3d1995ca1028 Date: 2011-10-18 13:28 +0200 http://bitbucket.org/pypy/pypy/changeset/3d1995ca1028/ Log: just check for unhashable objects here diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -359,7 +359,7 @@ def intersect_multiple_update(self, w_set, others_w): for w_other in others_w: - self.intersect(w_set, w_other) + self.check_for_unhashable_objects(w_other) def isdisjoint(self, w_set, w_other): return True From noreply at buildbot.pypy.org Thu Nov 10 13:52:20 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:20 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: erasing bug in _intersection_wrapped. added test and fix Message-ID: <20111110125220.EC07F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49253:ef85a53cfb2c Date: 2011-10-18 15:17 +0200 http://bitbucket.org/pypy/pypy/changeset/ef85a53cfb2c/ Log: erasing bug in _intersection_wrapped. added test and fix diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -584,17 +584,18 @@ storage = strategy._intersect_unwrapped(w_set, w_other) else: strategy = self.space.fromcache(ObjectSetStrategy) - storage = strategy._intersect_wrapped(w_set, w_other) + storage = self._intersect_wrapped(w_set, w_other) return storage, strategy def _intersect_wrapped(self, w_set, w_other): result = self.get_empty_dict() - items = self.unerase(w_set.sstorage).iterkeys() - for key in items: + for key in self.unerase(w_set.sstorage): w_key = self.wrap(key) if w_other.has_key(w_key): result[w_key] = None - return self.erase(result) + + strategy = self.space.fromcache(ObjectSetStrategy) + return strategy.erase(result) def _intersect_unwrapped(self, w_set, w_other): result = self.get_empty_dict() diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -514,6 +514,11 @@ assert s1 == set([1,2,3,4]) assert s2 == set([1,2,3,4]) + def test_intersection_string(self): + s = set([1,2,3]) + o = 'abc' + assert s.intersection(o) == set() + def test_difference(self): assert set([1,2,3]).difference(set([2,3,4])) == set([1]) assert set([1,2,3]).difference(frozenset([2,3,4])) == set([1]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:22 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:22 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: need to use r_dict when storing wrapped objects Message-ID: <20111110125222.2392E8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49254:766b7c29656f Date: 2011-10-18 15:46 +0200 http://bitbucket.org/pypy/pypy/changeset/766b7c29656f/ Log: need to use r_dict when storing wrapped objects diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -588,7 +588,7 @@ return storage, strategy def _intersect_wrapped(self, w_set, w_other): - result = self.get_empty_dict() + result = newset(self.space) for key in self.unerase(w_set.sstorage): w_key = self.wrap(key) if w_other.has_key(w_key): From noreply at buildbot.pypy.org Thu Nov 10 13:52:23 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:23 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: more test coverage Message-ID: <20111110125223.69F138292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49255:64772ab889de Date: 2011-10-19 16:36 +0200 http://bitbucket.org/pypy/pypy/changeset/64772ab889de/ Log: more test coverage diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -138,6 +138,10 @@ c = [1,2,3,4] assert b.issubset(c) + a = set([1,2,3,4]) + b = set(['1','2']) + assert not b.issubset(a) + def test_issuperset(self): a = set([1,2,3,4]) b = set([2,3]) @@ -149,6 +153,10 @@ assert a.issuperset(c) assert set([1,1,1,1,1]).issubset(a) + a = set([1,2,3]) + assert a.issuperset(a) + assert not a.issuperset(set([1,2,3,4,5])) + def test_inplace_and(test): a = set([1,2,3,4]) b = set([0,2,3,5,6]) @@ -185,6 +193,11 @@ c = a.symmetric_difference(b) assert c == set([1,2,4,5]) + a = set([1,2,3]) + b = set('abc') + c = a.symmetric_difference(b) + assert c == set([1,2,3,'a','b','c']) + def test_symmetric_difference_update(self): a = set([1,2,3]) b = set([3,4,5]) @@ -277,6 +290,8 @@ assert (set('abc') != set('abcd')) assert (frozenset('abc') != frozenset('abcd')) assert (frozenset('abc') != set('abcd')) + assert set() != set('abc') + assert set('abc') != set('abd') def test_libpython_equality(self): for thetype in [frozenset, set]: @@ -479,6 +494,7 @@ assert not set([1,2,5]).isdisjoint(frozenset([4,5,6])) assert not set([1,2,5]).isdisjoint([4,5,6]) assert not set([1,2,5]).isdisjoint((4,5,6)) + assert set([1,2,3]).isdisjoint(set([3.5,4.0])) def test_intersection(self): assert set([1,2,3]).intersection(set([2,3,4])) == set([2,3]) @@ -519,6 +535,12 @@ o = 'abc' assert s.intersection(o) == set() + def test_intersection_float(self): + a = set([1,2,3]) + b = set([3.0,4.0,5.0]) + c = a.intersection(b) + assert c == set([3.0]) + def test_difference(self): assert set([1,2,3]).difference(set([2,3,4])) == set([1]) assert set([1,2,3]).difference(frozenset([2,3,4])) == set([1]) @@ -535,6 +557,7 @@ assert s.difference() is not s assert set([1,2,3]).difference(set([2,3,4,'5'])) == set([1]) assert set([1,2,3,'5']).difference(set([2,3,4])) == set([1,'5']) + assert set().difference(set([1,2,3])) == set() def test_intersection_update(self): s = set([1,2,3,4,7]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:24 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:24 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: discard is deprecated. instead we use remove Message-ID: <20111110125224.95C4F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49256:0e94aadc3c7f Date: 2011-10-19 16:37 +0200 http://bitbucket.org/pypy/pypy/changeset/0e94aadc3c7f/ Log: discard is deprecated. instead we use remove diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -319,9 +319,6 @@ def remove(self, w_set, w_item): return False - def discard(self, w_set, w_item): - return False - def getdict_w(self, w_set): return newset(self.space) From noreply at buildbot.pypy.org Thu Nov 10 13:52:25 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:25 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: no need to check since w_other is always a set here Message-ID: <20111110125225.BF5578292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49257:cdcdf681bb20 Date: 2011-10-19 16:38 +0200 http://bitbucket.org/pypy/pypy/changeset/cdcdf681bb20/ Log: no need to check since w_other is always a set here diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -502,9 +502,6 @@ return self.erase(result_dict) def _difference_base(self, w_set, w_other): - if not isinstance(w_other, W_BaseSetObject): - w_other = w_set._newobj(self.space, w_other) - if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:52:26 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:26 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added StringStrategy for sets Message-ID: <20111110125226.E74008292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49258:4a416c0077b7 Date: 2011-11-02 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/4a416c0077b7/ Log: added StringStrategy for sets diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -12,6 +12,7 @@ from pypy.interpreter.generator import GeneratorIterator from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.stringobject import W_StringObject class W_BaseSetObject(W_Object): typedef = None @@ -705,6 +706,30 @@ self.space.wrap('pop from an empty set')) return self.wrap(result[0]) +class StringSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): + erase, unerase = rerased.new_erasing_pair("string") + erase = staticmethod(erase) + unerase = staticmethod(unerase) + + def get_empty_storage(self): + return self.erase({}) + + def get_empty_dict(self): + return {} + + def is_correct_type(self, w_key): + return type(w_key) is W_StringObject + + def unwrap(self, w_item): + return self.space.str_w(w_item) + + def wrap(self, item): + return self.space.wrap(item) + + def iter(self, w_set): + return StringIteratorImplementation(self.space, self, w_set) + + class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): erase, unerase = rerased.new_erasing_pair("integer") erase = staticmethod(erase) @@ -798,6 +823,19 @@ def next_entry(self): return None + +class StringIteratorImplementation(IteratorImplementation): + def __init__(self, space, strategy, w_set): + IteratorImplementation.__init__(self, space, w_set) + d = strategy.unerase(w_set.sstorage) + self.iterator = d.iterkeys() + + def next_entry(self): + for key in self.iterator: + return self.space.wrap(key) + else: + return None + class IntegerIteratorImplementation(IteratorImplementation): #XXX same implementation in dictmultiobject on dictstrategy-branch def __init__(self, space, strategy, dictimplementation): @@ -875,6 +913,8 @@ w_set.sstorage = strategy.get_empty_storage() return + #XXX check ints and strings at once + # check for integers for w_item in iterable_w: if type(w_item) is not W_IntObject: @@ -884,6 +924,15 @@ w_set.sstorage = w_set.strategy.get_storage_from_list(iterable_w) return + # check for strings + for w_item in iterable_w: + if type(w_item) is not W_StringObject: + break + else: + w_set.strategy = space.fromcache(StringSetStrategy) + w_set.sstorage = w_set.strategy.get_storage_from_list(iterable_w) + return + w_set.strategy = space.fromcache(ObjectSetStrategy) w_set.sstorage = w_set.strategy.get_storage_from_list(iterable_w) From noreply at buildbot.pypy.org Thu Nov 10 13:52:28 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:28 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fastpath for not comparable sets (starting with difference) Message-ID: <20111110125228.225248292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49259:06b2d8982ba0 Date: 2011-11-02 17:40 +0100 http://bitbucket.org/pypy/pypy/changeset/06b2d8982ba0/ Log: added fastpath for not comparable sets (starting with difference) diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -506,6 +506,9 @@ if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) + elif not_comparable(self.space, w_set.strategy, w_other.strategy): + strategy = w_set.strategy + storage = w_set.sstorage else: strategy = self.space.fromcache(ObjectSetStrategy) storage = self._difference_wrapped(w_set, w_other) @@ -891,6 +894,17 @@ # some helper functions +def not_comparable(space, strategy1, strategy2): + # add all strategies here that cannot be compared. this way is safer than + # adding only types that can be compared. else we get wrong results if + # someone adds new strategies and forgets to define them here. since this + # is only a fastpath we want to avoid possible errors + if strategy1 is space.fromcache(StringSetStrategy) and strategy2 is space.fromcache(IntegerSetStrategy): + return True + if strategy1 is space.fromcache(EmptySetStrategy) or strategy2 is space.fromcache(EmptySetStrategy): + return True + return False + def newset(space): return r_dict(space.eq_w, space.hash_w, force_non_null=True) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -604,6 +604,12 @@ x.symmetric_difference_update(set()) assert x == set([1,2,3]) + def test_difference_uncomparable_strategies(self): + a = set([1,2,3]) + b = set(["a","b","c"]) + assert a.difference(b) == a + assert b.difference(a) == b + def test_empty_intersect(self): e = set() x = set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:29 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:29 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: renamed not_comparable to more convenient not_contain_equal_elements Message-ID: <20111110125229.4A7628292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49260:d62d426fd752 Date: 2011-11-03 15:42 +0100 http://bitbucket.org/pypy/pypy/changeset/d62d426fd752/ Log: renamed not_comparable to more convenient not_contain_equal_elements diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -506,7 +506,7 @@ if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) - elif not_comparable(self.space, w_set.strategy, w_other.strategy): + elif not_contain_equal_elements(self.space, w_set, w_other): strategy = w_set.strategy storage = w_set.sstorage else: @@ -894,14 +894,14 @@ # some helper functions -def not_comparable(space, strategy1, strategy2): - # add all strategies here that cannot be compared. this way is safer than - # adding only types that can be compared. else we get wrong results if - # someone adds new strategies and forgets to define them here. since this - # is only a fastpath we want to avoid possible errors +def not_contain_equal_elements(space, w_set, w_other): + strategy1 = w_set.strategy + strategy2 = w_other.strategy + # add strategies here for which elements from sets with theses strategies are never equal. if strategy1 is space.fromcache(StringSetStrategy) and strategy2 is space.fromcache(IntegerSetStrategy): return True if strategy1 is space.fromcache(EmptySetStrategy) or strategy2 is space.fromcache(EmptySetStrategy): + # an empty set and another set will never have any equal element return True return False From noreply at buildbot.pypy.org Thu Nov 10 13:52:30 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:30 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: this is done with not_contain_equal_elements Message-ID: <20111110125230.736778292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49261:926bd0d9d481 Date: 2011-11-03 15:44 +0100 http://bitbucket.org/pypy/pypy/changeset/926bd0d9d481/ Log: this is done with not_contain_equal_elements diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -515,13 +515,11 @@ return storage, strategy def difference(self, w_set, w_other): - #XXX return clone for ANY with Empty (and later different strategies) storage, strategy = self._difference_base(w_set, w_other) w_newset = w_set.from_storage_and_strategy(storage, strategy) return w_newset def difference_update(self, w_set, w_other): - #XXX do nothing for ANY with Empty storage, strategy = self._difference_base(w_set, w_other) w_set.strategy = strategy w_set.sstorage = storage From noreply at buildbot.pypy.org Thu Nov 10 13:52:31 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:31 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fastpath to intersection and fixed not_contain_equal_elements Message-ID: <20111110125231.9DA398292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49262:fca421c60d1d Date: 2011-11-03 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/fca421c60d1d/ Log: added fastpath to intersection and fixed not_contain_equal_elements diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -578,6 +578,9 @@ if self is w_other.strategy: strategy = w_set.strategy storage = strategy._intersect_unwrapped(w_set, w_other) + elif not_contain_equal_elements(self.space, w_set, w_other): + strategy = self.space.fromcache(EmptySetStrategy) + storage = strategy.get_empty_storage() else: strategy = self.space.fromcache(ObjectSetStrategy) storage = self._intersect_wrapped(w_set, w_other) @@ -893,11 +896,16 @@ # some helper functions def not_contain_equal_elements(space, w_set, w_other): + # add strategies here for which elements from sets with theses strategies are never equal. + strategy1 = w_set.strategy strategy2 = w_other.strategy - # add strategies here for which elements from sets with theses strategies are never equal. + if strategy1 is space.fromcache(StringSetStrategy) and strategy2 is space.fromcache(IntegerSetStrategy): return True + if strategy1 is space.fromcache(IntegerSetStrategy) and strategy2 is space.fromcache(StringSetStrategy): + return True + if strategy1 is space.fromcache(EmptySetStrategy) or strategy2 is space.fromcache(EmptySetStrategy): # an empty set and another set will never have any equal element return True diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -604,12 +604,17 @@ x.symmetric_difference_update(set()) assert x == set([1,2,3]) - def test_difference_uncomparable_strategies(self): + def test_fastpath_with_strategies(self): a = set([1,2,3]) b = set(["a","b","c"]) assert a.difference(b) == a assert b.difference(a) == b + a = set([1,2,3]) + b = set(["a","b","c"]) + assert a.intersection(b) == set() + assert b.intersection(a) == set() + def test_empty_intersect(self): e = set() x = set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:32 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:32 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: in intersection_multiple start with the smallest to avoid unnecessary comparisons Message-ID: <20111110125232.C5BA28292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49263:001538c05f0e Date: 2011-11-03 17:19 +0100 http://bitbucket.org/pypy/pypy/changeset/001538c05f0e/ Log: in intersection_multiple start with the smallest to avoid unnecessary comparisons diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -626,7 +626,29 @@ def intersect_multiple(self, w_set, others_w): #XXX find smarter implementations result = w_set.copy_real() + + # find smallest set in others_w to reduce comparisons + # XXX maybe we can do this smarter + if len(others_w) > 1: + startset, startlength = None, 0 + for w_other in others_w: + try: + length = self.space.len(w_other) + except OperationError, e: + if not e.match(self.space, self.space.w_TypeError): + raise + continue + + if startset is None or self.space.is_true(self.space.lt(length, startlength)): + startset = w_other + startlength = length + + others_w[others_w.index(startset)] = others_w[0] + others_w[0] = startset + for w_other in others_w: + if result.length() == 0: + break if isinstance(w_other, W_BaseSetObject): # optimization only result.intersect_update(w_other) From noreply at buildbot.pypy.org Thu Nov 10 13:52:33 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:33 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: use string strategy when appending string to empty set Message-ID: <20111110125233.F122F8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49264:a7633ebf174b Date: 2011-11-04 14:49 +0100 http://bitbucket.org/pypy/pypy/changeset/a7633ebf174b/ Log: use string strategy when appending string to empty set diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -311,6 +311,8 @@ def add(self, w_set, w_key): if type(w_key) is W_IntObject: strategy = self.space.fromcache(IntegerSetStrategy) + elif type(w_key) is W_StringObject: + strategy = self.space.fromcache(StringSetStrategy) else: strategy = self.space.fromcache(ObjectSetStrategy) w_set.strategy = strategy From noreply at buildbot.pypy.org Thu Nov 10 13:52:35 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:35 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: delegated not_contain_equal_elements method to strategies Message-ID: <20111110125235.257138292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49265:a7b6365fb35c Date: 2011-11-04 15:07 +0100 http://bitbucket.org/pypy/pypy/changeset/a7b6365fb35c/ Log: delegated not_contain_equal_elements method to strategies diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -508,7 +508,7 @@ if self is w_other.strategy: strategy = w_set.strategy storage = self._difference_unwrapped(w_set, w_other) - elif not_contain_equal_elements(self.space, w_set, w_other): + elif not w_set.strategy.may_contain_equal_elements(w_other.strategy): strategy = w_set.strategy storage = w_set.sstorage else: @@ -580,7 +580,7 @@ if self is w_other.strategy: strategy = w_set.strategy storage = strategy._intersect_unwrapped(w_set, w_other) - elif not_contain_equal_elements(self.space, w_set, w_other): + elif not w_set.strategy.may_contain_equal_elements(w_other.strategy): strategy = self.space.fromcache(EmptySetStrategy) storage = strategy.get_empty_storage() else: @@ -748,6 +748,13 @@ def is_correct_type(self, w_key): return type(w_key) is W_StringObject + def may_contain_equal_elements(self, strategy): + if strategy is self.space.fromcache(IntegerSetStrategy): + return False + if strategy is self.space.fromcache(EmptySetStrategy): + return False + return True + def unwrap(self, w_item): return self.space.str_w(w_item) @@ -757,7 +764,6 @@ def iter(self, w_set): return StringIteratorImplementation(self.space, self, w_set) - class IntegerSetStrategy(AbstractUnwrappedSetStrategy, SetStrategy): erase, unerase = rerased.new_erasing_pair("integer") erase = staticmethod(erase) @@ -773,6 +779,13 @@ from pypy.objspace.std.intobject import W_IntObject return type(w_key) is W_IntObject + def may_contain_equal_elements(self, strategy): + if strategy is self.space.fromcache(StringSetStrategy): + return False + if strategy is self.space.fromcache(EmptySetStrategy): + return False + return True + def unwrap(self, w_item): return self.space.int_w(w_item) @@ -796,6 +809,11 @@ def is_correct_type(self, w_key): return True + def may_contain_equal_elements(self, strategy): + if strategy is self.space.fromcache(EmptySetStrategy): + return False + return True + def unwrap(self, w_item): return w_item @@ -919,22 +937,6 @@ # some helper functions -def not_contain_equal_elements(space, w_set, w_other): - # add strategies here for which elements from sets with theses strategies are never equal. - - strategy1 = w_set.strategy - strategy2 = w_other.strategy - - if strategy1 is space.fromcache(StringSetStrategy) and strategy2 is space.fromcache(IntegerSetStrategy): - return True - if strategy1 is space.fromcache(IntegerSetStrategy) and strategy2 is space.fromcache(StringSetStrategy): - return True - - if strategy1 is space.fromcache(EmptySetStrategy) or strategy2 is space.fromcache(EmptySetStrategy): - # an empty set and another set will never have any equal element - return True - return False - def newset(space): return r_dict(space.eq_w, space.hash_w, force_non_null=True) From noreply at buildbot.pypy.org Thu Nov 10 13:52:36 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:36 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fastpath for issubset and isdisjoint Message-ID: <20111110125236.4F6F88292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49266:41bcb4199af4 Date: 2011-11-07 12:18 +0100 http://bitbucket.org/pypy/pypy/changeset/41bcb4199af4/ Log: added fastpath for issubset and isdisjoint diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -684,6 +684,8 @@ if w_set.strategy is w_other.strategy: return self._issubset_unwrapped(w_set, w_other) + elif not w_set.strategy.may_contain_equal_elements(w_other.strategy): + return False else: return self._issubset_wrapped(w_set, w_other) @@ -710,6 +712,8 @@ if w_set.strategy is w_other.strategy: return self._isdisjoint_unwrapped(w_set, w_other) + elif not w_set.strategy.may_contain_equal_elements(w_other.strategy): + return True else: return self._isdisjoint_wrapped(w_set, w_other) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -615,6 +615,16 @@ assert a.intersection(b) == set() assert b.intersection(a) == set() + a = set([1,2,3]) + b = set(["a","b","c"]) + assert not a.issubset(b) + assert not b.issubset(a) + + a = set([1,2,3]) + b = set(["a","b","c"]) + assert a.isdisjoint(b) + assert b.isdisjoint(a) + def test_empty_intersect(self): e = set() x = set([1,2,3]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:37 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:37 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: optimized intersection_multiple some more Message-ID: <20111110125237.7BAB68292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49267:031e88af4605 Date: 2011-11-07 14:07 +0100 http://bitbucket.org/pypy/pypy/changeset/031e88af4605/ Log: optimized intersection_multiple some more diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -120,14 +120,6 @@ """ Keeps only those elements found in both sets, removing all other elements. """ return self.strategy.intersect_update(self, w_other) - def intersect_multiple(self, others_w): - """ Returns a new set of all elements that exist in all of the given iterables.""" - return self.strategy.intersect_multiple(self, others_w) - - def intersect_multiple_update(self, others_w): - """ Same as intersect_multiple but overwrites this set with the result. """ - self.strategy.intersect_multiple_update(self, others_w) - def issubset(self, w_other): """ Checks wether this set is a subset of w_other. W_other must be a set. """ return self.strategy.issubset(self, w_other) @@ -247,12 +239,6 @@ def intersect_update(self, w_set, w_other): raise NotImplementedError - def intersect_multiple(self, w_set, others_w): - raise NotImplementedError - - def intersect_multiple_update(self, w_set, others_w): - raise NotImplementedError - def issubset(self, w_set, w_other): raise NotImplementedError @@ -353,14 +339,6 @@ self.check_for_unhashable_objects(w_other) return w_set.copy_real() - def intersect_multiple(self, w_set, others_w): - self.intersect_multiple_update(w_set, others_w) - return w_set.copy_real() - - def intersect_multiple_update(self, w_set, others_w): - for w_other in others_w: - self.check_for_unhashable_objects(w_other) - def isdisjoint(self, w_set, w_other): return True @@ -625,45 +603,6 @@ w_set.sstorage = storage return w_set - def intersect_multiple(self, w_set, others_w): - #XXX find smarter implementations - result = w_set.copy_real() - - # find smallest set in others_w to reduce comparisons - # XXX maybe we can do this smarter - if len(others_w) > 1: - startset, startlength = None, 0 - for w_other in others_w: - try: - length = self.space.len(w_other) - except OperationError, e: - if not e.match(self.space, self.space.w_TypeError): - raise - continue - - if startset is None or self.space.is_true(self.space.lt(length, startlength)): - startset = w_other - startlength = length - - others_w[others_w.index(startset)] = others_w[0] - others_w[0] = startset - - for w_other in others_w: - if result.length() == 0: - break - if isinstance(w_other, W_BaseSetObject): - # optimization only - result.intersect_update(w_other) - else: - w_other_as_set = w_set._newobj(self.space, w_other) - result.intersect_update(w_other_as_set) - return result - - def intersect_multiple_update(self, w_set, others_w): - result = self.intersect_multiple(w_set, others_w) - w_set.strategy = result.strategy - w_set.sstorage = result.sstorage - def _issubset_unwrapped(self, w_set, w_other): d_other = self.unerase(w_other.sstorage) for item in self.unerase(w_set.sstorage): @@ -1270,7 +1209,36 @@ and__Frozenset_Frozenset = and__Set_Set def _intersection_multiple(space, w_left, others_w): - return w_left.intersect_multiple(others_w) + #XXX find smarter implementations + others_w.append(w_left) + + # find smallest set in others_w to reduce comparisons + startindex, startlength = -1, -1 + for i in range(len(others_w)): + w_other = others_w[i] + try: + length = space.int_w(space.len(w_other)) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + continue + + if length < startlength: + startindex = i + startlength = length + + others_w[i], others_w[0] = others_w[0], others_w[i] + + result = w_left._newobj(space, others_w[0]) + for i in range(1,len(others_w)): + w_other = others_w[i] + if isinstance(w_other, W_BaseSetObject): + # optimization only + result.intersect_update(w_other) + else: + w_other_as_set = w_left._newobj(space, w_other) + result.intersect_update(w_other_as_set) + return result def set_intersection__Set(space, w_left, others_w): if len(others_w) == 0: @@ -1281,7 +1249,9 @@ frozenset_intersection__Frozenset = set_intersection__Set def set_intersection_update__Set(space, w_left, others_w): - w_left.intersect_multiple_update(others_w) + result = set_intersection__Set(space, w_left, others_w) + w_left.strategy = result.strategy + w_left.sstorage = result.sstorage return def inplace_and__Set_Set(space, w_left, w_other): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -665,7 +665,7 @@ assert e.isdisjoint(x) == True assert x.isdisjoint(e) == True - def test_empty_typeerror(self): + def test_empty_unhashable(self): s = set() raises(TypeError, s.difference, [[]]) raises(TypeError, s.difference_update, [[]]) From noreply at buildbot.pypy.org Thu Nov 10 13:52:38 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:38 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added tests for intersection_multiple order Message-ID: <20111110125238.B38928292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49268:0bf8d5082b03 Date: 2011-11-07 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/0bf8d5082b03/ Log: added tests for intersection_multiple order diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -15,6 +15,7 @@ from pypy.objspace.std.setobject import set_intersection__Set from pypy.objspace.std.setobject import eq__Set_Set from pypy.conftest import gettestobjspace +from pypy.objspace.std.listobject import W_ListObject letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' @@ -54,6 +55,31 @@ s = self.space.newset() assert self.space.str_w(self.space.repr(s)) == 'set([])' + def test_intersection_order(self): + # theses tests make sure that intersection is done in the correct order + # (smallest first) + space = self.space + a = W_SetObject(self.space) + _initialize_set(self.space, a, self.space.wrap("abcdefg")) + a.intersect = None + + b = W_SetObject(self.space) + _initialize_set(self.space, b, self.space.wrap("abc")) + + result = set_intersection__Set(space, a, [b]) + assert space.is_true(self.space.eq(result, W_SetObject(space, self.space.wrap("abc")))) + + c = W_SetObject(self.space) + _initialize_set(self.space, c, self.space.wrap("e")) + + d = W_SetObject(self.space) + _initialize_set(self.space, d, self.space.wrap("ab")) + d.intersect = None + + result = set_intersection__Set(space, a, [d,c,b]) + assert space.is_true(self.space.eq(result, W_SetObject(space, self.space.wrap("")))) + + class AppTestAppSetTest: def setup_class(self): From noreply at buildbot.pypy.org Thu Nov 10 13:52:39 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:39 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: referenced i before assignment if others_w is None/empty Message-ID: <20111110125239.E96818292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49269:fc5601b33c58 Date: 2011-11-08 14:12 +0100 http://bitbucket.org/pypy/pypy/changeset/fc5601b33c58/ Log: referenced i before assignment if others_w is None/empty diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1227,7 +1227,8 @@ startindex = i startlength = length - others_w[i], others_w[0] = others_w[0], others_w[i] + if i > 0: + others_w[i], others_w[0] = others_w[0], others_w[i] result = w_left._newobj(space, others_w[0]) for i in range(1,len(others_w)): From noreply at buildbot.pypy.org Thu Nov 10 13:52:41 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:41 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: now we dont reference i before assignment anymore Message-ID: <20111110125241.2A22D8292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49270:4f4b06b3d3f8 Date: 2011-11-08 14:34 +0100 http://bitbucket.org/pypy/pypy/changeset/4f4b06b3d3f8/ Log: now we dont reference i before assignment anymore diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1213,7 +1213,7 @@ others_w.append(w_left) # find smallest set in others_w to reduce comparisons - startindex, startlength = -1, -1 + i, startindex, startlength = 0, -1, -1 for i in range(len(others_w)): w_other = others_w[i] try: @@ -1227,8 +1227,7 @@ startindex = i startlength = length - if i > 0: - others_w[i], others_w[0] = others_w[0], others_w[i] + others_w[i], others_w[0] = others_w[0], others_w[i] result = w_left._newobj(space, others_w[0]) for i in range(1,len(others_w)): From noreply at buildbot.pypy.org Thu Nov 10 13:52:42 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:42 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: other_w can't be resized Message-ID: <20111110125242.527638292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49271:2e5141d8fd6c Date: 2011-11-08 16:31 +0100 http://bitbucket.org/pypy/pypy/changeset/2e5141d8fd6c/ Log: other_w can't be resized diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1210,10 +1210,11 @@ def _intersection_multiple(space, w_left, others_w): #XXX find smarter implementations + others_w = others_w[:] # original others_w can't be resized others_w.append(w_left) # find smallest set in others_w to reduce comparisons - i, startindex, startlength = 0, -1, -1 + startindex, startlength = -1, -1 for i in range(len(others_w)): w_other = others_w[i] try: From noreply at buildbot.pypy.org Thu Nov 10 13:52:43 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 10 Nov 2011 13:52:43 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: what the hell did we do here!? Message-ID: <20111110125243.81D098292E@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r49272:67ea580d5c56 Date: 2011-11-08 17:16 +0100 http://bitbucket.org/pypy/pypy/changeset/67ea580d5c56/ Log: what the hell did we do here!? diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -1214,7 +1214,7 @@ others_w.append(w_left) # find smallest set in others_w to reduce comparisons - startindex, startlength = -1, -1 + startindex, startlength = 0, -1 for i in range(len(others_w)): w_other = others_w[i] try: @@ -1224,11 +1224,11 @@ raise continue - if length < startlength: + if startlength == -1 or length < startlength: startindex = i startlength = length - others_w[i], others_w[0] = others_w[0], others_w[i] + others_w[startindex], others_w[0] = others_w[0], others_w[startindex] result = w_left._newobj(space, others_w[0]) for i in range(1,len(others_w)): diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -74,7 +74,11 @@ d = W_SetObject(self.space) _initialize_set(self.space, d, self.space.wrap("ab")) - d.intersect = None + + # if ordering works correct we should start with set e + a.get_storage_copy = None + b.get_storage_copy = None + d.get_storage_copy = None result = set_intersection__Set(space, a, [d,c,b]) assert space.is_true(self.space.eq(result, W_SetObject(space, self.space.wrap("")))) From noreply at buildbot.pypy.org Thu Nov 10 13:52:47 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 10 Nov 2011 13:52:47 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merge default Message-ID: <20111110125247.08AFF8292E@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: set-strategies Changeset: r49273:c80d30d2d88e Date: 2011-11-10 13:40 +0100 http://bitbucket.org/pypy/pypy/changeset/c80d30d2d88e/ Log: merge default diff too long, truncating to 10000 out of 18659 lines diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -72,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -91,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -112,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -2925,14 +2925,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2967,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,8 +3013,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): @@ -3057,14 +3054,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3104,8 +3100,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3126,8 +3121,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3157,8 +3151,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3179,8 +3172,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3197,14 +3189,13 @@ def FunctionDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3215,14 +3206,13 @@ def FunctionDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3266,8 +3256,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3284,14 +3273,13 @@ def ClassDef_get_bases(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases @@ -3302,14 +3290,13 @@ def ClassDef_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3320,14 +3307,13 @@ def ClassDef_get_decorator_list(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list @@ -3372,8 +3358,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): @@ -3414,14 +3399,13 @@ def Delete_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3457,14 +3441,13 @@ def Assign_get_targets(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets @@ -3479,8 +3462,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): @@ -3527,8 +3509,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): @@ -3549,8 +3530,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3573,8 +3553,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): @@ -3621,8 +3600,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): @@ -3639,14 +3617,13 @@ def Print_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -3661,8 +3638,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3710,8 +3686,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): @@ -3732,8 +3707,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): @@ -3750,14 +3724,13 @@ def For_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3768,14 +3741,13 @@ def For_get_orelse(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3819,8 +3791,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): @@ -3837,14 +3808,13 @@ def While_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3855,14 +3825,13 @@ def While_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3905,8 +3874,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): @@ -3923,14 +3891,13 @@ def If_get_body(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3941,14 +3908,13 @@ def If_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -3991,8 +3957,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): @@ -4013,8 +3978,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): @@ -4031,14 +3995,13 @@ def With_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4080,8 +4043,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): @@ -4102,8 +4064,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): @@ -4124,8 +4085,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): @@ -4168,14 +4128,13 @@ def TryExcept_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4186,14 +4145,13 @@ def TryExcept_get_handlers(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers @@ -4204,14 +4162,13 @@ def TryExcept_get_orelse(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse @@ -4251,14 +4208,13 @@ def TryFinally_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -4269,14 +4225,13 @@ def TryFinally_get_finalbody(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody @@ -4318,8 +4273,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def Assert_set_test(space, w_self, w_new_value): @@ -4340,8 +4294,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): @@ -4383,14 +4336,13 @@ def Import_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4430,8 +4382,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4451,14 +4402,13 @@ def ImportFrom_get_names(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4473,8 +4423,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4522,8 +4471,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): @@ -4544,8 +4492,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): @@ -4566,8 +4513,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): @@ -4610,14 +4556,13 @@ def Global_get_names(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names @@ -4657,8 +4602,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): @@ -4754,8 +4698,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4776,8 +4719,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4807,8 +4749,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4827,14 +4768,13 @@ def BoolOp_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -4875,8 +4815,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): @@ -4897,8 +4836,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4921,8 +4859,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): @@ -4969,8 +4906,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4993,8 +4929,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): @@ -5040,8 +4975,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5062,8 +4996,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): @@ -5109,8 +5042,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): @@ -5131,8 +5063,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): @@ -5153,8 +5084,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): @@ -5197,14 +5127,13 @@ def Dict_get_keys(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys @@ -5215,14 +5144,13 @@ def Dict_get_values(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values @@ -5260,14 +5188,13 @@ def Set_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -5307,8 +5234,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): @@ -5325,14 +5251,13 @@ def ListComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5373,8 +5298,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): @@ -5391,14 +5315,13 @@ def SetComp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5439,8 +5362,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): @@ -5461,8 +5383,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): @@ -5479,14 +5400,13 @@ def DictComp_get_generators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5528,8 +5448,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): @@ -5546,14 +5465,13 @@ def GeneratorExp_get_generators(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators @@ -5594,8 +5512,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): @@ -5640,8 +5557,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): @@ -5658,14 +5574,13 @@ def Compare_get_ops(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops @@ -5676,14 +5591,13 @@ def Compare_get_comparators(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators @@ -5726,8 +5640,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): @@ -5744,14 +5657,13 @@ def Call_get_args(space, w_self): if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -5762,14 +5674,13 @@ def Call_get_keywords(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords @@ -5784,8 +5695,7 @@ return w_obj if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): @@ -5806,8 +5716,7 @@ return w_obj if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): @@ -5858,8 +5767,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): @@ -5904,8 +5812,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5950,8 +5857,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5996,8 +5902,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): @@ -6018,8 +5923,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6040,8 +5944,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6090,8 +5993,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): @@ -6112,8 +6014,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): @@ -6134,8 +6035,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6184,8 +6084,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6206,8 +6105,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6251,14 +6149,13 @@ def List_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6273,8 +6170,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6319,14 +6215,13 @@ def Tuple_get_elts(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts @@ -6341,8 +6236,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6391,8 +6285,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6510,8 +6403,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): @@ -6532,8 +6424,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): @@ -6554,8 +6445,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): @@ -6598,14 +6488,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,8 +6534,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): @@ -6915,8 +6803,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): @@ -6937,8 +6824,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): @@ -6955,14 +6841,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7004,8 +6889,7 @@ return w_obj if not w_self.initialization_state & w_self._lineno_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7026,8 +6910,7 @@ return w_obj if not w_self.initialization_state & w_self._col_offset_mask: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7057,8 +6940,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): @@ -7079,8 +6961,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): @@ -7097,14 +6978,13 @@ def ExceptHandler_get_body(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -7142,14 +7022,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list return w_self.w_args @@ -7164,8 +7043,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'vararg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'vararg') return space.wrap(w_self.vararg) def arguments_set_vararg(space, w_self, w_new_value): @@ -7189,8 +7067,7 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwarg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg') return space.wrap(w_self.kwarg) def arguments_set_kwarg(space, w_self, w_new_value): @@ -7210,14 +7087,13 @@ def arguments_get_defaults(space, w_self): if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'defaults'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults') if w_self.w_defaults is None: if w_self.defaults is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.defaults] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_defaults = w_list return w_self.w_defaults @@ -7261,8 +7137,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'arg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'arg') return space.wrap(w_self.arg) def keyword_set_arg(space, w_self, w_new_value): @@ -7283,8 +7158,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def keyword_set_value(space, w_self, w_new_value): @@ -7330,8 +7204,7 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def alias_set_name(space, w_self, w_new_value): @@ -7352,8 +7225,7 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'asname'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'asname') return space.wrap(w_self.asname) def alias_set_asname(space, w_self, w_new_value): diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -414,13 +414,12 @@ self.emit(" return w_obj", 1) self.emit("if not w_self.initialization_state & %s:" % (flag,), 1) self.emit("typename = space.type(w_self).getname(space)", 2) - self.emit("w_err = space.wrap(\"'%%s' object has no attribute '%s'\" %% typename)" % + self.emit("raise operationerrfmt(space.w_AttributeError, \"'%%s' object has no attribute '%%s'\", typename, '%s')" % (field.name,), 2) - self.emit("raise OperationError(space.w_AttributeError, w_err)", 2) if field.seq: self.emit("if w_self.w_%s is None:" % (field.name,), 1) self.emit("if w_self.%s is None:" % (field.name,), 2) - self.emit("w_list = space.newlist([])", 3) + self.emit("list_w = []", 3) self.emit("else:", 2) if field.type.value in self.data.simple_types: wrapper = "%s_to_class[node - 1]()" % (field.type,) @@ -428,7 +427,7 @@ wrapper = "space.wrap(node)" self.emit("list_w = [%s for node in w_self.%s]" % (wrapper, field.name), 3) - self.emit("w_list = space.newlist(list_w)", 3) + self.emit("w_list = space.newlist(list_w)", 2) self.emit("w_self.w_%s = w_list" % (field.name,), 2) self.emit("return w_self.w_%s" % (field.name,), 1) elif field.type.value in self.data.simple_types: @@ -540,7 +539,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -639,9 +638,7 @@ missing = required[i] if missing is not None: err = "required field \\"%s\\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) + raise operationerrfmt(space.w_TypeError, err, missing, host) raise AssertionError("should not reach here") diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -391,8 +391,11 @@ def decrement_ticker(self, by): value = self._ticker if self.has_bytecode_counter: # this 'if' is constant-folded - value -= by - self._ticker = value + if jit.isconstant(by) and by == 0: + pass # normally constant-folded too + else: + value -= by + self._ticker = value return value diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/pyparser/pytokenizer.py b/pypy/interpreter/pyparser/pytokenizer.py --- a/pypy/interpreter/pyparser/pytokenizer.py +++ b/pypy/interpreter/pyparser/pytokenizer.py @@ -226,7 +226,7 @@ parenlev = parenlev - 1 if parenlev < 0: raise TokenError("unmatched '%s'" % initial, line, - lnum-1, 0, token_list) + lnum, start + 1, token_list) if token in python_opmap: punct = python_opmap[token] else: diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -87,6 +87,10 @@ assert exc.lineno == 1 assert exc.offset == 5 assert exc.lastlineno == 5 + exc = py.test.raises(SyntaxError, parse, "abc)").value + assert exc.msg == "unmatched ')'" + assert exc.lineno == 1 + assert exc.offset == 4 def test_is(self): self.parse("x is y") diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() + +def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): + cache = gc_ll_descr._cache_interiorfield + try: + return cache[(ARRAY, FIELDTP, name)] + except KeyError: + arraydescr = get_array_descr(gc_ll_descr, ARRAY) + fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + descr = InteriorFieldDescr(arraydescr, fielddescr) + cache[(ARRAY, FIELDTP, name)] = descr + return descr # ____________________________________________________________ # CallDescrs @@ -260,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -306,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -400,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) @@ -525,7 +578,8 @@ # if TYPE is lltype.Float or is_longlong(TYPE): setattr(Descr, floatattrname, True) - elif TYPE is not lltype.Bool and rffi.cast(TYPE, -1) == -1: + elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): setattr(Descr, signedattrname, True) # _cache[nameprefix, TYPE] = Descr diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -45,6 +45,14 @@ def freeing_block(self, start, stop): pass + def get_funcptr_for_newarray(self): + return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array) + def get_funcptr_for_newstr(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str) + def get_funcptr_for_newunicode(self): + return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode) + + def record_constptrs(self, op, gcrefs_output_list): for i in range(op.numargs()): v = op.getarg(i) @@ -96,6 +104,39 @@ malloc_fn_ptr = self.configure_boehm_once() self.funcptr_for_new = malloc_fn_ptr + def malloc_array(basesize, itemsize, ofs_length, num_elem): + try: + size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + except OverflowError: + return lltype.nullptr(llmemory.GCREF.TO) + res = self.funcptr_for_new(size) + if not res: + return res + rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + return res + self.malloc_array = malloc_array + self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( + [lltype.Signed] * 4, llmemory.GCREF)) + + + (str_basesize, str_itemsize, str_ofs_length + ) = symbolic.get_array_token(rstr.STR, self.translate_support_code) + (unicode_basesize, unicode_itemsize, unicode_ofs_length + ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) + def malloc_str(length): + return self.malloc_array( + str_basesize, str_itemsize, str_ofs_length, length + ) + def malloc_unicode(length): + return self.malloc_array( + unicode_basesize, unicode_itemsize, unicode_ofs_length, length + ) + self.malloc_str = malloc_str + self.malloc_unicode = malloc_unicode + self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( + [lltype.Signed], llmemory.GCREF)) + + # on some platform GC_init is required before any other # GC_* functions, call it here for the benefit of tests # XXX move this to tests @@ -116,39 +157,27 @@ ofs_length = arraydescr.get_ofs_length(self.translate_support_code) basesize = arraydescr.get_base_size(self.translate_support_code) itemsize = arraydescr.get_item_size(self.translate_support_code) - size = basesize + itemsize * num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_array(basesize, itemsize, ofs_length, num_elem) def gc_malloc_str(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.translate_support_code) - assert itemsize == 1 - size = basesize + num_elem - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_str(num_elem) def gc_malloc_unicode(self, num_elem): - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - size = basesize + num_elem * itemsize - res = self.funcptr_for_new(size) - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem - return res + return self.malloc_unicode(num_elem) def args_for_new(self, sizedescr): assert isinstance(sizedescr, BaseSizeDescr) return [sizedescr.size] + def args_for_new_array(self, arraydescr): + ofs_length = arraydescr.get_ofs_length(self.translate_support_code) + basesize = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + return [basesize, itemsize, ofs_length] + def get_funcptr_for_new(self): return self.funcptr_for_new - get_funcptr_for_newarray = None - get_funcptr_for_newstr = None - get_funcptr_for_newunicode = None - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): # record all GCREFs too, because Boehm cannot see them and keep them # alive if they end up as constants in the assembler @@ -620,10 +649,13 @@ def malloc_basic(size, tid): type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1<' # - cache = {} descr4 = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Ptr(S)) assert 'GcPtrCallDescr' in descr4.repr_of_descr() # @@ -412,10 +413,10 @@ ARGS = [lltype.Float, lltype.Ptr(ARRAY)] RES = lltype.Float - def f(a, b): + def f2(a, b): return float(b[0]) + a - fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f) + fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f2) descr2 = get_call_descr(c0, ARGS, RES) a = lltype.malloc(ARRAY, 3) opaquea = lltype.cast_opaque_ptr(llmemory.GCREF, a) diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -247,12 +247,14 @@ self.record = [] def do_malloc_fixedsize_clear(self, RESTYPE, type_id, size, - has_finalizer, contains_weakptr): + has_finalizer, has_light_finalizer, + contains_weakptr): assert not contains_weakptr + assert not has_finalizer # in these tests + assert not has_light_finalizer # in these tests p = llmemory.raw_malloc(size) p = llmemory.cast_adr_to_ptr(p, RESTYPE) - flags = int(has_finalizer) << 16 - tid = llop.combine_ushort(lltype.Signed, type_id, flags) + tid = llop.combine_ushort(lltype.Signed, type_id, 0) self.record.append(("fixedsize", repr(size), tid, p)) return p diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -1,5 +1,5 @@ from pypy.rlib.debug import debug_start, debug_print, debug_stop -from pypy.jit.metainterp import history, compile +from pypy.jit.metainterp import history class AbstractCPU(object): @@ -213,6 +213,10 @@ def typedescrof(TYPE): raise NotImplementedError + @staticmethod + def interiorfielddescrof(A, fieldname): + raise NotImplementedError + # ---------- the backend-dependent operations ---------- # lltype specific operations diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -5,7 +5,7 @@ BoxInt, Box, BoxPtr, LoopToken, ConstInt, ConstPtr, - BoxObj, Const, + BoxObj, ConstObj, BoxFloat, ConstFloat) from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.typesystem import deref @@ -111,7 +111,7 @@ self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) res = self.cpu.get_latest_value_int(0) - assert res == 3 + assert res == 3 assert fail.identifier == 1 def test_compile_loop(self): @@ -127,7 +127,7 @@ ] inputargs = [i0] operations[2].setfailargs([i1]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -148,7 +148,7 @@ ] inputargs = [i0] operations[2].setfailargs([None, None, i1, None]) - + self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 2) fail = self.cpu.execute_token(looptoken) @@ -372,7 +372,7 @@ for opnum, boxargs, retvalue in get_int_tests(): res = self.execute_operation(opnum, boxargs, 'int') assert res.value == retvalue - + def test_float_operations(self): from pypy.jit.metainterp.test.test_executor import get_float_tests for opnum, boxargs, rettype, retvalue in get_float_tests(self.cpu): @@ -438,7 +438,7 @@ def test_ovf_operations_reversed(self): self.test_ovf_operations(reversed=True) - + def test_bh_call(self): cpu = self.cpu # @@ -503,7 +503,7 @@ [funcbox, BoxInt(num), BoxInt(num)], 'int', descr=dyn_calldescr) assert res.value == 2 * num - + if cpu.supports_floats: def func(f0, f1, f2, f3, f4, f5, f6, i0, i1, f7, f8, f9): @@ -543,7 +543,7 @@ funcbox = self.get_funcbox(self.cpu, func_ptr) res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) - + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. @@ -615,7 +615,7 @@ res = self.execute_operation(rop.GETFIELD_GC, [t_box], 'int', descr=shortdescr) assert res.value == 1331 - + # u_box, U_box = self.alloc_instance(self.U) fielddescr2 = self.cpu.fielddescrof(self.S, 'next') @@ -695,7 +695,7 @@ def test_failing_guard_class(self): t_box, T_box = self.alloc_instance(self.T) - u_box, U_box = self.alloc_instance(self.U) + u_box, U_box = self.alloc_instance(self.U) null_box = self.null_instance() for opname, args in [(rop.GUARD_CLASS, [t_box, U_box]), (rop.GUARD_CLASS, [u_box, T_box]), @@ -787,7 +787,7 @@ r = self.execute_operation(rop.GETARRAYITEM_GC, [a_box, BoxInt(3)], 'int', descr=arraydescr) assert r.value == 160 - + # if isinstance(A, lltype.GcArray): A = lltype.Ptr(A) @@ -880,6 +880,73 @@ 'int', descr=arraydescr) assert r.value == 7441 + def test_array_of_structs(self): + TP = lltype.GcStruct('x') + ITEM = lltype.Struct('x', + ('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT), + ('k', lltype.Float), + ('p', lltype.Ptr(TP))) + a_box, A = self.alloc_array_of(ITEM, 15) + s_box, S = self.alloc_instance(TP) + kdescr = self.cpu.interiorfielddescrof(A, 'k') + pdescr = self.cpu.interiorfielddescrof(A, 'p') + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + boxfloat(1.5)], + 'void', descr=kdescr) + f = self.cpu.bh_getinteriorfield_gc_f(a_box.getref_base(), 3, kdescr) + assert longlong.getrealfloat(f) == 1.5 + self.cpu.bh_setinteriorfield_gc_f(a_box.getref_base(), 3, kdescr, longlong.getfloatstorage(2.5)) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'float', descr=kdescr) + assert r.getfloat() == 2.5 + # + NUMBER_FIELDS = [('vs', lltype.Signed), + ('vu', lltype.Unsigned), + ('vsc', rffi.SIGNEDCHAR), + ('vuc', rffi.UCHAR), + ('vss', rffi.SHORT), + ('vus', rffi.USHORT), + ('vsi', rffi.INT), + ('vui', rffi.UINT)] + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(3), + BoxInt(-15)], + 'void', descr=vdescr) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + i = self.cpu.bh_getinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr) + assert i == rffi.cast(lltype.Signed, rffi.cast(TYPE, -15)) + for name, TYPE in NUMBER_FIELDS[::-1]: + vdescr = self.cpu.interiorfielddescrof(A, name) + self.cpu.bh_setinteriorfield_gc_i(a_box.getref_base(), 3, + vdescr, -25) + for name, TYPE in NUMBER_FIELDS: + vdescr = self.cpu.interiorfielddescrof(A, name) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, + [a_box, BoxInt(3)], + 'int', descr=vdescr) + assert r.getint() == rffi.cast(lltype.Signed, rffi.cast(TYPE, -25)) + # + self.execute_operation(rop.SETINTERIORFIELD_GC, [a_box, BoxInt(4), + s_box], + 'void', descr=pdescr) + r = self.cpu.bh_getinteriorfield_gc_r(a_box.getref_base(), 4, pdescr) + assert r == s_box.getref_base() + self.cpu.bh_setinteriorfield_gc_r(a_box.getref_base(), 3, pdescr, + s_box.getref_base()) + r = self.execute_operation(rop.GETINTERIORFIELD_GC, [a_box, BoxInt(3)], + 'ref', descr=pdescr) + assert r.getref_base() == s_box.getref_base() + def test_string_basic(self): s_box = self.alloc_string("hello\xfe") r = self.execute_operation(rop.STRLEN, [s_box], 'int') @@ -1402,7 +1469,7 @@ addr = llmemory.cast_ptr_to_adr(func_ptr) return ConstInt(heaptracker.adr2int(addr)) - + MY_VTABLE = rclass.OBJECT_VTABLE # for tests only S = lltype.GcForwardReference() @@ -1439,7 +1506,6 @@ return BoxPtr(lltype.nullptr(llmemory.GCREF.TO)) def alloc_array_of(self, ITEM, length): - cpu = self.cpu A = lltype.GcArray(ITEM) a = lltype.malloc(A, length) a_box = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, a)) @@ -1468,20 +1534,16 @@ return u''.join(u.chars) - def test_casts(self): - py.test.skip("xxx fix or kill") - from pypy.rpython.lltypesystem import lltype, llmemory - TP = lltype.GcStruct('x') - x = lltype.malloc(TP) - x = lltype.cast_opaque_ptr(llmemory.GCREF, x) + def test_cast_int_to_ptr(self): + res = self.execute_operation(rop.CAST_INT_TO_PTR, + [BoxInt(-17)], 'ref').value + assert lltype.cast_ptr_to_int(res) == -17 + + def test_cast_ptr_to_int(self): + x = lltype.cast_int_to_ptr(llmemory.GCREF, -19) res = self.execute_operation(rop.CAST_PTR_TO_INT, - [BoxPtr(x)], 'int').value - expected = self.cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(x)) - assert rffi.get_real_int(res) == rffi.get_real_int(expected) - res = self.execute_operation(rop.CAST_PTR_TO_INT, - [ConstPtr(x)], 'int').value - expected = self.cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(x)) - assert rffi.get_real_int(res) == rffi.get_real_int(expected) + [BoxPtr(x)], 'int').value + assert res == -19 def test_ooops_non_gc(self): x = lltype.malloc(lltype.Struct('x'), flavor='raw') @@ -2299,13 +2361,6 @@ # cpu.bh_strsetitem(x, 4, ord('/')) assert str.chars[4] == '/' - # -## x = cpu.bh_newstr(5) -## y = cpu.bh_cast_ptr_to_int(x) -## z = cpu.bh_cast_ptr_to_int(x) -## y = rffi.get_real_int(y) -## z = rffi.get_real_int(z) -## assert type(y) == type(z) == int and y == z def test_sorting_of_fields(self): S = self.S @@ -2329,7 +2384,7 @@ for opname, arg, res in ops: self.execute_operation(opname, [arg], 'void') assert self.guard_failed == res - + lltype.free(x, flavor='raw') def test_assembler_call(self): @@ -2409,7 +2464,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2500,7 +2555,7 @@ FakeJitDriverSD.portal_calldescr = self.cpu.calldescrof( lltype.Ptr(lltype.FuncType(ARGS, RES)), ARGS, RES, EffectInfo.MOST_GENERAL) - + ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -2951,4 +3006,4 @@ def alloc_unicode(self, unicode): py.test.skip("implement me") - + diff --git a/pypy/jit/backend/test/test_ll_random.py b/pypy/jit/backend/test/test_ll_random.py --- a/pypy/jit/backend/test/test_ll_random.py +++ b/pypy/jit/backend/test/test_ll_random.py @@ -28,16 +28,27 @@ fork.structure_types_and_vtables = self.structure_types_and_vtables return fork - def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct): + def _choose_ptr_vars(self, from_, type, array_of_structs): + ptrvars = [] + for i in range(len(from_)): + v, S = from_[i][:2] + if not isinstance(S, type): + continue + if ((isinstance(S, lltype.Array) and + isinstance(S.OF, lltype.Struct)) == array_of_structs): + ptrvars.append((v, S)) + return ptrvars + + def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct, + array_of_structs=False): while True: - ptrvars = [(v, S) for (v, S) in self.ptrvars - if isinstance(S, type)] + ptrvars = self._choose_ptr_vars(self.ptrvars, type, + array_of_structs) if ptrvars and r.random() < 0.8: v, S = r.choice(ptrvars) else: - prebuilt_ptr_consts = [(v, S) - for (v, S, _) in self.prebuilt_ptr_consts - if isinstance(S, type)] + prebuilt_ptr_consts = self._choose_ptr_vars( + self.prebuilt_ptr_consts, type, array_of_structs) if prebuilt_ptr_consts and r.random() < 0.7: v, S = r.choice(prebuilt_ptr_consts) else: @@ -48,7 +59,8 @@ has_vtable=must_have_vtable) else: # create a new constant array - p = self.get_random_array(r) + p = self.get_random_array(r, + must_be_array_of_structs=array_of_structs) S = lltype.typeOf(p).TO v = ConstPtr(lltype.cast_opaque_ptr(llmemory.GCREF, p)) self.prebuilt_ptr_consts.append((v, S, @@ -74,7 +86,8 @@ TYPE = lltype.Signed return TYPE - def get_random_structure_type(self, r, with_vtable=None, cache=True): + def get_random_structure_type(self, r, with_vtable=None, cache=True, + type=lltype.GcStruct): if cache and self.structure_types and r.random() < 0.5: return r.choice(self.structure_types) fields = [] @@ -85,7 +98,7 @@ for i in range(r.randrange(1, 5)): TYPE = self.get_random_primitive_type(r) fields.append(('f%d' % i, TYPE)) - S = lltype.GcStruct('S%d' % self.counter, *fields, **kwds) + S = type('S%d' % self.counter, *fields, **kwds) self.counter += 1 if cache: self.structure_types.append(S) @@ -125,17 +138,29 @@ setattr(p, fieldname, rffi.cast(TYPE, r.random_integer())) return p - def get_random_array_type(self, r): - TYPE = self.get_random_primitive_type(r) + def get_random_array_type(self, r, can_be_array_of_struct=False, + must_be_array_of_structs=False): + if ((can_be_array_of_struct and r.random() < 0.1) or + must_be_array_of_structs): + TYPE = self.get_random_structure_type(r, cache=False, + type=lltype.Struct) + else: + TYPE = self.get_random_primitive_type(r) return lltype.GcArray(TYPE) - def get_random_array(self, r): - A = self.get_random_array_type(r) + def get_random_array(self, r, must_be_array_of_structs=False): + A = self.get_random_array_type(r, + must_be_array_of_structs=must_be_array_of_structs) length = (r.random_integer() // 15) % 300 # length: between 0 and 299 # likely to be small p = lltype.malloc(A, length) - for i in range(length): - p[i] = rffi.cast(A.OF, r.random_integer()) + if isinstance(A.OF, lltype.Primitive): + for i in range(length): + p[i] = rffi.cast(A.OF, r.random_integer()) + else: + for i in range(length): + for fname, TP in A.OF._flds.iteritems(): + setattr(p[i], fname, rffi.cast(TP, r.random_integer())) return p def get_index(self, length, r): @@ -155,8 +180,16 @@ dic[fieldname] = getattr(p, fieldname) else: assert isinstance(S, lltype.Array) - for i in range(len(p)): - dic[i] = p[i] + if isinstance(S.OF, lltype.Struct): + for i in range(len(p)): + item = p[i] + s1 = {} + for fieldname in S.OF._names: + s1[fieldname] = getattr(item, fieldname) + dic[i] = s1 + else: + for i in range(len(p)): + dic[i] = p[i] return dic def print_loop_prebuilt(self, names, writevar, s): @@ -220,7 +253,7 @@ class GetFieldOperation(test_random.AbstractOperation): def field_descr(self, builder, r): - v, S = builder.get_structptr_var(r) + v, S = builder.get_structptr_var(r, ) names = S._names if names[0] == 'parent': names = names[1:] @@ -239,6 +272,28 @@ continue break +class GetInteriorFieldOperation(test_random.AbstractOperation): + def field_descr(self, builder, r): + v, A = builder.get_structptr_var(r, type=lltype.Array, + array_of_structs=True) + array = v.getref(lltype.Ptr(A)) + v_index = builder.get_index(len(array), r) + name = r.choice(A.OF._names) + descr = builder.cpu.interiorfielddescrof(A, name) + descr._random_info = 'cpu.interiorfielddescrof(%s, %r)' % (A.OF._name, + name) + TYPE = getattr(A.OF, name) + return v, v_index, descr, TYPE + + def produce_into(self, builder, r): + while True: + try: + v, v_index, descr, _ = self.field_descr(builder, r) + self.put(builder, [v, v_index], descr) + except lltype.UninitializedMemoryAccess: + continue + break + class SetFieldOperation(GetFieldOperation): def produce_into(self, builder, r): v, descr, TYPE = self.field_descr(builder, r) @@ -251,6 +306,18 @@ break builder.do(self.opnum, [v, w], descr) +class SetInteriorFieldOperation(GetInteriorFieldOperation): + def produce_into(self, builder, r): + v, v_index, descr, TYPE = self.field_descr(builder, r) + while True: + if r.random() < 0.3: + w = ConstInt(r.random_integer()) + else: + w = r.choice(builder.intvars) + if rffi.cast(lltype.Signed, rffi.cast(TYPE, w.value)) == w.value: + break + builder.do(self.opnum, [v, v_index, w], descr) + class NewOperation(test_random.AbstractOperation): def size_descr(self, builder, S): descr = builder.cpu.sizeof(S) @@ -306,7 +373,7 @@ class NewArrayOperation(ArrayOperation): def produce_into(self, builder, r): - A = builder.get_random_array_type(r) + A = builder.get_random_array_type(r, can_be_array_of_struct=True) v_size = builder.get_index(300, r) v_ptr = builder.do(self.opnum, [v_size], self.array_descr(builder, A)) builder.ptrvars.append((v_ptr, A)) @@ -586,7 +653,9 @@ for i in range(4): # make more common OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) + OPERATIONS.append(GetInteriorFieldOperation(rop.GETINTERIORFIELD_GC)) OPERATIONS.append(SetFieldOperation(rop.SETFIELD_GC)) + OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) OPERATIONS.append(NewOperation(rop.NEW)) OPERATIONS.append(NewOperation(rop.NEW_WITH_VTABLE)) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -595,6 +595,10 @@ for name, value in fields.items(): if isinstance(name, str): setattr(container, name, value) + elif isinstance(value, dict): + item = container.getitem(name) + for key1, value1 in value.items(): + setattr(item, key1, value1) else: container.setitem(name, value) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1,7 +1,7 @@ import sys, os from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper -from pypy.jit.metainterp.history import Const, Box, BoxInt, BoxPtr, BoxFloat +from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT, LoopToken) from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory @@ -36,7 +36,6 @@ from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout -from pypy.jit.metainterp.history import ConstInt, BoxInt from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.codewriter import longlong @@ -729,8 +728,8 @@ # Also, make sure this is consistent with FRAME_FIXED_SIZE. self.mc.PUSH_r(ebp.value) self.mc.MOV_rr(ebp.value, esp.value) - for regloc in self.cpu.CALLEE_SAVE_REGISTERS: - self.mc.PUSH_r(regloc.value) + for loc in self.cpu.CALLEE_SAVE_REGISTERS: + self.mc.PUSH_r(loc.value) gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: @@ -994,7 +993,7 @@ effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex genop_llong_list[oopspecindex](self, op, arglocs, resloc) - + def regalloc_perform_math(self, op, arglocs, resloc): effectinfo = op.getdescr().get_extra_info() oopspecindex = effectinfo.oopspecindex @@ -1277,8 +1276,8 @@ genop_int_ne = _cmpop("NE", "NE") genop_int_gt = _cmpop("G", "L") genop_int_ge = _cmpop("GE", "LE") - genop_ptr_eq = genop_int_eq - genop_ptr_ne = genop_int_ne + genop_ptr_eq = genop_instance_ptr_eq = genop_int_eq + genop_ptr_ne = genop_instance_ptr_ne = genop_int_ne genop_float_lt = _cmpop_float('B', 'A') genop_float_le = _cmpop_float('BE', 'AE') @@ -1298,8 +1297,8 @@ genop_guard_int_ne = _cmpop_guard("NE", "NE", "E", "E") genop_guard_int_gt = _cmpop_guard("G", "L", "LE", "GE") genop_guard_int_ge = _cmpop_guard("GE", "LE", "L", "G") - genop_guard_ptr_eq = genop_guard_int_eq - genop_guard_ptr_ne = genop_guard_int_ne + genop_guard_ptr_eq = genop_guard_instance_ptr_eq = genop_guard_int_eq + genop_guard_ptr_ne = genop_guard_instance_ptr_ne = genop_guard_int_ne genop_guard_uint_gt = _cmpop_guard("A", "B", "BE", "AE") genop_guard_uint_lt = _cmpop_guard("B", "A", "AE", "BE") @@ -1311,7 +1310,7 @@ genop_guard_float_eq = _cmpop_guard_float("E", "E", "NE","NE") genop_guard_float_gt = _cmpop_guard_float("A", "B", "BE","AE") genop_guard_float_ge = _cmpop_guard_float("AE","BE", "B", "A") - + def genop_math_sqrt(self, op, arglocs, resloc): self.mc.SQRTSD(arglocs[0], resloc) @@ -1387,7 +1386,8 @@ def genop_same_as(self, op, arglocs, resloc): self.mov(arglocs[0], resloc) - #genop_cast_ptr_to_int = genop_same_as + genop_cast_ptr_to_int = genop_same_as + genop_cast_int_to_ptr = genop_same_as def genop_int_mod(self, op, arglocs, resloc): if IS_X86_32: @@ -1596,12 +1596,44 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, + base_loc, ofs_loc): + assert isinstance(itemsize_loc, ImmedLoc) + if isinstance(index_loc, ImmedLoc): + temp_loc = imm(index_loc.value * itemsize_loc.value) + else: + # XXX should not use IMUL in most cases + assert isinstance(temp_loc, RegLoc) + assert isinstance(index_loc, RegLoc) + assert not temp_loc.is_xmm + self.mc.IMUL_rri(temp_loc.value, index_loc.value, + itemsize_loc.value) + assert isinstance(ofs_loc, ImmedLoc) + return AddressLoc(base_loc, temp_loc, 0, ofs_loc.value) + + def genop_getinteriorfield_gc(self, op, arglocs, resloc): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs assert isinstance(size_loc, ImmedLoc) dest_addr = AddressLoc(base_loc, ofs_loc) self.save_into_mem(dest_addr, value_loc, size_loc) + def genop_discard_setinteriorfield_gc(self, op, arglocs): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + dest_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) + self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -7,7 +7,7 @@ ResOperation, BoxPtr, ConstFloat, BoxFloat, LoopToken, INT, REF, FLOAT) from pypy.jit.backend.x86.regloc import * -from pypy.rpython.lltypesystem import lltype, ll2ctypes, rffi, rstr +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.rlib.objectmodel import we_are_translated from pypy.rlib import rgc from pypy.jit.backend.llsupport import symbolic @@ -17,11 +17,12 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr from pypy.jit.backend.llsupport.descr import BaseCallDescr, BaseSizeDescr +from pypy.jit.backend.llsupport.descr import InteriorFieldDescr from pypy.jit.backend.llsupport.regalloc import FrameManager, RegisterManager,\ TempBox from pypy.jit.backend.x86.arch import WORD, FRAME_FIXED_SIZE from pypy.jit.backend.x86.arch import IS_X86_32, IS_X86_64, MY_COPY_OF_REGS -from pypy.rlib.rarithmetic import r_longlong, r_uint +from pypy.rlib.rarithmetic import r_longlong class X86RegisterManager(RegisterManager): @@ -433,7 +434,7 @@ if self.can_merge_with_next_guard(op, i, operations): oplist_with_guard[op.getopnum()](self, op, operations[i + 1]) i += 1 - elif not we_are_translated() and op.getopnum() == -124: + elif not we_are_translated() and op.getopnum() == -124: self._consider_force_spill(op) else: oplist[op.getopnum()](self, op) @@ -650,8 +651,8 @@ consider_uint_lt = _consider_compop consider_uint_le = _consider_compop consider_uint_ge = _consider_compop - consider_ptr_eq = _consider_compop - consider_ptr_ne = _consider_compop + consider_ptr_eq = consider_instance_ptr_eq = _consider_compop + consider_ptr_ne = consider_instance_ptr_ne = _consider_compop def _consider_float_op(self, op): loc1 = self.xrm.loc(op.getarg(1)) @@ -815,7 +816,7 @@ save_all_regs = guard_not_forced_op is not None self.xrm.before_call(force_store, save_all_regs=save_all_regs) if not save_all_regs: - gcrootmap = gc_ll_descr = self.assembler.cpu.gc_ll_descr.gcrootmap + gcrootmap = self.assembler.cpu.gc_ll_descr.gcrootmap if gcrootmap and gcrootmap.is_shadow_stack: save_all_regs = 2 self.rm.before_call(force_store, save_all_regs=save_all_regs) @@ -972,74 +973,27 @@ return self._call(op, arglocs) def consider_newstr(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newstr is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR, self.translate_support_code) - assert itemsize == 1 - return self._malloc_varsize(ofs_items, ofs, 0, op.getarg(0), - op.result) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_newunicode(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newunicode is not None: - # framework GC - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - # boehm GC (XXX kill the following code at some point) - ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE, - self.translate_support_code) - scale = self._get_unicode_item_scale() - return self._malloc_varsize(ofs_items, ofs, scale, op.getarg(0), - op.result) - - def _malloc_varsize(self, ofs_items, ofs_length, scale, v, res_v): - # XXX kill this function at some point - if isinstance(v, Box): - loc = self.rm.make_sure_var_in_reg(v, [v]) - tempbox = TempBox() - other_loc = self.rm.force_allocate_reg(tempbox, [v]) - self.assembler.load_effective_addr(loc, ofs_items,scale, other_loc) - else: - tempbox = None - other_loc = imm(ofs_items + (v.getint() << scale)) - self._call(ResOperation(rop.NEW, [], res_v), - [other_loc], [v]) - loc = self.rm.make_sure_var_in_reg(v, [res_v]) - assert self.loc(res_v) == eax - # now we have to reload length to some reasonable place - self.rm.possibly_free_var(v) - if tempbox is not None: - self.rm.possibly_free_var(tempbox) - self.PerformDiscard(ResOperation(rop.SETFIELD_GC, [None, None], None), - [eax, imm(ofs_length), imm(WORD), loc]) + loc = self.loc(op.getarg(0)) + return self._call(op, [loc]) def consider_new_array(self, op): gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newarray is not None: - # framework GC - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), - num_elem): - self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) - return - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - arglocs = [imm(x) for x in args] - arglocs.append(self.loc(box_num_elem)) - self._call(op, arglocs) - return - # boehm GC (XXX kill the following code at some point) - itemsize, basesize, ofs_length, _, _ = ( - self._unpack_arraydescr(op.getdescr())) - scale_of_field = _get_scale(itemsize) - self._malloc_varsize(basesize, ofs_length, scale_of_field, - op.getarg(0), op.result) + box_num_elem = op.getarg(0) + if isinstance(box_num_elem, ConstInt): + num_elem = box_num_elem.value + if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), + num_elem): + self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) + return + args = self.assembler.cpu.gc_ll_descr.args_for_new_array( + op.getdescr()) + arglocs = [imm(x) for x in args] + arglocs.append(self.loc(box_num_elem)) + self._call(op, arglocs) def _unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) @@ -1058,6 +1012,16 @@ sign = fielddescr.is_field_signed() return imm(ofs), imm(size), ptr, sign + def _unpack_interiorfielddescr(self, descr): + assert isinstance(descr, InteriorFieldDescr) + arraydescr = descr.arraydescr + ofs = arraydescr.get_base_size(self.translate_support_code) + itemsize = arraydescr.get_item_size(self.translate_support_code) + fieldsize = descr.fielddescr.get_field_size(self.translate_support_code) + sign = descr.fielddescr.is_field_signed() + ofs += descr.fielddescr.offset + return imm(ofs), imm(itemsize), imm(fieldsize), sign + def consider_setfield_gc(self, op): ofs_loc, size_loc, _, _ = self._unpack_fielddescr(op.getdescr()) assert isinstance(size_loc, ImmedLoc) @@ -1074,6 +1038,35 @@ consider_setfield_raw = consider_setfield_gc + def consider_setinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, _ = t + args = op.getarglist() + if fieldsize.value == 1: + need_lower_byte = True + else: + need_lower_byte = False + box_base, box_index, box_value = args + base_loc = self.rm.make_sure_var_in_reg(box_base, args) + index_loc = self.rm.make_sure_var_in_reg(box_index, args) + value_loc = self.make_sure_var_in_reg(box_value, args, + need_lower_byte=need_lower_byte) + # If 'index_loc' is not an immediate, then we need a 'temp_loc' that + # is a register whose value will be destroyed. It's fine to destroy + # the same register as 'index_loc', but not the other ones. + self.rm.possibly_free_var(box_index) + if not isinstance(index_loc, ImmedLoc): + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [box_base, + box_value]) + self.rm.possibly_free_var(tempvar) + else: + temp_loc = None + self.rm.possibly_free_var(box_base) + self.possibly_free_var(box_value) + self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, value_loc]) + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1135,6 +1128,36 @@ consider_getarrayitem_raw = consider_getarrayitem_gc consider_getarrayitem_gc_pure = consider_getarrayitem_gc + def consider_getinteriorfield_gc(self, op): + t = self._unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + if sign: + sign_loc = imm1 + else: + sign_loc = imm0 + args = op.getarglist() + base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) + index_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) + # 'base' and 'index' are put in two registers (or one if 'index' + # is an immediate). 'result' can be in the same register as + # 'index' but must be in a different register than 'base'. + self.rm.possibly_free_var(op.getarg(1)) + result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + assert isinstance(result_loc, RegLoc) + # two cases: 1) if result_loc is a normal register, use it as temp_loc + if not result_loc.is_xmm: + temp_loc = result_loc + else: + # 2) if result_loc is an xmm register, we (likely) need another + # temp_loc that is a normal register. It can be in the same + # register as 'index' but not 'base'. + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [op.getarg(0)]) + self.rm.possibly_free_var(tempvar) + self.rm.possibly_free_var(op.getarg(0)) + self.Perform(op, [base_loc, ofs, itemsize, fieldsize, + index_loc, temp_loc, sign_loc], result_loc) + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1152,7 +1175,8 @@ self.possibly_free_var(op.getarg(0)) resloc = self.force_allocate_reg(op.result) self.Perform(op, [argloc], resloc) - #consider_cast_ptr_to_int = consider_same_as + consider_cast_ptr_to_int = consider_same_as + consider_cast_int_to_ptr = consider_same_as def consider_strlen(self, op): args = op.getarglist() @@ -1240,7 +1264,6 @@ self.rm.possibly_free_var(srcaddr_box) def _gen_address_inside_string(self, baseloc, ofsloc, resloc, is_unicode): - cpu = self.assembler.cpu if is_unicode: ofs_items, _, _ = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) @@ -1299,7 +1322,7 @@ tmpreg = X86RegisterManager.all_regs[0] tmploc = self.rm.force_allocate_reg(box, selected_reg=tmpreg) xmmtmp = X86XMMRegisterManager.all_regs[0] - xmmtmploc = self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) + self.xrm.force_allocate_reg(box1, selected_reg=xmmtmp) # Part about non-floats # XXX we don't need a copy, we only just the original list src_locations1 = [self.loc(op.getarg(i)) for i in range(op.numargs()) @@ -1379,7 +1402,7 @@ return lambda self, op: fn(self, op, None) def is_comparison_or_ovf_op(opnum): - from pypy.jit.metainterp.resoperation import opclasses, AbstractResOp + from pypy.jit.metainterp.resoperation import opclasses cls = opclasses[opnum] # hack hack: in theory they are instance method, but they don't use # any instance field, we can use a fake object diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_del.py b/pypy/jit/backend/x86/test/test_del.py --- a/pypy/jit/backend/x86/test/test_del.py +++ b/pypy/jit/backend/x86/test/test_del.py @@ -1,5 +1,4 @@ -import py from pypy.jit.backend.x86.test.test_basic import Jit386Mixin from pypy.jit.metainterp.test.test_del import DelTests diff --git a/pypy/jit/backend/x86/test/test_dict.py b/pypy/jit/backend/x86/test/test_dict.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_dict.py @@ -0,0 +1,9 @@ + +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin +from pypy.jit.metainterp.test.test_dict import DictTests + + +class TestDict(Jit386Mixin, DictTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_dict.py + pass diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -31,7 +31,7 @@ # for the individual tests see # ====> ../../test/runner_test.py - + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -69,22 +69,16 @@ def test_allocations(self): from pypy.rpython.lltypesystem import rstr - + allocs = [None] all = [] + orig_new = self.cpu.gc_ll_descr.funcptr_for_new def f(size): allocs.insert(0, size) - buf = ctypes.create_string_buffer(size) - all.append(buf) - return ctypes.cast(buf, ctypes.c_void_p).value - func = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(f) - addr = ctypes.cast(func, ctypes.c_void_p).value - # ctypes produces an unsigned value. We need it to be signed for, eg, - # relative addressing to work properly. - addr = rffi.cast(lltype.Signed, addr) - + return orig_new(size) + self.cpu.assembler.setup_once() - self.cpu.assembler.malloc_func_addr = addr + self.cpu.gc_ll_descr.funcptr_for_new = f ofs = symbolic.get_field_token(rstr.STR, 'chars', False)[0] res = self.execute_operation(rop.NEWSTR, [ConstInt(7)], 'ref') @@ -108,7 +102,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [ConstInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 # ------------------------------------------------------------ @@ -116,7 +110,7 @@ res = self.execute_operation(rop.NEW_ARRAY, [BoxInt(10)], 'ref', descr) assert allocs[0] == 10*WORD + ofs + WORD - resbuf = self._resbuf(res) + resbuf = self._resbuf(res) assert resbuf[ofs/WORD] == 10 def test_stringitems(self): @@ -146,7 +140,7 @@ ConstInt(2), BoxInt(38)], 'void', descr) assert resbuf[itemsofs/WORD + 2] == 38 - + self.execute_operation(rop.SETARRAYITEM_GC, [res, BoxInt(3), BoxInt(42)], 'void', descr) @@ -167,7 +161,7 @@ BoxInt(2)], 'int', descr) assert r.value == 38 - + r = self.execute_operation(rop.GETARRAYITEM_GC, [res, BoxInt(3)], 'int', descr) assert r.value == 42 @@ -226,7 +220,7 @@ self.execute_operation(rop.SETFIELD_GC, [res, BoxInt(1234)], 'void', ofs_i) i = self.execute_operation(rop.GETFIELD_GC, [res], 'int', ofs_i) assert i.value == 1234 - + #u = self.execute_operation(rop.GETFIELD_GC, [res, ofs_u], 'int') #assert u.value == 5 self.execute_operation(rop.SETFIELD_GC, [res, ConstInt(1)], 'void', @@ -299,7 +293,7 @@ else: assert result != execute(self.cpu, None, op, None, b).value - + def test_stuff_followed_by_guard(self): boxes = [(BoxInt(1), BoxInt(0)), @@ -461,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() @@ -523,7 +520,7 @@ def test_debugger_on(self): from pypy.tool.logparser import parse_log_file, extract_category from pypy.rlib import debug - + loop = """ [i0] debug_merge_point('xyz', 0) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -52,9 +52,11 @@ newoperations = [] # def do_rename(var, var_or_const): + if var.concretetype is lltype.Void: + renamings[var] = Constant(None, lltype.Void) + return renamings[var] = var_or_const - if (isinstance(var_or_const, Constant) - and var.concretetype != lltype.Void): + if isinstance(var_or_const, Constant): value = var_or_const.value value = lltype._cast_whatever(var.concretetype, value) renamings_constants[var] = Constant(value, var.concretetype) @@ -441,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -735,29 +739,54 @@ return SpaceOperation(opname, [op.args[0]], op.result) def rewrite_op_getinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 3 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strgetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodegetitem" + return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodegetitem" - return SpaceOperation(opname, [op.args[0], op.args[2]], op.result) + v_inst, v_index, c_field = op.args + if op.result.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + args = [v_inst, v_index, descr] + kind = getkind(op.result.concretetype)[0] + return SpaceOperation('getinteriorfield_gc_%s' % kind, args, + op.result) def rewrite_op_setinteriorfield(self, op): - # only supports strings and unicodes assert len(op.args) == 4 - assert op.args[1].value == 'chars' optype = op.args[0].concretetype if optype == lltype.Ptr(rstr.STR): opname = "strsetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) + elif optype == lltype.Ptr(rstr.UNICODE): + opname = "unicodesetitem" + return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], + op.result) else: - assert optype == lltype.Ptr(rstr.UNICODE) - opname = "unicodesetitem" - return SpaceOperation(opname, [op.args[0], op.args[2], op.args[3]], - op.result) + v_inst, v_index, c_field, v_value = op.args + if v_value.concretetype is lltype.Void: + return + # only GcArray of Struct supported + assert isinstance(v_inst.concretetype.TO, lltype.GcArray) + STRUCT = v_inst.concretetype.TO.OF + assert isinstance(STRUCT, lltype.Struct) + descr = self.cpu.interiorfielddescrof(v_inst.concretetype.TO, + c_field.value) + kind = getkind(v_value.concretetype)[0] + args = [v_inst, v_index, v_value, descr] + return SpaceOperation('setinteriorfield_gc_%s' % kind, args, + op.result) def _rewrite_equality(self, op, opname): arg0, arg1 = op.args @@ -771,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -788,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -800,8 +842,11 @@ def rewrite_op_cast_ptr_to_int(self, op): if self._is_gc(op.args[0]): - #return op - raise NotImplementedError("cast_ptr_to_int") + return op + + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] def rewrite_op_force_cast(self, op): v_arg = op.args[0] @@ -822,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -854,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1071,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), @@ -1543,6 +1622,10 @@ def rewrite_op_jit_force_virtual(self, op): return self._do_builtin_call(op) + def rewrite_op_jit_is_virtual(self, op): + raise Exception, ( + "'vref.virtual' should not be used from jit-visible code") + def rewrite_op_jit_force_virtualizable(self, op): # this one is for virtualizables vinfo = self.get_vinfo(op.args[0]) diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -13,7 +13,6 @@ from pypy.translator.simplify import get_funcobj from pypy.translator.unsimplify import split_block from pypy.objspace.flow.model import Constant -from pypy import conftest from pypy.translator.translator import TranslationContext from pypy.annotation.policy import AnnotatorPolicy from pypy.annotation import model as annmodel @@ -38,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -48,15 +49,13 @@ a.build_types(func, argtypes, main_entry_point=True) rtyper = t.buildrtyper(type_system = type_system) rtyper.specialize() - if inline: - auto_inlining(t, threshold=inline) + #if inline: + # auto_inlining(t, threshold=inline) if backendoptimize: from pypy.translator.backendopt.all import backend_optimizations backend_optimizations(t, inline_threshold=inline or 0, remove_asserts=True, really_remove_asserts=True) - #if conftest.option.view: - # t.view() return rtyper def getgraph(func, values): @@ -232,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ @@ -456,6 +466,8 @@ return LLtypeHelpers._dictnext_items(lltype.Ptr(RES), iter) _ll_1_dictiter_nextitems.need_result_type = True + _ll_1_dict_resize = ll_rdict.ll_dict_resize + # ---------- strings and unicode ---------- _ll_1_str_str2unicode = ll_rstr.LLHelpers.ll_str2unicode diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,10 +5,10 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -742,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -848,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -856,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -900,9 +898,69 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -913,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -1,4 +1,3 @@ -import py import random try: from itertools import product @@ -16,13 +15,13 @@ from pypy.objspace.flow.model import FunctionGraph, Block, Link from pypy.objspace.flow.model import SpaceOperation, Variable, Constant -from pypy.jit.codewriter.jtransform import Transformer -from pypy.jit.metainterp.history import getkind -from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr, rlist +from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr from pypy.rpython.lltypesystem.module import ll_math from pypy.translator.unsimplify import varoftype from pypy.jit.codewriter import heaptracker, effectinfo from pypy.jit.codewriter.flatten import ListOfKind +from pypy.jit.codewriter.jtransform import Transformer +from pypy.jit.metainterp.history import getkind def const(x): return Constant(x, lltype.typeOf(x)) @@ -37,6 +36,8 @@ return ('calldescr', FUNC, ARGS, RESULT) def fielddescrof(self, STRUCT, name): return ('fielddescr', STRUCT, name) + def interiorfielddescrof(self, ARRAY, name): + return ('interiorfielddescr', ARRAY, name) def arraydescrof(self, ARRAY): return FakeDescr(('arraydescr', ARRAY)) def sizeof(self, STRUCT): @@ -539,7 +540,7 @@ def test_rename_on_links(): v1 = Variable() - v2 = Variable() + v2 = Variable(); v2.concretetype = llmemory.Address v3 = Variable() block = Block([v1]) block.operations = [SpaceOperation('cast_pointer', [v1], v2)] @@ -575,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -597,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -676,6 +702,22 @@ assert op1.args == [v, v_index] assert op1.result == v_result +def test_dict_getinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_result = varoftype(lltype.Signed) + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + v_result) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'getinteriorfield_gc_i' + assert op1.args == [v, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('getinteriorfield', [v, i, Constant('v', lltype.Void)], + Constant(None, lltype.Void)) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1 is None + def test_str_setinteriorfield(): v = varoftype(lltype.Ptr(rstr.STR)) v_index = varoftype(lltype.Signed) @@ -702,6 +744,23 @@ assert op1.args == [v, v_index, v_newchr] assert op1.result == v_void +def test_dict_setinteriorfield(): + DICT = lltype.GcArray(lltype.Struct('ENTRY', ('v', lltype.Signed), + ('k', lltype.Signed))) + v = varoftype(lltype.Ptr(DICT)) + i = varoftype(lltype.Signed) + v_void = varoftype(lltype.Void) + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + i], + v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert op1.opname == 'setinteriorfield_gc_i' + assert op1.args == [v, i, i, ('interiorfielddescr', DICT, 'v')] + op = SpaceOperation('setinteriorfield', [v, i, Constant('v', lltype.Void), + v_void], v_void) + op1 = Transformer(FakeCPU()).rewrite_operation(op) + assert not op1 + def test_promote_1(): v1 = varoftype(lltype.Signed) v2 = varoftype(lltype.Signed) @@ -1069,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -2,11 +2,10 @@ from pypy.rlib.rtimer import read_timestamp from pypy.rlib.rarithmetic import intmask, LONG_BIT, r_uint, ovfcheck from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.debug import debug_start, debug_stop +from pypy.rlib.debug import debug_start, debug_stop, ll_assert from pypy.rlib.debug import make_sure_not_resized from pypy.rpython.lltypesystem import lltype, llmemory, rclass from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rpython.llinterp import LLException from pypy.jit.codewriter.jitcode import JitCode, SwitchDictDescr from pypy.jit.codewriter import heaptracker, longlong from pypy.jit.metainterp.jitexc import JitException, get_llexception, reraise @@ -500,9 +499,25 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b + @arguments("r", returns="i") + def bhimpl_cast_ptr_to_int(a): + i = lltype.cast_ptr_to_int(a) + ll_assert((i & 1) == 1, "bhimpl_cast_ptr_to_int: not an odd int") + return i + @arguments("i", returns="r") + def bhimpl_cast_int_to_ptr(i): + ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") + return lltype.cast_int_to_ptr(llmemory.GCREF, i) + + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass @arguments("i", returns="i") def bhimpl_int_copy(a): @@ -622,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") @@ -1145,6 +1163,26 @@ array = cpu.bh_getfield_gc_r(vable, fdescr) return cpu.bh_arraylen_gc(adescr, array) + @arguments("cpu", "r", "i", "d", returns="i") + def bhimpl_getinteriorfield_gc_i(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_i(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="r") + def bhimpl_getinteriorfield_gc_r(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_r(array, index, descr) + @arguments("cpu", "r", "i", "d", returns="f") + def bhimpl_getinteriorfield_gc_f(cpu, array, index, descr): + return cpu.bh_getinteriorfield_gc_f(array, index, descr) + + @arguments("cpu", "r", "i", "d", "i") + def bhimpl_setinteriorfield_gc_i(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_i(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "r") + def bhimpl_setinteriorfield_gc_r(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_r(array, index, descr, value) + @arguments("cpu", "r", "i", "d", "f") + def bhimpl_setinteriorfield_gc_f(cpu, array, index, descr, value): + cpu.bh_setinteriorfield_gc_f(array, index, descr, value) + @arguments("cpu", "r", "d", returns="i") def bhimpl_getfield_gc_i(cpu, struct, fielddescr): return cpu.bh_getfield_gc_i(struct, fielddescr) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -1,11 +1,8 @@ """This implements pyjitpl's execution of operations. """ -import py -from pypy.rpython.lltypesystem import lltype, llmemory, rstr -from pypy.rpython.ootypesystem import ootype -from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rlib.rarithmetic import ovfcheck, r_uint, intmask, r_longlong +from pypy.rpython.lltypesystem import lltype, rstr +from pypy.rlib.rarithmetic import ovfcheck, r_longlong from pypy.rlib.rtimer import read_timestamp from pypy.rlib.unroll import unrolling_iterable from pypy.jit.metainterp.history import BoxInt, BoxPtr, BoxFloat, check_descr @@ -123,6 +120,29 @@ else: cpu.bh_setarrayitem_raw_i(arraydescr, array, index, itembox.getint()) +def do_getinteriorfield_gc(cpu, _, arraybox, indexbox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + return BoxPtr(cpu.bh_getinteriorfield_gc_r(array, index, descr)) + elif descr.is_float_field(): + return BoxFloat(cpu.bh_getinteriorfield_gc_f(array, index, descr)) + else: + return BoxInt(cpu.bh_getinteriorfield_gc_i(array, index, descr)) + +def do_setinteriorfield_gc(cpu, _, arraybox, indexbox, valuebox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + cpu.bh_setinteriorfield_gc_r(array, index, descr, + valuebox.getref_base()) + elif descr.is_float_field(): + cpu.bh_setinteriorfield_gc_f(array, index, descr, + valuebox.getfloatstorage()) + else: + cpu.bh_setinteriorfield_gc_i(array, index, descr, + valuebox.getint()) + def do_getfield_gc(cpu, _, structbox, fielddescr): struct = structbox.getref_base() if fielddescr.is_pointer_field(): diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,8 +12,8 @@ def get_display_text(self): return None -def display_loops(loops, errmsg=None, highlight_loops=()): - graphs = [(loop, loop in highlight_loops) for loop in loops] +def display_loops(loops, errmsg=None, highlight_loops={}): + graphs = [(loop, highlight_loops.get(loop, 0)) for loop in loops] for graph, highlight in graphs: for op in graph.get_operations(): if is_interesting_guard(op): @@ -65,8 +65,7 @@ def add_graph(self, graph, highlight=False): graphindex = len(self.graphs) self.graphs.append(graph) - if highlight: - self.highlight_graphs[graph] = True + self.highlight_graphs[graph] = highlight for i, op in enumerate(graph.get_operations()): self.all_operations[op] = graphindex, i @@ -126,10 +125,13 @@ self.dotgen.emit('subgraph cluster%d {' % graphindex) label = graph.get_display_text() if label is not None: - if self.highlight_graphs.get(graph): - fillcolor = '#f084c2' + colorindex = self.highlight_graphs.get(graph, 0) + if colorindex == 1: + fillcolor = '#f084c2' # highlighted graph + elif colorindex == 2: + fillcolor = '#808080' # invalidated graph else: - fillcolor = '#84f0c2' + fillcolor = '#84f0c2' # normal color self.dotgen.emit_node(graphname, shape="octagon", label=label, fillcolor=fillcolor) self.pendingedges.append((graphname, diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -16,6 +16,7 @@ INT = 'i' REF = 'r' FLOAT = 'f' +STRUCT = 's' HOLE = '_' VOID = 'v' @@ -172,6 +173,11 @@ """ raise NotImplementedError + def is_array_of_structs(self): + """ Implement for array descr + """ + raise NotImplementedError + def is_pointer_field(self): """ Implement for field descr """ @@ -732,6 +738,7 @@ failed_states = None retraced_count = 0 terminating = False # see TerminatingLoopToken in compile.py + invalidated = False outermost_jitdriver_sd = None # and more data specified by the backend when the loop is compiled number = -1 @@ -922,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -934,6 +944,16 @@ self.loops = [] self.locations = [] self.aborted_keys = [] + self.invalidated_token_numbers = set() + + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 def set_history(self, history): self.operations = history.operations @@ -1012,7 +1032,12 @@ if loop in loops: loops.remove(loop) loops.append(loop) - display_loops(loops, errmsg, extraloops) + highlight_loops = dict.fromkeys(extraloops, 1) + for loop in loops: + if hasattr(loop, '_looptoken_number') and ( + loop._looptoken_number in self.invalidated_token_numbers): + highlight_loops.setdefault(loop, 2) + display_loops(loops, errmsg, highlight_loops) # ---------------------------------------------------------------- diff --git a/pypy/jit/metainterp/memmgr.py b/pypy/jit/metainterp/memmgr.py --- a/pypy/jit/metainterp/memmgr.py +++ b/pypy/jit/metainterp/memmgr.py @@ -68,7 +68,8 @@ debug_print("Loop tokens before:", oldtotal) max_generation = self.current_generation - (self.max_age-1) for looptoken in self.alive_loops.keys(): - if 0 <= looptoken.generation < max_generation: + if (0 <= looptoken.generation < max_generation or + looptoken.invalidated): del self.alive_loops[looptoken] newtotal = len(self.alive_loops) debug_print("Loop tokens freed: ", oldtotal - newtotal) diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) @@ -225,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,36 +6,18 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -126,14 +109,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,72 +151,84 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) + + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): @@ -169,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,12 +1,12 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -209,13 +220,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -230,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -244,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -283,11 +302,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -311,20 +330,20 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -346,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -392,6 +412,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -477,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) @@ -524,7 +546,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -106,10 +106,9 @@ self.make_equal_to(op.result, v1) else: self.emit_operation(op) - - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) + # Synthesize the reverse ops for optimize_default to reuse + self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) + self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) def optimize_INT_ADD(self, op): v1 = self.getvalue(op.getarg(0)) @@ -122,10 +121,9 @@ self.make_equal_to(op.result, v1) else: self.emit_operation(op) - - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) + # Synthesize the reverse op for optimize_default to reuse + self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) + self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) def optimize_INT_MUL(self, op): v1 = self.getvalue(op.getarg(0)) @@ -141,13 +139,13 @@ self.make_constant_int(op.result, 0) else: for lhs, rhs in [(v1, v2), (v2, v1)]: - # x & (x -1) == 0 is a quick test for power of 2 - if (lhs.is_constant() and - (lhs.box.getint() & (lhs.box.getint() - 1)) == 0): - new_rhs = ConstInt(highest_bit(lhs.box.getint())) - op = op.copy_and_change(rop.INT_LSHIFT, args=[rhs.box, new_rhs]) - break - + if lhs.is_constant(): + x = lhs.box.getint() + # x & (x - 1) == 0 is a quick test for power of 2 + if x & (x - 1) == 0: + new_rhs = ConstInt(highest_bit(lhs.box.getint())) + op = op.copy_and_change(rop.INT_LSHIFT, args=[rhs.box, new_rhs]) + break self.emit_operation(op) def optimize_UINT_FLOORDIV(self, op): @@ -296,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -312,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -339,7 +332,7 @@ def optimize_INT_IS_ZERO(self, op): self._optimize_nullness(op, op.getarg(0), False) - def _optimize_oois_ooisnot(self, op, expect_isnot): + def _optimize_oois_ooisnot(self, op, expect_isnot, instance): value0 = self.getvalue(op.getarg(0)) value1 = self.getvalue(op.getarg(1)) if value0.is_virtual(): @@ -357,21 +350,28 @@ elif value0 is value1: self.make_constant_int(op.result, not expect_isnot) else: - cls0 = value0.get_constant_class(self.optimizer.cpu) - if cls0 is not None: - cls1 = value1.get_constant_class(self.optimizer.cpu) - if cls1 is not None and not cls0.same_constant(cls1): - # cannot be the same object, as we know that their - # class is different - self.make_constant_int(op.result, expect_isnot) - return + if instance: + cls0 = value0.get_constant_class(self.optimizer.cpu) + if cls0 is not None: + cls1 = value1.get_constant_class(self.optimizer.cpu) + if cls1 is not None and not cls0.same_constant(cls1): + # cannot be the same object, as we know that their + # class is different + self.make_constant_int(op.result, expect_isnot) + return self.emit_operation(op) + def optimize_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, False) + def optimize_PTR_NE(self, op): - self._optimize_oois_ooisnot(op, True) + self._optimize_oois_ooisnot(op, True, False) - def optimize_PTR_EQ(self, op): - self._optimize_oois_ooisnot(op, False) + def optimize_INSTANCE_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, True) + + def optimize_INSTANCE_PTR_NE(self, op): + self._optimize_oois_ooisnot(op, True, True) ## def optimize_INSTANCEOF(self, op): ## value = self.getvalue(op.args[0]) @@ -439,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) @@ -450,6 +459,9 @@ if v2.is_constant() and v2.box.getint() == 1: self.make_equal_to(op.result, v1) return + elif v1.is_constant() and v1.box.getint() == 0: + self.make_constant_int(op.result, 0) + return if v1.intbound.known_ge(IntBound(0, 0)) and v2.is_constant(): val = v2.box.getint() if val & (val - 1) == 0 and val > 0: # val == 2**shift @@ -457,10 +469,17 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) + + def optimize_CAST_PTR_TO_INT(self, op): + self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) + self.emit_operation(op) + + def optimize_CAST_INT_TO_PTR(self, op): + self.pure(rop.CAST_PTR_TO_INT, [op.result], op.getarg(0)) + self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptRewrite, 'optimize_', default=OptRewrite.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -508,13 +509,13 @@ ops = """ [p0] guard_class(p0, ConstClass(node_vtable)) [] - i0 = ptr_ne(p0, NULL) + i0 = instance_ptr_ne(p0, NULL) guard_true(i0) [] - i1 = ptr_eq(p0, NULL) + i1 = instance_ptr_eq(p0, NULL) guard_false(i1) [] - i2 = ptr_ne(NULL, p0) + i2 = instance_ptr_ne(NULL, p0) guard_true(i0) [] - i3 = ptr_eq(NULL, p0) + i3 = instance_ptr_eq(NULL, p0) guard_false(i1) [] jump(p0) """ @@ -680,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -935,7 +971,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +986,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -2026,7 +2110,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -2181,6 +2265,17 @@ """ self.optimize_loop(ops, expected) + ops = """ + [i0] + i1 = int_floordiv(0, i0) + jump(i1) + """ + expected = """ + [i0] + jump(0) + """ + self.optimize_loop(ops, expected) + def test_fold_partially_constant_ops_ovf(self): ops = """ [i0] @@ -4063,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4165,15 +4292,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -4653,11 +4803,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4665,21 +4815,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4688,6 +4858,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4770,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4781,14 +4982,51 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) + def test_ptr_eq_str_constant(self): + ops = """ + [] + i0 = ptr_eq(s"abc", s"\x00") + finish(i0) + """ + expected = """ + [] + finish(0) + """ + self.optimize_loop(ops, expected) + + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -234,6 +234,30 @@ """ % expected_value self.optimize_loop(ops, expected) + def test_reverse_of_cast(self): + ops = """ + [i0] + p0 = cast_int_to_ptr(i0) + i1 = cast_ptr_to_int(p0) + jump(i1) + """ + expected = """ + [i0] + jump(i0) + """ + self.optimize_loop(ops, expected) + ops = """ + [p0] + i1 = cast_ptr_to_int(p0) + p1 = cast_int_to_ptr(i1) + jump(p1) + """ + expected = """ + [p0] + jump(p0) + """ + self.optimize_loop(ops, expected) + # ---------- def test_remove_guard_class_1(self): @@ -907,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -926,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -934,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -2144,13 +2183,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -2659,7 +2698,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -3307,7 +3346,7 @@ jump(p1, i1, i2, i6) ''' self.optimize_loop(ops, expected, preamble) - + # ---------- @@ -4759,6 +4798,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -5776,10 +5861,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -6209,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6224,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ @@ -7256,7 +7347,7 @@ ops = """ [p1, p2] setarrayitem_gc(p1, 2, 10, descr=arraydescr) - setarrayitem_gc(p2, 3, 13, descr=arraydescr) + setarrayitem_gc(p2, 3, 13, descr=arraydescr) call(0, p1, p2, 0, 0, 10, descr=arraycopydescr) jump(p1, p2) """ @@ -7283,6 +7374,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,8 +183,21 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -200,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) @@ -240,7 +255,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -59,7 +59,7 @@ def import_from(self, other, optimizer): raise NotImplementedError("should not be called at this level") - + def get_fielddescrlist_cache(cpu): if not hasattr(cpu, '_optimizeopt_fielddescrlist_cache'): result = descrlist_dict() @@ -113,7 +113,7 @@ # if not we_are_translated(): op.name = 'FORCE ' + self.source_op.name - + if self._is_immutable_and_filled_with_constants(optforce): box = optforce.optimizer.constant_fold(op) self.make_constant(box) @@ -239,12 +239,12 @@ for index in range(len(self._items)): self._items[index] = self._items[index].force_at_end_of_preamble(already_forced, optforce) return self - + def _really_force(self, optforce): assert self.source_op is not None if not we_are_translated(): self.source_op.name = 'FORCE ' + self.source_op.name - optforce.emit_operation(self.source_op) + optforce.emit_operation(self.source_op) self.box = box = self.source_op.result for index in range(len(self._items)): subvalue = self._items[index] @@ -271,20 +271,91 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." def new(self): return OptVirtualize() - + def make_virtual(self, known_class, box, source_op=None): vvalue = VirtualValue(self.optimizer.cpu, known_class, box, source_op) self.make_equal_to(box, vvalue) return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -431,6 +502,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -479,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -501,12 +574,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +601,16 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +647,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +665,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,8 +1,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,7 +107,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +120,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,53 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) - - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -180,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -226,18 +255,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +301,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) @@ -312,6 +317,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -322,6 +328,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -408,6 +415,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -441,11 +449,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -467,6 +484,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -508,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -522,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -538,13 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -36,6 +36,7 @@ class MIFrame(object): + debug = False def __init__(self, metainterp): self.metainterp = metainterp @@ -164,7 +165,7 @@ if not we_are_translated(): for b in registers[count:]: assert not oldbox.same_box(b) - + def make_result_of_lastop(self, resultbox): got_type = resultbox.type @@ -198,7 +199,7 @@ 'float_add', 'float_sub', 'float_mul', 'float_truediv', 'float_lt', 'float_le', 'float_eq', 'float_ne', 'float_gt', 'float_ge', - 'ptr_eq', 'ptr_ne', + 'ptr_eq', 'ptr_ne', 'instance_ptr_eq', 'instance_ptr_ne', ]: exec py.code.Source(''' @arguments("box", "box") @@ -222,6 +223,7 @@ 'cast_float_to_int', 'cast_int_to_float', 'cast_float_to_singlefloat', 'cast_singlefloat_to_float', 'float_neg', 'float_abs', + 'cast_ptr_to_int', 'cast_int_to_ptr', ]: exec py.code.Source(''' @arguments("box") @@ -238,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): @@ -547,6 +549,14 @@ opimpl_getfield_gc_r_pure = _opimpl_getfield_gc_pure_any opimpl_getfield_gc_f_pure = _opimpl_getfield_gc_pure_any + @arguments("box", "box", "descr") + def _opimpl_getinteriorfield_gc_any(self, array, index, descr): + return self.execute_with_descr(rop.GETINTERIORFIELD_GC, descr, + array, index) + opimpl_getinteriorfield_gc_i = _opimpl_getinteriorfield_gc_any + opimpl_getinteriorfield_gc_f = _opimpl_getinteriorfield_gc_any + opimpl_getinteriorfield_gc_r = _opimpl_getinteriorfield_gc_any + @specialize.arg(1) def _opimpl_getfield_gc_any_pureornot(self, opnum, box, fielddescr): tobox = self.metainterp.heapcache.getfield(box, fielddescr) @@ -587,6 +597,15 @@ opimpl_setfield_gc_r = _opimpl_setfield_gc_any opimpl_setfield_gc_f = _opimpl_setfield_gc_any + @arguments("box", "box", "box", "descr") + def _opimpl_setinteriorfield_gc_any(self, array, index, value, descr): + self.execute_with_descr(rop.SETINTERIORFIELD_GC, descr, + array, index, value) + opimpl_setinteriorfield_gc_i = _opimpl_setinteriorfield_gc_any + opimpl_setinteriorfield_gc_f = _opimpl_setinteriorfield_gc_any + opimpl_setinteriorfield_gc_r = _opimpl_setinteriorfield_gc_any + + @arguments("box", "descr") def _opimpl_getfield_raw_any(self, box, fielddescr): return self.execute_with_descr(rop.GETFIELD_RAW, fielddescr, box) @@ -1326,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): @@ -2587,17 +2604,21 @@ self.pc = position # if not we_are_translated(): - print '\tpyjitpl: %s(%s)' % (name, ', '.join(map(repr, args))), + if self.debug: + print '\tpyjitpl: %s(%s)' % (name, ', '.join(map(repr, args))), try: resultbox = unboundmethod(self, *args) except Exception, e: - print '-> %s!' % e.__class__.__name__ + if self.debug: + print '-> %s!' % e.__class__.__name__ raise if num_return_args == 0: - print + if self.debug: + print assert resultbox is None else: - print '-> %r' % (resultbox,) + if self.debug: + print '-> %r' % (resultbox,) assert argcodes[next_argcode] == '>' result_argcode = argcodes[next_argcode + 1] assert resultbox.type == {'i': history.INT, diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -2,6 +2,7 @@ from pypy.rpython.lltypesystem import lltype, rclass from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.jit.metainterp.history import AbstractDescr +from pypy.rlib.objectmodel import we_are_translated def get_mutate_field_name(fieldname): @@ -50,13 +51,13 @@ class QuasiImmut(object): llopaque = True + compress_limit = 30 def __init__(self, cpu): self.cpu = cpu # list of weakrefs to the LoopTokens that must be invalidated if # this value ever changes self.looptokens_wrefs = [] - self.compress_limit = 30 def hide(self): qmut_ptr = self.cpu.ts.cast_instance_to_base_ref(self) @@ -75,6 +76,8 @@ def compress_looptokens_list(self): self.looptokens_wrefs = [wref for wref in self.looptokens_wrefs if wref() is not None] + # NB. we must keep around the looptoken_wrefs that are + # already invalidated; see below self.compress_limit = (len(self.looptokens_wrefs) + 15) * 2 def invalidate(self): @@ -86,7 +89,16 @@ for wref in wrefs: looptoken = wref() if looptoken is not None: + looptoken.invalidated = True self.cpu.invalidate_loop(looptoken) + # NB. we must call cpu.invalidate_loop() even if + # looptoken.invalidated was already set to True. + # It's possible to invalidate several times the + # same looptoken; see comments in jit.backend.model + # in invalidate_loop(). + if not we_are_translated(): + self.cpu.stats.invalidated_token_numbers.add( + looptoken.number) class QuasiImmutDescr(AbstractDescr): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -1,5 +1,4 @@ from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.debug import make_sure_not_resized def ResOperation(opnum, args, result, descr=None): cls = opclasses[opnum] @@ -91,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version @@ -405,8 +407,8 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', - 'CAST_FLOAT_TO_INT/1', - 'CAST_INT_TO_FLOAT/1', + 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would + 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', 'CAST_SINGLEFLOAT_TO_FLOAT/1', # @@ -433,10 +435,13 @@ 'INT_INVERT/1', # 'SAME_AS/1', # gets a Const or a Box, turns it into another Box + 'CAST_PTR_TO_INT/1', + 'CAST_INT_TO_PTR/1', # 'PTR_EQ/2b', 'PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', + 'INSTANCE_PTR_EQ/2b', + 'INSTANCE_PTR_NE/2b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -455,6 +460,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', + 'GETINTERIORFIELD_GC/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -467,10 +473,12 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', + 'SETINTERIORFIELD_GC/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -139,7 +140,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +274,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +406,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -436,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -455,7 +461,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +543,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -546,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -599,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): @@ -884,6 +917,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1208,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3435,12 +3436,163 @@ return sa res = self.meta_interp(f, [16]) assert res == f(16) - + + def test_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "x"]) + class A(object): + def __init__(self, v): + self.v = v + def f(n, x): + while n > 0: + myjitdriver.jit_merge_point(n=n, x=x) + z = 0 / x + a1 = A("key") + a2 = A("\x00") + n -= [a1, a2][z].v is not a2.v + return n + res = self.meta_interp(f, [10, 1]) + assert res == 0 + + def test_instance_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "i", "a1", "a2"]) + class A(object): + pass + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + i += a is a1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + if a is a2: + i += 1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + + def test_virtual_array_of_structs(self): + myjitdriver = JitDriver(greens = [], reds=["n", "d"]) + def f(n): + d = None + while n > 0: + myjitdriver.jit_merge_point(n=n, d=d) + d = {"q": 1} + if n % 2: + d["k"] = n + else: + d["z"] = n + n -= len(d) - d["q"] + return n + res = self.meta_interp(f, [10]) + assert res == 0 + + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = {"key": n} + n = g(x) + del x["key"] + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): - py.test.skip("implement me") from pypy.rlib.objectmodel import UnboxedValue class Base(object): __slots__ = () @@ -3491,4 +3643,51 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) + + def test_rerased(self): + eraseX, uneraseX = rerased.new_erasing_pair("X") + # + class X: + def __init__(self, a, b): + self.a = a + self.b = b + # + def f(i, j): + # 'j' should be 0 or 1, not other values + if j > 0: + e = eraseX(X(i, j)) + else: + try: + e = rerased.erase_int(i) + except OverflowError: + return -42 + if j & 1: + x = uneraseX(e) + return x.a - x.b + else: + return rerased.unerase_int(e) + # + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) + assert x == -128 From noreply at buildbot.pypy.org Thu Nov 10 14:05:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 14:05:45 +0100 (CET) Subject: [pypy-commit] pypy default: Finally found out where to put the "assert". Message-ID: <20111110130545.53D648292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49274:edb7318580ea Date: 2011-11-10 14:05 +0100 http://bitbucket.org/pypy/pypy/changeset/edb7318580ea/ Log: Finally found out where to put the "assert". diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr From noreply at buildbot.pypy.org Thu Nov 10 14:11:46 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 14:11:46 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: Merge with default Message-ID: <20111110131146.D873E8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49275:663b703f5738 Date: 2011-11-10 13:53 +0100 http://bitbucket.org/pypy/pypy/changeset/663b703f5738/ Log: Merge with default diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py From noreply at buildbot.pypy.org Thu Nov 10 14:11:48 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 14:11:48 +0100 (CET) Subject: [pypy-commit] pypy win64 test: closing badly named old branch. I guess that the changes are still visible from win64_gborg Message-ID: <20111110131148.09D6D8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64 test Changeset: r49276:9d40404468cf Date: 2011-11-10 14:11 +0100 http://bitbucket.org/pypy/pypy/changeset/9d40404468cf/ Log: closing badly named old branch. I guess that the changes are still visible from win64_gborg From noreply at buildbot.pypy.org Thu Nov 10 14:36:05 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 14:36:05 +0100 (CET) Subject: [pypy-commit] pypy default: Fix, maybe temporary, for getting a Windows process to write stuff Message-ID: <20111110133605.B70F68292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49277:1fd2bb8741dc Date: 2011-11-10 14:35 +0100 http://bitbucket.org/pypy/pypy/changeset/1fd2bb8741dc/ Log: Fix, maybe temporary, for getting a Windows process to write stuff in binary mode: just use the "-u" flag instead of mess around with the 'msvcrt' module. diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' From noreply at buildbot.pypy.org Thu Nov 10 14:59:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 14:59:12 +0100 (CET) Subject: [pypy-commit] pypy default: Add an XXX for this bug. Message-ID: <20111110135912.B870B8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49278:99493e1f94b0 Date: 2011-11-10 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/99493e1f94b0/ Log: Add an XXX for this bug. diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, From noreply at buildbot.pypy.org Thu Nov 10 15:08:40 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 15:08:40 +0100 (CET) Subject: [pypy-commit] pypy default: Fix this CPython test, and comment about why I think that PyPy's Message-ID: <20111110140840.5ABE58292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49279:2f05272d1f77 Date: 2011-11-10 15:08 +0100 http://bitbucket.org/pypy/pypy/changeset/2f05272d1f77/ Log: Fix this CPython test, and comment about why I think that PyPy's behavior is better (although it's all open to discussion of course) diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): From noreply at buildbot.pypy.org Thu Nov 10 15:50:22 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 Nov 2011 15:50:22 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: a draft of the sprint report for the blog Message-ID: <20111110145022.BB7E08292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r3961:4ce00b0bbc9b Date: 2011-11-10 15:47 +0100 http://bitbucket.org/pypy/extradoc/changeset/4ce00b0bbc9b/ Log: a draft of the sprint report for the blog diff --git a/blog/draft/2011-11-gborg-sprint-report.rst b/blog/draft/2011-11-gborg-sprint-report.rst new file mode 100644 --- /dev/null +++ b/blog/draft/2011-11-gborg-sprint-report.rst @@ -0,0 +1,90 @@ +Gothenburg sprint report +========================= + +In the past days, we have been busy hacking on PyPy at the Gothenburg sprint, +the second of this 2011. The sprint was hold at Laura's and Jacob's place, +and here is a brief report of what happened. + + + +In the first day we welcomed Mark Pearse, which was new to PyPy and at his +first sprint. Mark worked the whole sprint at the new SpecialisedTuple_ +branch, whose aim is to have a special implementation for small 2-items and +3-items tuples of primitive types (e.g., ints or floats) to save memory. Mark +paired with Antonio for a couple of days, then he continued alone and did amazing +job. He even learned how to properly do Test Driven Development :-). + +.. _SpecialisedTuple: http://bitbucket.org/pypy/pypy/changesets/tip/branch%28%22SpecialisedTuples%22%29 + +Antonio spent a couple of days investingating whether it is possible to use +`application checkpoint` libraries such as BLCR_ and DMTCP_ to save the state of +the PyPy interpreter between subsequent runs, thus saving also the +JIT-compiled code to reduce the warmup time. The conclusion is that these are +interesting technologies, but more work would be needed (either on the PyPy +side or on the checkpoint library side) before it can have a practical usage +for PyPy users. + +.. _`application checkpoint`: http://en.wikipedia.org/wiki/Application_checkpointing +.. _BLCR: http://ftg.lbl.gov/projects/CheckpointRestart/ +.. _DMTCP: http://dmtcp.sourceforge.net/ + +Then, Antonio spent most of the sprint working on his ffistruct_ branch, whose +aim is to provide a very JIT-friendly way to interact with C structures, and +eventually implement ``ctypes.Structure`` on top of that. The "cool part" of +the branch is already done, and the JIT already can compile set/get of fields +into a single fast assembly instruction, about 400 times faster than the +corresponding ctypes code. What is still left to do is to add a nicer syntax +(which is easy) and to implement all the ctypes peculiarities (which is +tedious, at best :-)). + +.. _ffistruct: http://bitbucket.org/pypy/pypy/changesets/tip/branch(%22ffistruct%22) + +As usual, Armin did tons of different stuff, including fixing a JIT bug, +improving the performance of ``file.readlines()`` and working on the STM_ +branch (for Software Transactional Memory), which is now able to run RPython +multithreaded programs using software transaction (as long as they don't fill +up all the memory, because support for the GC is still missing :-)). Finally, +he worked on improving the Windows version of PyPy, and while doing so he +discovered toghether with Anto a terrible bug which leaded to a continuous +leak of stack space because the JIT called some functions using the wrong +calling convention. + +.. _STM: http://bitbucket.org/pypy/pypy/changesets/tip/branch("stm") + +Håkan, with some help from Armim, worked on the `jit-targets`_ branch, whose goal +is to heavily refactor the way the traces are internally represented by the +JIT, so that in the end we can produce (even :-)) better code than what we do +nowadays. More details in this mail_. + +.. _`jit-targets`: http://bitbucket.org/pypy/pypy/changesets/tip/branch("stm") +.. _mail: http://mail.python.org/pipermail/pypy-dev/2011-November/008728.html + + +Andrew Dalke worked on a way to integrate PyPy with FORTRAN libraries, and in +particular the ones which are wrapped by Numpy and Scipy: in doing so, he +wrote f2pypy_, which is similar to the existing ``f2py`` but instead of +producing a CPython extension module it produces a pure python modules based +on ``ctypes``. More work is needed before it can be considered complete, but +``f2pypy`` is already able to produce a wrapper for BLAS which passes most of +the tests (although not all). + +.. _f2pypy: http://bitbucket.org/pypy/f2pypy + +Christian Tismer worked the whole sprint on the branch to make PyPy compatible +with Windows 64 bit. This needs a lot of work because a lot of PyPy is +written under the assumption than assumption that the ``long`` type in C has +the same bit size than ``void*``, which is not true on Win64. Christian says +that in the past Genova-Pegli sprint he completed 90% of the work, and in this +sprint he did the other 90% of the work. Obviously, what is left to complete +the task is the third 90% :-). More seriously, he estimated a total of 2-4 +person-weeks of work to finish it. + +But, all in all, the best part of the sprint has been the cake that Laura +cooked to celebrate the "5x faster than CPython" achievement. Well, actually +our speed_ page reports "only" 4.7x, but that's because in the meantime we +switched from comparing against CPython 2.6 to comparing against CPython 2.7, +which is slightly faster. We are confident that we will reach the 5x goal +again, and that will be the perfect excuse to eat another cake :-) + +.. _speed: http://speed.pypy.org/ + diff --git a/blog/draft/5x-cake.jpg b/blog/draft/5x-cake.jpg new file mode 100755 index 0000000000000000000000000000000000000000..2d712593681d479dd8211003f1949c79d7c8520a GIT binary patch [cut] From noreply at buildbot.pypy.org Thu Nov 10 15:50:23 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 Nov 2011 15:50:23 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20111110145023.E9E5882A87@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r3962:ee75b63c9b59 Date: 2011-11-10 15:48 +0100 http://bitbucket.org/pypy/extradoc/changeset/ee75b63c9b59/ Log: merge heads diff --git a/sprintinfo/gothenburg-2011-2/planning.txt b/sprintinfo/gothenburg-2011-2/planning.txt new file mode 100644 --- /dev/null +++ b/sprintinfo/gothenburg-2011-2/planning.txt @@ -0,0 +1,29 @@ +people present: + Christian Timser + Hakan Ardo + + + +done so far: + + Christian works on win64 support, continuing the job started in Genua + + Hakan refactors unrolling: adds a TARGET resoperation that can be used + in the middle of loops, and that defines a possible JUMP target. + + Armin did random stuff including progress on the STM branch. + + Mark worked on specializing 2-tuples to contain int/floats/strings. + + Andrew Dalke and Sam Lade worked on the previous days on numpy + integration, looking at f2py + + +today: + + the TARGET resoperation: Hakan, Armin + + win64: Christian + + specialized 2-tuples: Mark, Anto + From noreply at buildbot.pypy.org Thu Nov 10 15:54:00 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 15:54:00 +0100 (CET) Subject: [pypy-commit] pypy default: 'nt.spawnve()', a Windows function Message-ID: <20111110145400.7EA6A8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49280:bde6464f341d Date: 2011-11-10 15:53 +0100 http://bitbucket.org/pypy/pypy/changeset/bde6464f341d/ Log: 'nt.spawnve()', a Windows function diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep From noreply at buildbot.pypy.org Thu Nov 10 16:32:36 2011 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 10 Nov 2011 16:32:36 +0100 (CET) Subject: [pypy-commit] pypy default: fix an issue in clibffi that is triggered on big endian platforms due to the byte order when casting a larger data type to smaller one to be passed to a function called through ffi Message-ID: <20111110153236.422248292E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r49281:88daf71d8892 Date: 2011-11-09 16:22 +0100 http://bitbucket.org/pypy/pypy/changeset/88daf71d8892/ Log: fix an issue in clibffi that is triggered on big endian platforms due to the byte order when casting a larger data type to smaller one to be passed to a function called through ffi diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -337,15 +340,46 @@ return TYPE_MAP[tp] cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' -def push_arg_as_ffiptr(ffitp, arg, ll_buf): +def push_arg_as_ffiptr_base(ffitp, arg, ll_buf): + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) + # XXX is this valid in C?, for args that are larger than the size of + # ll_buf we write over the boundaries of the allocated char array and + # just keep as much bytes as we need for the target type. Maybe using + # memcpy would be better here. Also this + # only works on little endian architectures + TP = lltype.typeOf(arg) + TP_P = lltype.Ptr(rffi.CArray(TP)) + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg +push_arg_as_ffiptr_base._annspecialcase_ = 'specialize:argtype(1)' + +def push_arg_as_ffiptr_memcpy(ffitp, arg, ll_buf): # this is for primitive types. For structures and arrays # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg -push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we do not can directly write the + # value to the buffer + if c_size == TP_size: + return push_arg_as_ffiptr_base(ffitp, arg, ll_buf) + + # store arg in a small box in memory + # and copy the relevant bytes over to the target buffer (ll_buf) + with lltype.scoped_alloc(TP_P.TO, TP_size) as argbuf: + argbuf[0] = arg + cargbuf = rffi.cast(rffi.CCHARP, argbuf) + ptr = rffi.ptradd(cargbuf, TP_size - c_size) + rffi.c_memcpy(ll_buf, ptr, c_size) +push_arg_as_ffiptr_memcpy._annspecialcase_ = 'specialize:argtype(1)' + +if _LITTLE_ENDIAN: + push_arg_as_ffiptr = push_arg_as_ffiptr_base +else: + push_arg_as_ffiptr = push_arg_as_ffiptr_memcpy # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) From noreply at buildbot.pypy.org Thu Nov 10 16:54:07 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 16:54:07 +0100 (CET) Subject: [pypy-commit] buildbot default: Add Win64. Message-ID: <20111110155407.8A59B8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r596:2f982db47d5d Date: 2011-11-10 16:53 +0100 http://bitbucket.org/pypy/buildbot/changeset/2f982db47d5d/ Log: Add Win64. diff --git a/bot2/pypybuildbot/master.py b/bot2/pypybuildbot/master.py --- a/bot2/pypybuildbot/master.py +++ b/bot2/pypybuildbot/master.py @@ -134,6 +134,7 @@ LINUX64 = "own-linux-x86-64" MACOSX32 = "own-macosx-x86-32" WIN32 = "own-win-x86-32" +WIN64 = "own-win-x86-64" APPLVLLINUX32 = "pypy-c-app-level-linux-x86-32" APPLVLLINUX64 = "pypy-c-app-level-linux-x86-64" @@ -144,6 +145,7 @@ OJITLINUX32 = "pypy-c-Ojit-no-jit-linux-x86-32" JITMACOSX64 = "pypy-c-jit-macosx-x86-64" JITWIN32 = "pypy-c-jit-win-x86-32" +JITWIN64 = "pypy-c-jit-win-x86-64" JITFREEBSD64 = 'pypy-c-jit-freebsd-7-x86-64' JITONLYLINUX32 = "jitonly-own-linux-x86-32" @@ -311,6 +313,12 @@ "factory": pypyOwnTestFactoryWin, "category": 'win32' }, + {"name": WIN64, + "slavenames": ["snakepit64"], + "builddir": WIN64, + "factory": pypyOwnTestFactoryWin, + "category": 'win32' + }, {"name": APPLVLWIN32, "slavenames": ["bigboard"], "builddir": APPLVLWIN32, @@ -323,6 +331,12 @@ 'factory' : pypyJITTranslatedTestFactoryWin, 'category' : 'win32', }, + {"name" : JITWIN64, + "slavenames": ["snakepit64"], + 'builddir' : JITWIN64, + 'factory' : pypyJITTranslatedTestFactoryWin, + 'category' : 'win32', + }, {"name" : JITFREEBSD64, "slavenames": ['headless'], 'builddir' : JITFREEBSD64, From noreply at buildbot.pypy.org Thu Nov 10 17:18:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 17:18:17 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: All *binary* versions Message-ID: <20111110161817.3DE758292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r290:a4cc49d479d8 Date: 2011-11-10 17:18 +0100 http://bitbucket.org/pypy/pypy.org/changeset/a4cc49d479d8/ Log: All *binary* versions diff --git a/source/download.txt b/source/download.txt --- a/source/download.txt +++ b/source/download.txt @@ -84,7 +84,7 @@ Installing ------------------------------- -All versions are packaged in a ``tar.bz2`` or ``zip`` file. When +All binary versions are packaged in a ``tar.bz2`` or ``zip`` file. When uncompressed, they run in-place. For now you can uncompress them either somewhere in your home directory or, say, in ``/opt``, and if you want, put a symlink from somewhere like From noreply at buildbot.pypy.org Thu Nov 10 17:48:01 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 17:48:01 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: new name for the branch Message-ID: <20111110164801.61F7A8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49282:0309b15c05f8 Date: 2011-11-10 17:47 +0100 http://bitbucket.org/pypy/pypy/changeset/0309b15c05f8/ Log: new name for the branch From noreply at buildbot.pypy.org Thu Nov 10 17:50:32 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Thu, 10 Nov 2011 17:50:32 +0100 (CET) Subject: [pypy-commit] pypy win64_gborg: renamed to win64-stage1 Message-ID: <20111110165032.EF6088292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64_gborg Changeset: r49283:ca0f81ea74b5 Date: 2011-11-10 17:50 +0100 http://bitbucket.org/pypy/pypy/changeset/ca0f81ea74b5/ Log: renamed to win64-stage1 From noreply at buildbot.pypy.org Thu Nov 10 18:14:20 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 Nov 2011 18:14:20 +0100 (CET) Subject: [pypy-commit] pypy default: Tentatively rewrite push_arg_as_ffiptr(). Message-ID: <20111110171421.003128292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49284:b6390a34f261 Date: 2011-11-10 18:14 +0100 http://bitbucket.org/pypy/pypy/changeset/b6390a34f261/ Log: Tentatively rewrite push_arg_as_ffiptr(). diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -340,46 +340,38 @@ return TYPE_MAP[tp] cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' -def push_arg_as_ffiptr_base(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) - # XXX is this valid in C?, for args that are larger than the size of - # ll_buf we write over the boundaries of the allocated char array and - # just keep as much bytes as we need for the target type. Maybe using - # memcpy would be better here. Also this - # only works on little endian architectures - TP = lltype.typeOf(arg) - TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg -push_arg_as_ffiptr_base._annspecialcase_ = 'specialize:argtype(1)' - -def push_arg_as_ffiptr_memcpy(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) +def push_arg_as_ffiptr(ffitp, arg, ll_buf): + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) TP_size = rffi.sizeof(TP) c_size = intmask(ffitp.c_size) - - # if both types have the same size, we do not can directly write the + # if both types have the same size, we can directly write the # value to the buffer if c_size == TP_size: - return push_arg_as_ffiptr_base(ffitp, arg, ll_buf) - - # store arg in a small box in memory - # and copy the relevant bytes over to the target buffer (ll_buf) - with lltype.scoped_alloc(TP_P.TO, TP_size) as argbuf: - argbuf[0] = arg - cargbuf = rffi.cast(rffi.CCHARP, argbuf) - ptr = rffi.ptradd(cargbuf, TP_size - c_size) - rffi.c_memcpy(ll_buf, ptr, c_size) -push_arg_as_ffiptr_memcpy._annspecialcase_ = 'specialize:argtype(1)' - -if _LITTLE_ENDIAN: - push_arg_as_ffiptr = push_arg_as_ffiptr_base -else: - push_arg_as_ffiptr = push_arg_as_ffiptr_memcpy + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError +push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) From noreply at buildbot.pypy.org Thu Nov 10 19:13:26 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 10 Nov 2011 19:13:26 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: progress, we now inherit from int Message-ID: <20111110181326.607AE82A87@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49286:ef2232d35126 Date: 2011-11-10 13:13 -0500 http://bitbucket.org/pypy/pypy/changeset/ef2232d35126/ Log: progress, we now inherit from int diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,7 +1,11 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef +from pypy.objspace.std.inttype import int_typedef +from pypy.rlib.rarithmetic import LONG_BIT +MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () + class W_GenericBox(Wrappable): pass @@ -22,6 +26,9 @@ class W_SignedIntegerBox(W_IntegerBox): pass +class W_LongBox(W_SignedIntegerBox): + pass + class W_Int64Box(W_SignedIntegerBox): pass @@ -42,4 +49,29 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpy", +) + +W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, + __module__ = "numpy", +) + +W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef, + __module__ = "numpy", +) + +W_SignedIntegerBox.typedef = TypeDef("signedinteger", W_IntegerBox.typedef, + __module__ = "numpy", +) + +# XXX: fix for 32bit +if LONG_BIT == 32: + long_name = "int32" +elif LONG_BIT == 64: + long_name = "int64" +W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), + __module__ = "numpy", +) + +W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, + __module__ = "numpy", ) \ No newline at end of file diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -135,15 +135,11 @@ char="I", ) if LONG_BIT == 32: - longtype = types.Int32() - unsigned_longtype = types.UInt32() name = "int32" elif LONG_BIT == 64: - longtype = types.Int64() - unsigned_longtype = types.UInt64() name = "int64" self.w_longdtype = W_Dtype( - longtype, + types.Long(), num=7, kind=SIGNEDLTR, name=name, @@ -151,7 +147,7 @@ alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( - unsigned_longtype, + types.ULong(), num=8, kind=UNSIGNEDLTR, name="u" + name, diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -1,6 +1,7 @@ from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat +from pypy.rlib.rarithmetic import LONG_BIT from pypy.rpython.lltypesystem import lltype, rffi @@ -85,6 +86,13 @@ class UInt32(Primitive): T = rffi.UINT +class Long(Primitive): + T = rffi.LONG + BoxType = interp_boxes.W_LongBox + +class ULong(Primitive): + T = rffi.ULONG + class Int64(Integer): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box From noreply at buildbot.pypy.org Thu Nov 10 19:13:25 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 10 Nov 2011 19:13:25 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default in Message-ID: <20111110181325.32DD08292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49285:59aca542f69a Date: 2011-11-09 18:19 -0500 http://bitbucket.org/pypy/pypy/changeset/59aca542f69a/ Log: merged default in diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -445,7 +449,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5000,6 +5000,7 @@ self.optimize_loop(ops, expected) def test_known_equal_ints(self): + py.test.skip("in-progress") ops = """ [i0, i1, i2, p0] i3 = int_eq(i0, i1) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -216,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,32 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +152,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +186,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +203,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -622,9 +622,9 @@ else: mk.definition('DEBUGFLAGS', '-O1 -g') if sys.platform == 'win32': - mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)') + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') else: - mk.rule('debug_target', '$(TARGET)') + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Thu Nov 10 20:08:01 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 10 Nov 2011 20:08:01 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: tons more box types, and some methods on them Message-ID: <20111110190801.4057C8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49287:b8595c5ed572 Date: 2011-11-10 14:07 -0500 http://bitbucket.org/pypy/pypy/changeset/b8595c5ed572/ Log: tons more box types, and some methods on them diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -338,7 +338,7 @@ elif isinstance(w_res, BoolObject): dtype = interp.space.fromcache(W_BoolDtype) elif isinstance(w_res, interp_boxes.W_GenericBox): - dtype = w_res.descr_get_dtype(interp.space) + dtype = w_res.get_dtype(interp.space) else: dtype = None return scalar_w(interp.space, dtype, w_res) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,13 +1,38 @@ from pypy.interpreter.baseobjspace import Wrappable +from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT +from pypy.tool.sourcetools import func_with_new_name MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () +def dtype_getter(name): + @staticmethod + def get_dtype(space): + from pypy.module.micronumpy.interp_dtype import get_dtype_cache + return getattr(get_dtype_cache(space), "w_%sdtype" % name) + return get_dtype + class W_GenericBox(Wrappable): - pass + def descr_repr(self, space): + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + + def descr_int(self, space): + return space.wrap(self.convert_to(W_LongBox.get_dtype(space)).value) + + def descr_float(self, space): + return space.wrap(self.convert_to(W_Float64Box.get_dtype(space)).value) + + def _binop_impl(ufunc_name): + def impl(self, space, w_other): + from pypy.module.micronumpy import interp_ufuncs + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) + return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) + + descr_eq = _binop_impl("equal") + class W_BoolBox(Wrappable): def __init__(self, value): @@ -26,10 +51,37 @@ class W_SignedIntegerBox(W_IntegerBox): pass +class W_UnsignedIntgerBox(W_IntegerBox): + pass + +class W_Int8Box(W_SignedIntegerBox): + pass + +class W_UInt8Box(W_UnsignedIntgerBox): + pass + +class W_Int16Box(W_SignedIntegerBox): + pass + +class W_UInt16Box(W_UnsignedIntgerBox): + pass + +class W_Int32Box(W_SignedIntegerBox): + pass + +class W_UInt32Box(W_UnsignedIntgerBox): + pass + class W_LongBox(W_SignedIntegerBox): + get_dtype = dtype_getter("long") + +class W_ULongBox(W_UnsignedIntgerBox): pass class W_Int64Box(W_SignedIntegerBox): + get_dtype = dtype_getter("int64") + +class W_UInt64Box(W_UnsignedIntgerBox): pass class W_InexactBox(W_NumberBox): @@ -38,13 +90,23 @@ class W_FloatingBox(W_InexactBox): pass +class W_Float32Box(W_FloatingBox): + pass + class W_Float64Box(W_FloatingBox): - def descr_get_dtype(self, space): - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - return get_dtype_cache(space).w_float64dtype + get_dtype = dtype_getter("float64") + + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpy", + + __repr__ = interp2app(W_GenericBox.descr_repr), + __int__ = interp2app(W_GenericBox.descr_int), + __float__ = interp2app(W_GenericBox.descr_float), + + __eq__ = interp2app(W_GenericBox.descr_eq), + ) W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, @@ -63,7 +125,6 @@ __module__ = "numpy", ) -# XXX: fix for 32bit if LONG_BIT == 32: long_name = "int32" elif LONG_BIT == 64: diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,7 +1,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import types, signature from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT @@ -47,6 +47,10 @@ struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) self.itemtype.store(struct_ptr, 0, box) + def fill(self, storage, box, start, stop): + start_ptr = rffi.ptradd(storage, start * self.itemtype.get_element_size()) + self.itemtype.fill(start_ptr, box, stop - start) + def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) @@ -71,6 +75,9 @@ def descr_repr(self, space): return space.wrap("dtype('%s')" % self.name) + def descr_get_itemsize(self, space): + return space.wrap(self.itemtype.get_element_size()) + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpy", __new__ = interp2app(W_Dtype.descr__new__.im_func), @@ -80,6 +87,7 @@ num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), + itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), ) class DtypeCache(object): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -566,7 +566,7 @@ ) arr = SingleDimArray(size, dtype=dtype) - one = dtype.adapt_val(1) + one = dtype.box(1) arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -141,7 +141,7 @@ promote_bools=self.promote_bools, ) if self.comparison_func: - res_dtype = space.fromcache(interp_dtype.W_BoolDtype) + res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype else: res_dtype = calc_dtype if isinstance(w_lhs, Scalar) and isinstance(w_rhs, Scalar): @@ -243,9 +243,9 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) - long_dtype = space.fromcache(interp_dtype.W_LongDtype) - int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) + bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype + long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype + int64_dtype = interp_dtype.get_dtype_cache(space).w_int64dtype if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: @@ -261,7 +261,7 @@ current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype return current_guess - return space.fromcache(interp_dtype.W_Float64Dtype) + return interp_dtype.get_dtype_cache(space).w_float64dtype def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func): @@ -273,9 +273,8 @@ def impl(res_dtype, lvalue, rvalue): res = getattr(res_dtype.itemtype, op_name)(lvalue, rvalue) if comparison_func: - booldtype = space.fromcache(interp_dtype.W_BoolDtype) - assert isinstance(booldtype, interp_dtype.W_BoolDtype) - res = booldtype.box(res) + bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype + res = bool_dtype.box(res) return res return func_with_new_name(impl, ufunc_name) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -25,6 +25,11 @@ return box.value def coerce(self, space, w_item): + if isinstance(w_item, self.BoxType): + return w_item + return self._coerce(space, w_item) + + def _coerce(self, space, w_item): raise NotImplementedError def read(self, ptr, offset): @@ -38,9 +43,18 @@ ptr = rffi.ptradd(ptr, offset) rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] = value + def fill(self, ptr, box, n): + value = self.unbox(box) + for i in xrange(n): + rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] = value + ptr = rffi.ptradd(ptr, self.get_element_size()) + def add(self, v1, v2): return self.box(self.unbox(v1) + self.unbox(v2)) + def eq(self, v1, v2): + return self.unbox(v1) == self.unbox(v2) + def max(self, v1, v2): return self.box(max(self.unbox(v1), self.unbox(v2))) @@ -61,55 +75,68 @@ else: return self.False - def coerce(self, space, w_item): + def _coerce(self, space, w_item): return self.box(space.is_true(w_item)) class Integer(Primitive): - def coerce(self, space, w_item): + def _coerce(self, space, w_item): return self.box(space.int_w(space.int(w_item))) -class Int8(Primitive): + def str_format(self, box): + value = self.unbox(box) + return str(value) + +class Int8(Integer): T = rffi.SIGNEDCHAR + BoxType = interp_boxes.W_Int8Box -class UInt8(Primitive): +class UInt8(Integer): T = rffi.UCHAR + BoxType = interp_boxes.W_UInt8Box -class Int16(Primitive): +class Int16(Integer): T = rffi.SHORT + BoxType = interp_boxes.W_Int16Box -class UInt16(Primitive): +class UInt16(Integer): T = rffi.USHORT + BoxType = interp_boxes.W_UInt16Box -class Int32(Primitive): +class Int32(Integer): T = rffi.INT + BoxType = interp_boxes.W_Int32Box -class UInt32(Primitive): +class UInt32(Integer): T = rffi.UINT + BoxType = interp_boxes.W_UInt32Box -class Long(Primitive): +class Long(Integer): T = rffi.LONG BoxType = interp_boxes.W_LongBox -class ULong(Primitive): +class ULong(Integer): T = rffi.ULONG + BoxType = interp_boxes.W_ULongBox class Int64(Integer): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box -class UInt64(Primitive): +class UInt64(Integer): T = rffi.ULONGLONG + BoxType = interp_boxes.W_UInt64Box class Float(Primitive): - def coerce(self, space, w_item): + def _coerce(self, space, w_item): return self.box(space.float_w(space.float(w_item))) def str_format(self, box): value = self.unbox(box) return float2string(value, "g", rfloat.DTSF_STR_PRECISION) -class Float32(Primitive): +class Float32(Float): T = rffi.FLOAT + BoxType = interp_boxes.W_Float32Box class Float64(Float): T = rffi.DOUBLE From noreply at buildbot.pypy.org Thu Nov 10 20:43:06 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Thu, 10 Nov 2011 20:43:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: update type's repr Message-ID: <20111110194306.674118292E@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49288:21b2914fdb96 Date: 2011-11-10 11:42 -0800 http://bitbucket.org/pypy/pypy/changeset/21b2914fdb96/ Log: update type's repr diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -739,10 +739,10 @@ class A(object): pass assert repr(A) == "" - assert repr(type(type)) == "" - assert repr(complex) == "" - assert repr(property) == "" - assert repr(TypeError) == "" + assert repr(type(type)) == "" + assert repr(complex) == "" + assert repr(property) == "" + assert repr(TypeError) == "" def test_invalid_mro(self): class A(object): diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -518,10 +518,10 @@ def get_module_type_name(w_self): space = w_self.space w_mod = w_self.get_module() - if not space.isinstance_w(w_mod, space.w_str): + if not space.isinstance_w(w_mod, space.w_unicode): mod = 'builtins' else: - mod = space.str_w(w_mod) + mod = space.unicode_w(w_mod) if mod != 'builtins': return '%s.%s' % (mod, w_self.name) else: @@ -871,19 +871,14 @@ def repr__Type(space, w_obj): w_mod = w_obj.get_module() - if not space.isinstance_w(w_mod, space.w_str): + if not space.isinstance_w(w_mod, space.w_unicode): mod = None else: - mod = space.str_w(w_mod) - if (not w_obj.is_heaptype() or - (mod == '__builtin__' or mod == 'exceptions')): - kind = 'type' + mod = space.unicode_w(w_mod) + if mod is not None and mod != 'builtins': + return space.wrap("" % (mod, w_obj.name)) else: - kind = 'class' - if mod is not None and mod !='builtins': - return space.wrap("<%s '%s.%s'>" % (kind, mod, w_obj.name)) - else: - return space.wrap("<%s '%s'>" % (kind, w_obj.name)) + return space.wrap("" % (w_obj.name)) def getattr__Type_ANY(space, w_type, w_name): name = space.str_w(w_name) From noreply at buildbot.pypy.org Thu Nov 10 21:25:17 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 10 Nov 2011 21:25:17 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: convert tests Message-ID: <20111110202517.E6BA88292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r49289:ed67ff4c7185 Date: 2011-11-10 21:24 +0100 http://bitbucket.org/pypy/pypy/changeset/ed67ff4c7185/ Log: convert tests diff --git a/pypy/jit/tl/spli/test/test_jit.py b/pypy/jit/tl/spli/test/test_jit.py --- a/pypy/jit/tl/spli/test/test_jit.py +++ b/pypy/jit/tl/spli/test/test_jit.py @@ -36,7 +36,7 @@ i = i + 1 return i self.interpret(f, []) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_bridge(self): py.test.skip('We currently cant virtualize across bridges') @@ -52,7 +52,7 @@ return total self.interpret(f, [1, 10]) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_bridge_bad_case(self): py.test.skip('We currently cant virtualize across bridges') @@ -67,7 +67,7 @@ return a + b self.interpret(f, [1, 10]) - self.check_loops(new_with_vtable=1) # XXX should eventually be 0? + self.check_resops(new_with_vtable=1) # XXX should eventually be 0? # I think it should be either 0 or 2, 1 makes little sense # If the loop after entering goes first time to the bridge, a # is rewrapped again, without preserving the identity. I'm not diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -51,9 +51,11 @@ b = a + a b -> 3 """) - self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_class': 7, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == 3 + 3 def test_floatadd(self): @@ -62,9 +64,11 @@ a -> 3 """) assert result == 3 + 3 - self.check_loops({"getarrayitem_raw": 1, "float_add": 1, - "setarrayitem_raw": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_class': 7, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 2}) def test_sum(self): result = self.run(""" @@ -73,9 +77,10 @@ sum(b) """) assert result == 2 * sum(range(30)) - self.check_loops({"getarrayitem_raw": 2, "float_add": 2, - "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'guard_class': 7, 'getfield_gc': 11, + 'guard_true': 2, 'jump': 2, 'getarrayitem_raw': 4, + 'guard_value': 2, 'guard_isnull': 1, 'int_lt': 2, + 'float_add': 4, 'int_add': 2}) def test_prod(self): result = self.run(""" @@ -87,9 +92,10 @@ for i in range(30): expected *= i * 2 assert result == expected - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_mul": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'int_lt': 2, 'getfield_gc': 11, 'guard_class': 7, + 'float_mul': 2, 'guard_true': 2, 'guard_isnull': 1, + 'jump': 2, 'getarrayitem_raw': 4, 'float_add': 2, + 'int_add': 2, 'guard_value': 2}) def test_max(self): py.test.skip("broken, investigate") @@ -125,10 +131,10 @@ any(b) """) assert result == 1 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1, - "guard_false": 1}) + self.check_resops({'int_lt': 2, 'getfield_gc': 9, 'guard_class': 7, + 'guard_value': 1, 'int_add': 2, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'getarrayitem_raw': 4, + 'float_add': 2, 'guard_false': 2, 'float_ne': 2}) def test_already_forced(self): result = self.run(""" @@ -142,9 +148,12 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. - self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, - "setarrayitem_raw": 2, "int_add": 2, - "int_lt": 2, "guard_true": 2, "jump": 2}) + self.check_resops({'setarrayitem_raw': 4, 'guard_nonnull': 1, + 'getfield_gc': 23, 'guard_class': 14, + 'guard_true': 4, 'float_mul': 2, 'guard_isnull': 2, + 'jump': 4, 'int_lt': 4, 'float_add': 2, + 'int_add': 4, 'guard_value': 2, + 'getarrayitem_raw': 4}) def test_ufunc(self): result = self.run(""" @@ -154,10 +163,11 @@ c -> 3 """) assert result == -6 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, - "setarrayitem_raw": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1, - }) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 15, + 'guard_class': 9, 'float_neg': 2, 'guard_true': 2, + 'guard_isnull': 2, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 2, + 'getarrayitem_raw': 4}) def test_specialization(self): self.run(""" @@ -202,9 +212,11 @@ return v.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 1, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 9, + 'guard_true': 2, 'guard_isnull': 1, 'jump': 2, + 'int_lt': 2, 'float_add': 2, 'int_mul': 2, + 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == f(5) def test_slice2(self): @@ -224,9 +236,11 @@ return v.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_true': 2, 'guard_isnull': 1, 'jump': 2, + 'int_lt': 2, 'float_add': 2, 'int_mul': 4, + 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == f(5) def test_setslice(self): @@ -243,10 +257,12 @@ return ar.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'getarrayitem_raw': 2, - 'float_add' : 1, - 'setarrayitem_raw': 1, 'int_add': 2, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'int_is_true': 1, 'setarrayitem_raw': 2, + 'guard_nonnull': 1, 'getfield_gc': 9, + 'guard_false': 1, 'guard_true': 3, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_gt': 1, 'int_add': 4, + 'guard_value': 1, 'getarrayitem_raw': 4}) assert result == 11.0 def test_int32_sum(self): diff --git a/pypy/rlib/rsre/test/test_zjit.py b/pypy/rlib/rsre/test/test_zjit.py --- a/pypy/rlib/rsre/test/test_zjit.py +++ b/pypy/rlib/rsre/test/test_zjit.py @@ -96,7 +96,7 @@ def test_fast_search(self): res = self.meta_interp_search(r"", "eua") assert res == 15 - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) def test_regular_search(self): res = self.meta_interp_search(r"<\w+>", "eiofweoxdiwhdohua") @@ -120,7 +120,7 @@ def test_aorbstar(self): res = self.meta_interp_match("(a|b)*a", "a" * 100) assert res == 100 - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) # group guards tests @@ -165,4 +165,4 @@ def test_find_repetition_end_fastpath(self): res = self.meta_interp_search(r"b+", "a"*30 + "b") assert res == 30 - self.check_loops(call=0) + self.check_resops(call=0) From noreply at buildbot.pypy.org Fri Nov 11 08:06:28 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 11 Nov 2011 08:06:28 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: hg merge default Message-ID: <20111111070628.AE6AD8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r49290:f8171c00d11a Date: 2011-11-11 07:29 +0100 http://bitbucket.org/pypy/pypy/changeset/f8171c00d11a/ Log: hg merge default diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -445,7 +449,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -249,6 +249,8 @@ CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -260,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -327,13 +330,13 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -341,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -363,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -497,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -681,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -4964,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -6281,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6296,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -529,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -543,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3667,3 +3667,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Fri Nov 11 08:06:29 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 11 Nov 2011 08:06:29 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111111070629.F252D8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49291:d9c82a0bbd6c Date: 2011-11-11 08:06 +0100 http://bitbucket.org/pypy/pypy/changeset/d9c82a0bbd6c/ Log: fix test diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -162,13 +162,13 @@ get_stats().check_loops(expected=expected, everywhere=everywhere, **check) - def check_trace_count(self, count): + def check_trace_count(self, count): # was check_loop_count # The number of traces compiled assert get_stats().compiled_count == count def check_trace_count_at_most(self, count): assert get_stats().compiled_count <= count - def check_jitcell_token_count(self, count): + def check_jitcell_token_count(self, count): # was check_tree_loop_count assert len(get_stats().jitcell_tokens) == count def check_target_token_count(self, count): diff --git a/pypy/jit/metainterp/test/test_send.py b/pypy/jit/metainterp/test/test_send.py --- a/pypy/jit/metainterp/test/test_send.py +++ b/pypy/jit/metainterp/test/test_send.py @@ -20,7 +20,7 @@ return c res = self.meta_interp(f, [1]) assert res == 2 - self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + self.check_resops({'jump': 1, 'guard_true': 2, 'int_gt': 2, 'int_sub': 2}) # all folded away def test_red_builtin_send(self): @@ -67,7 +67,7 @@ backendopt=True) assert res == 43 self.check_resops({'int_gt': 2, 'getfield_gc': 2, - 'guard_true': 2, 'int_sub': 2, 'jump': 2, + 'guard_true': 2, 'int_sub': 2, 'jump': 1, 'call': 2, 'guard_no_exception': 2, 'int_add': 2}) @@ -160,7 +160,7 @@ res = self.meta_interp(f, [j], policy=policy) assert res == 42 self.check_enter_count_at_most(5) - self.check_loop_count_at_most(5) + self.check_trace_count_at_most(5) def test_oosend_guard_failure(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'w']) @@ -199,7 +199,7 @@ # InvalidLoop condition, and was then unrolled, giving two copies # of the body in a single bigger loop with no failing guard except # the final one. - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) @@ -240,7 +240,7 @@ assert res == f(3, 28) res = self.meta_interp(f, [4, 28]) assert res == f(4, 28) - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(guard_class=1, int_add=4, int_sub=4) self.check_jumps(14) @@ -277,7 +277,7 @@ # looking only at the loop, we deduce that the class of 'w' is 'W2'. # However, this doesn't match the initial value of 'w'. # XXX This not completely easy to check... - self.check_loop_count(1) + self.check_trace_count(1) self.check_resops(guard_class=1, new_with_vtable=0, int_lshift=2, int_add=0, new=0) @@ -306,7 +306,7 @@ return x res = self.meta_interp(f, [198], policy=StopAtXPolicy(externfn)) assert res == f(198) - self.check_loop_count(4) + self.check_trace_count(4) def test_indirect_call_unknown_object_2(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'state']) @@ -340,9 +340,9 @@ res = self.meta_interp(f, [198], policy=StopAtXPolicy(State.externfn.im_func)) assert res == f(198) - # we get two TreeLoops: an initial one, and one entering from - # the interpreter - self.check_tree_loop_count(2) + # we get two TargetTokens, one for the loop and one for the preamble + self.check_jitcell_token_count(1) + self.check_target_token_count(2) def test_indirect_call_unknown_object_3(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'z', 'state']) @@ -377,9 +377,10 @@ res = self.meta_interp(f, [198], policy=StopAtXPolicy(State.externfn.im_func)) assert res == f(198) - # we get four TreeLoops: one for each of the 3 getvalue functions, - # and one entering from the interpreter - self.check_tree_loop_count(4) + # we get four TargetTokens: one for each of the 3 getvalue functions, + # and one entering from the interpreter (the preamble) + self.check_jitcell_token_count(1) + self.check_target_token_count(4) def test_two_behaviors(self): py.test.skip("XXX fix me!!!!!!! problem in optimize.py") @@ -403,7 +404,7 @@ # is true if we replace "if cases[y]" above with "if not cases[y]" # -- so there is no good reason that it fails. self.check_loops(new_with_vtable=0) - self.check_loop_count(2) + self.check_trace_count(2) def test_behavior_change_after_a_while(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'x']) @@ -431,9 +432,10 @@ assert res == 200 # we expect 2 versions of the loop, 1 entry bridge, # and 1 bridge going from the - # loop back to the start of the entry bridge - self.check_loop_count(3) # 2 loop + 1 bridge - self.check_tree_loop_count(3) # 2 loop + 1 entry bridge (argh) + # loop back to the loop + self.check_trace_count(2) # preamble/loop and 1 bridge + self.check_jitcell_token_count(1) + self.check_target_token_count(3) # preamble, Int1, Int2 self.check_aborted_count(0) def test_three_cases(self): @@ -454,7 +456,7 @@ return node.x res = self.meta_interp(f, [55]) assert res == f(55) - self.check_tree_loop_count(4) + self.check_trace_count(3) def test_three_classes(self): class Base: @@ -484,7 +486,7 @@ return n res = self.meta_interp(f, [55], policy=StopAtXPolicy(extern)) assert res == f(55) - self.check_tree_loop_count(2) + self.check_jitcell_token_count(1) def test_bug1(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'node']) From noreply at buildbot.pypy.org Fri Nov 11 08:36:23 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 11 Nov 2011 08:36:23 +0100 (CET) Subject: [pypy-commit] pypy jit-refactor-tests: kill check_loops, it is now replaced with check_resops Message-ID: <20111111073623.38ECA8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-refactor-tests Changeset: r49292:03055b5850d3 Date: 2011-11-11 08:35 +0100 http://bitbucket.org/pypy/pypy/changeset/03055b5850d3/ Log: kill check_loops, it is now replaced with check_resops diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1013,36 +1013,6 @@ "found %d %r, expected %d" % (found, insn, expected_count)) return insns - def check_loops(self, expected=None, everywhere=False, **check): - insns = {} - for loop in self.loops: - #if not everywhere: - # if getattr(loop, '_ignore_during_counting', False): - # continue - insns = loop.summary(adding_insns=insns) - if expected is not None: - insns.pop('debug_merge_point', None) - print - print - print " self.check_resops(%s)" % str(insns) - print - import pdb; pdb.set_trace() - else: - chk = ['%s=%d' % (i, insns.get(i, 0)) for i in check] - print - print - print " self.check_resops(%s)" % ', '.join(chk) - print - import pdb; pdb.set_trace() - return - - for insn, expected_count in check.items(): - getattr(rop, insn.upper()) # fails if 'rop.INSN' does not exist - found = insns.get(insn, 0) - assert found == expected_count, ( - "found %d %r, expected %d" % (found, insn, expected_count)) - return insns - def check_consistency(self): "NOT_RPYTHON" for loop in self.loops: From noreply at buildbot.pypy.org Fri Nov 11 09:22:02 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 09:22:02 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Tentatively rewrite push_arg_as_ffiptr(). Message-ID: <20111111082202.490E282A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: ppc-jit-backend Changeset: r49294:066646be3787 Date: 2011-11-10 18:14 +0100 http://bitbucket.org/pypy/pypy/changeset/066646be3787/ Log: Tentatively rewrite push_arg_as_ffiptr(). diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -340,46 +340,38 @@ return TYPE_MAP[tp] cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' -def push_arg_as_ffiptr_base(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) - # XXX is this valid in C?, for args that are larger than the size of - # ll_buf we write over the boundaries of the allocated char array and - # just keep as much bytes as we need for the target type. Maybe using - # memcpy would be better here. Also this - # only works on little endian architectures - TP = lltype.typeOf(arg) - TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg -push_arg_as_ffiptr_base._annspecialcase_ = 'specialize:argtype(1)' - -def push_arg_as_ffiptr_memcpy(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) +def push_arg_as_ffiptr(ffitp, arg, ll_buf): + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) TP_size = rffi.sizeof(TP) c_size = intmask(ffitp.c_size) - - # if both types have the same size, we do not can directly write the + # if both types have the same size, we can directly write the # value to the buffer if c_size == TP_size: - return push_arg_as_ffiptr_base(ffitp, arg, ll_buf) - - # store arg in a small box in memory - # and copy the relevant bytes over to the target buffer (ll_buf) - with lltype.scoped_alloc(TP_P.TO, TP_size) as argbuf: - argbuf[0] = arg - cargbuf = rffi.cast(rffi.CCHARP, argbuf) - ptr = rffi.ptradd(cargbuf, TP_size - c_size) - rffi.c_memcpy(ll_buf, ptr, c_size) -push_arg_as_ffiptr_memcpy._annspecialcase_ = 'specialize:argtype(1)' - -if _LITTLE_ENDIAN: - push_arg_as_ffiptr = push_arg_as_ffiptr_base -else: - push_arg_as_ffiptr = push_arg_as_ffiptr_memcpy + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError +push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) From noreply at buildbot.pypy.org Fri Nov 11 09:22:01 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 11 Nov 2011 09:22:01 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: fix test Message-ID: <20111111082201.0E2C28292E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49293:3079baf884b8 Date: 2011-11-04 12:53 +0100 http://bitbucket.org/pypy/pypy/changeset/3079baf884b8/ Log: fix test diff --git a/pypy/jit/backend/arm/test/test_assembler.py b/pypy/jit/backend/arm/test/test_assembler.py --- a/pypy/jit/backend/arm/test/test_assembler.py +++ b/pypy/jit/backend/arm/test/test_assembler.py @@ -32,7 +32,8 @@ def test_make_operation_list(self): i = rop.INT_ADD - assert self.a.operations[i] is AssemblerARM.emit_op_int_add.im_func + from pypy.jit.backend.arm import assembler + assert assembler.asm_operations[i] is AssemblerARM.emit_op_int_add.im_func def test_load_small_int_to_reg(self): self.a.gen_func_prolog() From noreply at buildbot.pypy.org Fri Nov 11 11:05:20 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 11:05:20 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove GPR 1 from those registers which are preserved across function calls Message-ID: <20111111100520.91D0E8292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49295:4b4ba001ebbb Date: 2011-11-11 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/4b4ba001ebbb/ Log: Remove GPR 1 from those registers which are preserved across function calls diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/ppcgen/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/ppcgen/register.py @@ -6,7 +6,7 @@ r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31\ = ALL_REGS -NONVOLATILES = [r1, r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, +NONVOLATILES = [r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31] VOLATILES = [r0, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13] From noreply at buildbot.pypy.org Fri Nov 11 11:40:10 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 11:40:10 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: changed BACKCHAIN_SIZE to 3 * WORD Message-ID: <20111111104010.43E5D8292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49296:ebd183097a54 Date: 2011-11-11 11:39 +0100 http://bitbucket.org/pypy/pypy/changeset/ebd183097a54/ Log: changed BACKCHAIN_SIZE to 3 * WORD diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/ppcgen/arch.py @@ -17,4 +17,4 @@ GPR_SAVE_AREA = len(NONVOLATILES) * WORD MAX_REG_PARAMS = 8 -BACKCHAIN_SIZE = 2 * WORD +BACKCHAIN_SIZE = 3 * WORD From noreply at buildbot.pypy.org Fri Nov 11 12:15:48 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 11 Nov 2011 12:15:48 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: A branch to use shards instead of just raw indexes that have to be computed Message-ID: <20111111111548.4B5268292E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49297:083d184dc8ab Date: 2011-11-11 12:10 +0100 http://bitbucket.org/pypy/pypy/changeset/083d184dc8ab/ Log: A branch to use shards instead of just raw indexes that have to be computed using expensive division diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -68,22 +68,73 @@ dtype.setitem_w(space, arr.storage, i, w_elem) return arr -class ArrayIndex(object): - """ An index into an array or view. Offset is a data offset, indexes - are respective indexes in dimensions - """ - def __init__(self, indexes, offset): - self.indexes = indexes - self.offset = offset +class ArrayIterator(object): + def __init__(self, size): + self.offset = 0 + self.size = size + + def next(self): + self.offset += 1 + + def done(self): + return self.offset >= self.size + +class ViewIterator(object): + def __init__(self, arr): + self.indices = [0] * len(arr.shape) + self.offset = arr.start + self.arr = arr + self.done = False + + @jit.unroll_safe + def next(self): + for i in range(len(self.indices)): + if self.indices[i] < self.arr.shape[i]: + self.indices[i] += 1 + self.offset += self.arr.shards[i] + break + else: + self.indices[i] = 0 + self.offset -= self.arr.backshards[i] + else: + self.done = True + + def done(self): + return self.done + +class Call2Iterator(object): + def __init__(self, left, right): + self.left = left + self.right = right + + def next(self): + self.left.next() + self.right.next() + + def done(self): + return self.left.done() + +class Call1Iterator(object): + def __init__(self, child): + self.child = child + + def next(self): + self.child.next() + + def done(self): + return self.child.done() class BaseArray(Wrappable): - _attrs_ = ["invalidates", "signature", "shape"] + _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", + "start"] - _immutable_fields_ = ['shape[*]'] + _immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]"] - def __init__(self, shape): + def __init__(self, shards, backshards, shape): self.invalidates = [] self.shape = shape + self.shards = shards + self.backshards = backshards def invalidated(self): if self.invalidates: @@ -155,8 +206,9 @@ reduce_driver = jit.JitDriver(greens=['signature'], reds = ['i', 'size', 'result', 'self', 'cur_best', 'dtype']) def loop(self, size): + xxx result = 0 - cur_best = self.eval(0) + cur_best = self.eval(self.start) i = 1 dtype = self.find_dtype() while i < size: @@ -164,6 +216,7 @@ self=self, dtype=dtype, size=size, i=i, result=result, cur_best=cur_best) + xxx new_best = getattr(dtype, op_name)(cur_best, self.eval(i)) if dtype.ne(new_best, cur_best): result = i @@ -180,6 +233,7 @@ return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) def _all(self): + xxx size = self.find_size() dtype = self.find_dtype() i = 0 @@ -193,6 +247,7 @@ return space.wrap(self._all()) def _any(self): + xxx size = self.find_size() dtype = self.find_dtype() i = 0 @@ -233,6 +288,7 @@ return self.get_concrete().descr_len(space) def descr_repr(self, space): + xxx # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, # use recursive calls to to_str() to do the work. @@ -279,10 +335,11 @@ if idx < 0 or idx >= self.shape[0]: raise OperationError(space.w_IndexError, space.wrap("index out of range")) - return idx + return self.start + idx * self.shards[0] index = [space.int_w(w_item) for w_item in space.fixedview(w_idx)] item = 0 + xxx for i in range(len(index)): v = index[i] if v < 0: @@ -327,45 +384,20 @@ return False return True - def _create_slice(self, space, w_idx): - new_sig = signature.Signature.find_sig([ - NDimSlice.signature, self.signature - ]) - if (space.isinstance_w(w_idx, space.w_int) or - space.isinstance_w(w_idx, space.w_slice)): - start, stop, step, lgt = space.decode_index4(w_idx, self.shape[0]) - if step == 0: - shape = self.shape[1:] - else: - shape = [lgt] + self.shape[1:] - chunks = [(start, stop, step, lgt)] - else: - chunks = [] - shape = self.shape[:] - for i, w_item in enumerate(space.fixedview(w_idx)): - start, stop, step, lgt = space.decode_index4(w_item, - self.shape[i]) - chunks.append((start, stop, step, lgt)) - if step == 0: - shape[i] = -1 - else: - shape[i] = lgt - shape = [i for i in shape if i != -1][:] - return NDimSlice(self, new_sig, chunks[:], shape) - def descr_getitem(self, space, w_idx): if self._single_item_result(space, w_idx): - item = self._index_of_single_item(space, w_idx) - return self.get_concrete().eval(item).wrap(space) + concrete = self.get_concrete() + item = concrete._index_of_single_item(space, w_idx) + return concrete.eval(item).wrap(space) return space.wrap(self._create_slice(space, w_idx)) def descr_setitem(self, space, w_idx, w_value): self.invalidated() + concrete = self.get_concrete() if self._single_item_result(space, w_idx): - item = self._index_of_single_item(space, w_idx) - self.get_concrete().setitem_w(space, item, w_value) + item = concrete._index_of_single_item(space, w_idx) + concrete.setitem_w(space, item, w_value) return - concrete = self.get_concrete() if isinstance(w_value, BaseArray): # for now we just copy if setting part of an array from # part of itself. can be improved. @@ -378,6 +410,42 @@ view = self._create_slice(space, w_idx) view.setslice(space, w_value) + def _create_slice(self, space, w_idx): + new_sig = signature.Signature.find_sig([ + NDimSlice.signature, self.signature + ]) + if (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + start, stop, step, lgt = space.decode_index4(w_idx, self.shape[0]) + if step == 0: + shape = self.shape[1:] + shards = self.shards[1:] + backshards = self.backshards[1:] + else: + shape = [lgt] + self.shape[1:] + shards = [self.shards[0] * step] + self.shards[1:] + backshards = [lgt * self.shards[0] * step] + self.backshards[1:] + else: + shape = [] + shards = [] + backshards = [] + start = -1 + i = 0 + for i, w_item in enumerate(space.fixedview(w_idx)): + start_, stop, step, lgt = space.decode_index4(w_item, + self.shape[i]) + if step != 0: + if start == -1: + start = start_ * self.shards[i] + self.start + shape.append(lgt) + shards.append(self.shards[i] * step) + backshards.append(self.shards[i] * lgt * step) + # add a reminder + shape += self.shape[i + 1:] + shards += self.shards[i + 1:] + backshards += self.backshards[i + 1:] + return NDimSlice(self, new_sig, start, end, shards, backshards, shape) + def descr_mean(self, space): return space.wrap(space.float_w(self.descr_sum(space))/self.find_size()) @@ -388,7 +456,7 @@ "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()")) except ValueError: pass - return space.wrap(space.is_true(self.get_concrete().eval(0).wrap(space))) + return space.wrap(space.is_true(self.get_concrete().eval(self.start).wrap(space))) def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): @@ -416,7 +484,7 @@ _attrs_ = ["dtype", "value", "shape"] def __init__(self, dtype, value): - BaseArray.__init__(self, []) + BaseArray.__init__(self, None, None, []) self.dtype = dtype self.value = value @@ -429,7 +497,7 @@ def find_dtype(self): return self.dtype - def eval(self, i): + def eval(self, offset): return self.value class VirtualArray(BaseArray): @@ -437,7 +505,7 @@ Class for representing virtual arrays, such as binary ops or ufuncs """ def __init__(self, signature, shape, res_dtype): - BaseArray.__init__(self, shape) + BaseArray.__init__(self, None, None, shape) self.forced_result = None self.signature = signature self.res_dtype = res_dtype @@ -451,12 +519,13 @@ signature = self.signature result_size = self.find_size() result = NDimArray(result_size, self.shape, self.find_dtype()) - while i < result_size: + i = self.start_iter() + while not i.done(): numpy_driver.jit_merge_point(signature=signature, result_size=result_size, i=i, self=self, result=result) - result.dtype.setitem(result.storage, i, self.eval(i)) - i += 1 + result.dtype.setitem(result.storage, i.offset, self.eval(i.offset)) + i = self.next_index(i) return result def force_if_needed(self): @@ -468,10 +537,10 @@ self.force_if_needed() return self.forced_result - def eval(self, i): + def eval(self, offset): if self.forced_result is not None: - return self.forced_result.eval(i) - return self._eval(i) + return self.forced_result.eval(offset) + return self._eval(offset) def setitem(self, item, value): return self.get_concrete().setitem(item, value) @@ -545,13 +614,10 @@ Class for representing views of arrays, they will reflect changes of parent arrays. Example: slices """ - def __init__(self, parent, signature, shape): - BaseArray.__init__(self, shape) + def __init__(self, parent, signature, shards, backshards, shape): + BaseArray.__init__(self, shards, backshards, shape) self.signature = signature self.parent = parent - self.size = 1 - for elem in shape: - self.size *= elem self.invalidates = parent.invalidates def get_concrete(self): @@ -561,12 +627,12 @@ self.parent.get_concrete() return self - def eval(self, i): - return self.parent.eval(self.calc_index(i)) + def eval(self, offset): + return self.parent.eval(offset) @unwrap_spec(item=int) def setitem_w(self, space, item, w_value): - return self.parent.setitem_w(space, self.calc_index(item), w_value) + return self.parent.setitem_w(space, item, w_value) def setitem(self, item, value): # This is currently not possible to be called from anywhere. @@ -577,17 +643,18 @@ return space.wrap(self.shape[0]) return space.wrap(1) - def calc_index(self, item): - raise NotImplementedError - class NDimSlice(ViewArray): signature = signature.BaseSignature() - _immutable_fields_ = ['shape[*]', 'chunks[*]'] + _immutable_fields_ = ['shape[*]', 'shards[*]', 'backshards[*]', 'start'] - def __init__(self, parent, signature, chunks, shape): - ViewArray.__init__(self, parent, signature, shape) - self.chunks = chunks + def __init__(self, parent, signature, start, end, shards, backshards, + shape): + if isinstance(parent, NDimSlice): + parent = parent.parent + ViewArray.__init__(self, parent, signature, shards, backshards, shape) + self.start = start + self.end = end def get_root_storage(self): return self.parent.get_concrete().get_root_storage() @@ -606,6 +673,7 @@ self._sliceloop(w_value) def _sliceloop(self, source): + xxx i = 0 while i < self.size: slice_driver.jit_merge_point(signature=source.signature, i=i, @@ -614,14 +682,12 @@ i += 1 def setitem(self, item, value): + xxx self.parent.setitem(self.calc_index(item), value) def get_root_shape(self): return self.parent.get_root_shape() - # XXX we might want to provide a custom finder of where we look for - # a particular item, right now we'll do the calculations again - @jit.unroll_safe def calc_index(self, item): index = [] @@ -656,6 +722,7 @@ return item def to_str(self, comma, indent=' '): + xxx ret = StringBuilder() dtype = self.find_dtype() ndims = len(self.shape) @@ -698,15 +765,24 @@ for j in range(self.shape[0])])) ret.append(']') else: - ret.append(dtype.str_format(self.eval(0))) + ret.append(dtype.str_format(self.eval(self.start))) return ret.build() class NDimArray(BaseArray): """ A class representing contiguous array. We know that each iteration by say ufunc will increase the data index by one """ + start = 0 + def __init__(self, size, shape, dtype): - BaseArray.__init__(self, shape) + shards = [] + backshards = [] + s = 1 + for sh in shape: + shards.append(s) + backshards.append(s * (sh - 1)) + s *= sh + BaseArray.__init__(self, shards, backshards, shape) self.size = size self.dtype = dtype self.storage = dtype.malloc(size) @@ -724,8 +800,8 @@ def find_dtype(self): return self.dtype - def eval(self, i): - return self.dtype.getitem(self.storage, i) + def eval(self, offset): + return self.dtype.getitem(self.storage, offset) def descr_len(self, space): if len(self.shape): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,6 +1,55 @@ from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest +from pypy.module.micronumpy.interp_numarray import NDimArray +from pypy.module.micronumpy import signature from pypy.conftest import gettestobjspace +class MockDtype(object): + signature = signature.BaseSignature() + def malloc(self, size): + return None + +class TestNumArrayDirect(object): + def newslice(self, *args): + return self.space.newslice(*[self.space.wrap(arg) for arg in args]) + + def test_shards(self): + a = NDimArray(100, [10, 5, 3], MockDtype()) + assert a.shards == [1, 10, 50] + assert a.backshards == [9, 40, 100] + + def test_create_slice(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + s = a._create_slice(space, space.wrap(3)) + assert s.start == 3 + assert s.shards == [10, 50] + assert s.backshards == [40, 100] + s = a._create_slice(space, self.newslice(1, 9, 2)) + assert s.start == 1 + assert s.shards == [2, 10, 50] + assert s.backshards == [8, 40, 100] + s = a._create_slice(space, space.newtuple([ + self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) + assert s.start == 1 + assert s.shape == [2, 1] + assert s.shards == [3, 10] + assert s.backshards == [6, 10] + + def test_slice_of_slice(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + s = a._create_slice(space, space.wrap(5)) + s2 = s._create_slice(space, space.wrap(3)) + assert s2.shape == [3] + assert s2.shards == [50] + assert s2.parent is a + assert s2.backshards == [100] + s = a._create_slice(space, self.newslice(1, 5, 3)) + s2 = s._create_slice(space, space.newtuple([ + self.newslice(None, None, None), space.wrap(2)])) + assert s2.shape == [2, 3] + assert s2.shards == [3, 50] + assert s2.backshards == [6, 100] class AppTestNumArray(BaseNumpyAppTest): def test_type(self): @@ -53,87 +102,6 @@ assert a[0] == 1 assert a.shape == () - def test_repr(self): - from numpy import array, zeros - a = array(range(5), float) - assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" - a = array([], float) - assert repr(a) == "array([], dtype=float64)" - a = zeros(1001) - assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" - a = array([], long) - assert repr(a) == "array([], dtype=int64)" - a = array([True, False, True, False], "?") - assert repr(a) == "array([True, False, True, False], dtype=bool)" - a = zeros((3,4)) - assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]])''' - a = zeros((2,3,4)) - assert repr(a) == '''array([[[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]], - - [[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]]])''' - - def test_repr_slice(self): - from numpy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert repr(b) == "array([1.0, 3.0])" - a = zeros(2002) - b = a[::2] - assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array((range(5),range(5,10)), dtype="int16") - b=a[1,2:] - assert repr(b) == "array([7, 8, 9], dtype=int16)" - #This is the way cpython numpy does it - an empty slice prints its shape - b=a[2:1,] - assert repr(b) == "array([], shape=(0, 5), dtype=int16)" - - def test_str(self): - from numpy import array, zeros - a = array(range(5), float) - assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" - assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" - a = zeros(1001) - assert str(a) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - - a = array(range(5), dtype=long) - assert str(a) == "[0 1 2 3 4]" - a = array([True, False, True, False], dtype="?") - assert str(a) == "[True False True False]" - - a = array(range(5), dtype="int8") - assert str(a) == "[0 1 2 3 4]" - - a = array(range(5), dtype="int16") - assert str(a) == "[0 1 2 3 4]" - - a = array((range(5),range(5,10)), dtype="int16") - assert str(a) == "[[0 1 2 3 4],\n [5 6 7 8 9]]" - - a = array(3,dtype=int) - assert str(a) == "3" - - def test_str_slice(self): - from numpy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert str(b) == "[1.0 3.0]" - a = zeros(2002) - b = a[::2] - assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - a = array((range(5),range(5,10)), dtype="int16") - b=a[1,2:] - assert str(b) == "[7 8 9]" - b=a[2:1,] - assert str(b) == "[]" - def test_getitem(self): from numpy import array a = array(range(5)) @@ -762,3 +730,85 @@ for i in range(4): assert a[i] == i + 1 raises(ValueError, fromstring, "abc") + +class AppTestRepr(BaseNumpyAppTest): + def test_repr(self): + from numpy import array, zeros + a = array(range(5), float) + assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" + a = array([], float) + assert repr(a) == "array([], dtype=float64)" + a = zeros(1001) + assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" + a = array(range(5), long) + assert repr(a) == "array([0, 1, 2, 3, 4])" + a = array([], long) + assert repr(a) == "array([], dtype=int64)" + a = array([True, False, True, False], "?") + assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros((3,4)) + assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0]])''' + a = zeros((2,3,4)) + assert repr(a) == '''array([[[0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0]], + + [[0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 0.0, 0.0]]])''' + + def test_repr_slice(self): + from numpy import array, zeros + a = array(range(5), float) + b = a[1::2] + assert repr(b) == "array([1.0, 3.0])" + a = zeros(2002) + b = a[::2] + assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" + a = array((range(5),range(5,10)), dtype="int16") + b=a[1,2:] + assert repr(b) == "array([7, 8, 9], dtype=int16)" + #This is the way cpython numpy does it - an empty slice prints its shape + b=a[2:1,] + assert repr(b) == "array([], shape=(0, 5), dtype=int16)" + + def test_str(self): + from numpy import array, zeros + a = array(range(5), float) + assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" + assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" + a = zeros(1001) + assert str(a) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" + + a = array(range(5), dtype=long) + assert str(a) == "[0 1 2 3 4]" + a = array([True, False, True, False], dtype="?") + assert str(a) == "[True False True False]" + + a = array(range(5), dtype="int8") + assert str(a) == "[0 1 2 3 4]" + + a = array(range(5), dtype="int16") + assert str(a) == "[0 1 2 3 4]" + + a = array((range(5),range(5,10)), dtype="int16") + assert str(a) == "[[0 1 2 3 4],\n [5 6 7 8 9]]" + + a = array(3,dtype=int) + assert str(a) == "3" + + def test_str_slice(self): + from numpy import array, zeros + a = array(range(5), float) + b = a[1::2] + assert str(b) == "[1.0 3.0]" + a = zeros(2002) + b = a[::2] + assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" + a = array((range(5),range(5,10)), dtype="int16") + b=a[1,2:] + assert str(b) == "[7 8 9]" + b=a[2:1,] + assert str(b) == "[]" From noreply at buildbot.pypy.org Fri Nov 11 12:47:11 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 12:47:11 +0100 (CET) Subject: [pypy-commit] pypy default: Kill test now that ovfcheck_lshift() is gone. Message-ID: <20111111114711.6D3678292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49298:7fc3038601d8 Date: 2011-11-11 12:46 +0100 http://bitbucket.org/pypy/pypy/changeset/7fc3038601d8/ Log: Kill test now that ovfcheck_lshift() is gone. diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 11 13:11:42 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 13:11:42 +0100 (CET) Subject: [pypy-commit] pypy default: Uh. Fix this function. No clue how it could have worked: it Message-ID: <20111111121142.40D888292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49299:f87b5df4955d Date: 2011-11-11 13:11 +0100 http://bitbucket.org/pypy/pypy/changeset/f87b5df4955d/ Log: Uh. Fix this function. No clue how it could have worked: it returned "unsigned=False" for all non-Number types, like Char. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (isinstance(tp, lltype.Ptr) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): From noreply at buildbot.pypy.org Fri Nov 11 13:26:38 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 13:26:38 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Different backchain sizes depending on the architecture Message-ID: <20111111122638.3071C8292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49300:0cc8549b1466 Date: 2011-11-11 13:16 +0100 http://bitbucket.org/pypy/pypy/changeset/0cc8549b1466/ Log: Different backchain sizes depending on the architecture diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/ppcgen/arch.py @@ -6,15 +6,14 @@ if sys.maxint == (2**31 - 1): WORD = 4 IS_PPC_32 = True - IS_PPC_64 = False + BACKCHAIN_SIZE = 2 * WORD else: WORD = 8 IS_PPC_32 = False - IS_PPC_64 = True + BACKCHAIN_SIZE = 3 * WORD +IS_PPC_64 = not IS_PPC_32 MY_COPY_OF_REGS = 0 GPR_SAVE_AREA = len(NONVOLATILES) * WORD MAX_REG_PARAMS = 8 - -BACKCHAIN_SIZE = 3 * WORD From noreply at buildbot.pypy.org Fri Nov 11 13:26:39 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 13:26:39 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Added some comments in ppcgen/ppc_assembler.py Message-ID: <20111111122639.5CC208292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49301:14e77d72c41c Date: 2011-11-11 13:20 +0100 http://bitbucket.org/pypy/pypy/changeset/14e77d72c41c/ Log: Added some comments in ppcgen/ppc_assembler.py diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -74,6 +74,41 @@ EMPTY_LOC = '\xFE' END_OF_LOCS = '\xFF' + + ''' + PyPy's PPC stack frame layout + ============================= + + . . + . . + ---------------------------- + | BACKCHAIN | OLD FRAME + ------------------------------------------------------ + | | PyPy Frame + | GPR SAVE AREA | + | | + ---------------------------- + | FORCE INDEX | + ---------------------------- <- Spilling Pointer (SPP) + | | + | SPILLING AREA | + | | + ---------------------------- <- Stack Pointer (SP) + + The size of the GPR save area and the force index area fixed: + + GPR SAVE AREA: len(NONVOLATILES) * WORD + FORCE INDEX : WORD + + + The size of the spilling area is known when the trace operations + have been generated. + ''' + + GPR_SAVE_AREA_AND_FORCE_INDEX = GPR_SAVE_AREA + WORD + # ^^^^^^^^^^^^^ ^^^^ + # save GRP regs force index + def __init__(self, cpu, failargs_limit=1000): self.cpu = cpu self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) @@ -124,6 +159,8 @@ clt.asmmemmgr = [] return clt.asmmemmgr_blocks + # The code generated here allocates a new stackframe + # and is the first machine code to be executed. def _make_prologue(self, target_pos, frame_depth): if IS_PPC_32: # save it in previous frame (Backchain) @@ -158,10 +195,14 @@ @rgc.no_collect def failure_recovery_func(mem_loc, stack_pointer, spilling_pointer): - """mem_loc is a structure in memory describing where the values for - the failargs are stored. - frame loc is the address of the frame pointer for the frame to be - decoded frame """ + """ + mem_loc is a structure in memory describing where the values for + the failargs are stored. + + stack_pointer is the address of top of the stack. + + spilling_pointer is the address of the FORCE_INDEX. + """ return self.decode_registers_and_descr(mem_loc, stack_pointer, spilling_pointer) self.failure_recovery_func = failure_recovery_func @@ -289,9 +330,17 @@ return mc.materialize(self.cpu.asmmemmgr, [], self.cpu.gc_ll_descr.gcrootmap) + # The code generated here serves as an exit stub from + # the executed machine code. + # It is generated only once when the backend is initialized. + # + # The following actions are performed: + # - The fail boxes are filled with the computed values + # (failure_recovery_func) + # - The nonvolatile registers are restored + # - jump back to the calling code def _gen_exit_path(self): mc = PPCBuilder() - # # compute offset to new SP size = WORD * (len(r.MANAGED_REGS)) + BACKCHAIN_SIZE # set SP @@ -316,7 +365,6 @@ r2_value = descr[1] r11_value = descr[2] - # # load parameters into parameter registers if IS_PPC_32: mc.lwz(r.r3.value, r.SPP.value, 0) # address of state encoding @@ -328,6 +376,7 @@ # load address of decoding function into r0 mc.load_imm(r.r0, addr) if IS_PPC_64: + # load TOC pointer and environment pointer mc.load_imm(r.r2, r2_value) mc.load_imm(r.r11, r11_value) # ... and branch there @@ -340,12 +389,19 @@ mc.mr(r.r5.value, r.SPP.value) self._restore_nonvolatiles(mc, r.r5) # load old backchain into r4 + offset_to_old_backchain = self.GPR_SAVE_AREA_AND_FORCE_INDEX + WORD if IS_PPC_32: mc.lwz(r.r4.value, r.r5.value, GPR_SAVE_AREA + 2 * WORD) else: mc.ld(r.r4.value, r.r5.value, GPR_SAVE_AREA + 2 * WORD) mc.mtlr(r.r4.value) # restore LR mc.addi(r.SP.value, r.r5.value, GPR_SAVE_AREA + WORD) # restore old SP + + # From SPP, we have a constant offset of GPR_SAVE_AREA_AND_FORCE_INDEX + # to the old backchain. We use the SPP to re-establish the old backchain + # because this exit stub is generated before we know how much space + # the entire frame will need. + mc.addi(r.SP.value, r.r5.value, self.GPR_SAVE_AREA_AND_FORCE_INDEX) # restore old SP mc.blr() mc.prepare_insts_blocks() return mc.materialize(self.cpu.asmmemmgr, [], @@ -361,6 +417,7 @@ else: mc.std(reg.value, r.SP.value, i * WORD + BACKCHAIN_SIZE) + # Load parameters from fail args into locations (stack or registers) def gen_bootstrap_code(self, nonfloatlocs, inputargs): for i in range(len(nonfloatlocs)): loc = nonfloatlocs[i] From noreply at buildbot.pypy.org Fri Nov 11 13:26:40 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 13:26:40 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use a more clear way to compute offsets Message-ID: <20111111122640.86F048292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49302:ca6424b81d89 Date: 2011-11-11 13:26 +0100 http://bitbucket.org/pypy/pypy/changeset/ca6424b81d89/ Log: Use a more clear way to compute offsets diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -174,9 +174,10 @@ self.mc.stdu(r.SP.value, r.SP.value, -frame_depth) self.mc.mflr(r.r0.value) self.mc.std(r.r0.value, r.SP.value, frame_depth + WORD) - offset = GPR_SAVE_AREA + WORD + # compute spilling pointer (SPP) - self.mc.addi(r.SPP.value, r.SP.value, frame_depth - offset) + self.mc.addi(r.SPP.value, r.SP.value, frame_depth + - self.GPR_SAVE_AREA_AND_FORCE_INDEX) self._save_nonvolatiles() # save r31, use r30 as scratch register # this is safe because r30 has been saved already @@ -391,11 +392,10 @@ # load old backchain into r4 offset_to_old_backchain = self.GPR_SAVE_AREA_AND_FORCE_INDEX + WORD if IS_PPC_32: - mc.lwz(r.r4.value, r.r5.value, GPR_SAVE_AREA + 2 * WORD) + mc.lwz(r.r4.value, r.r5.value, offset_to_old_backchain) else: - mc.ld(r.r4.value, r.r5.value, GPR_SAVE_AREA + 2 * WORD) + mc.ld(r.r4.value, r.r5.value, offset_to_old_backchain) mc.mtlr(r.r4.value) # restore LR - mc.addi(r.SP.value, r.r5.value, GPR_SAVE_AREA + WORD) # restore old SP # From SPP, we have a constant offset of GPR_SAVE_AREA_AND_FORCE_INDEX # to the old backchain. We use the SPP to re-establish the old backchain From noreply at buildbot.pypy.org Fri Nov 11 13:52:41 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 13:52:41 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111111125241.1CA0F8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49303:88acc7aafd1e Date: 2011-11-11 13:52 +0100 http://bitbucket.org/pypy/pypy/changeset/88acc7aafd1e/ Log: Fix. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,7 +862,7 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if (isinstance(tp, lltype.Ptr) or + if (not isinstance(tp, lltype.Primitive) or tp in (FLOAT, DOUBLE) or cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False From noreply at buildbot.pypy.org Fri Nov 11 14:00:14 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 14:00:14 +0100 (CET) Subject: [pypy-commit] pypy default: Obscure: mark some builtin modules (but not others) as taking Message-ID: <20111111130014.66D058292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49304:6f2534aea5ca Date: 2011-11-11 13:59 +0100 http://bitbucket.org/pypy/pypy/changeset/6f2534aea5ca/ Log: Obscure: mark some builtin modules (but not others) as taking precedence in an "import xyz" statement if there is a file "xyz.py". The exact list is copied from the default installation of CPython 2.7 on a recent Ubuntu Linux. diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -13,6 +13,7 @@ applevel_name = None expose__file__attribute = True + cannot_override_in_import_statements = False # The following attribute is None as long as the module has not been # imported yet, and when it has been, it is mod.__dict__.items() just diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -7,6 +7,7 @@ class Module(MixedModule): """Built-in functions, exceptions, and other objects.""" + cannot_override_in_import_statements = True expose__file__attribute = False appleveldefs = { diff --git a/pypy/module/_ast/__init__.py b/pypy/module/_ast/__init__.py --- a/pypy/module/_ast/__init__.py +++ b/pypy/module/_ast/__init__.py @@ -3,6 +3,7 @@ class Module(MixedModule): + cannot_override_in_import_statements = True interpleveldefs = { "PyCF_ONLY_AST" : "space.wrap(%s)" % consts.PyCF_ONLY_AST, diff --git a/pypy/module/_codecs/__init__.py b/pypy/module/_codecs/__init__.py --- a/pypy/module/_codecs/__init__.py +++ b/pypy/module/_codecs/__init__.py @@ -37,6 +37,7 @@ Copyright (c) Corporation for National Research Initiatives. """ + cannot_override_in_import_statements = True appleveldefs = {} diff --git a/pypy/module/_sre/__init__.py b/pypy/module/_sre/__init__.py --- a/pypy/module/_sre/__init__.py +++ b/pypy/module/_sre/__init__.py @@ -1,6 +1,7 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True appleveldefs = { } diff --git a/pypy/module/_warnings/__init__.py b/pypy/module/_warnings/__init__.py --- a/pypy/module/_warnings/__init__.py +++ b/pypy/module/_warnings/__init__.py @@ -3,6 +3,7 @@ class Module(MixedModule): """provides basic warning filtering support. It is a helper module to speed up interpreter start-up.""" + cannot_override_in_import_statements = True interpleveldefs = { 'warn' : 'interp_warnings.warn', diff --git a/pypy/module/_weakref/__init__.py b/pypy/module/_weakref/__init__.py --- a/pypy/module/_weakref/__init__.py +++ b/pypy/module/_weakref/__init__.py @@ -1,6 +1,7 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True appleveldefs = { } interpleveldefs = { diff --git a/pypy/module/errno/__init__.py b/pypy/module/errno/__init__.py --- a/pypy/module/errno/__init__.py +++ b/pypy/module/errno/__init__.py @@ -16,6 +16,7 @@ To map error codes to error messages, use the function os.strerror(), e.g. os.strerror(2) could return 'No such file or directory'.""" + cannot_override_in_import_statements = True appleveldefs = {} interpleveldefs = {"errorcode": "interp_errno.get_errorcode(space)"} diff --git a/pypy/module/exceptions/__init__.py b/pypy/module/exceptions/__init__.py --- a/pypy/module/exceptions/__init__.py +++ b/pypy/module/exceptions/__init__.py @@ -2,6 +2,8 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True + appleveldefs = {} interpleveldefs = { diff --git a/pypy/module/gc/__init__.py b/pypy/module/gc/__init__.py --- a/pypy/module/gc/__init__.py +++ b/pypy/module/gc/__init__.py @@ -1,6 +1,7 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True appleveldefs = { 'enable': 'app_gc.enable', 'disable': 'app_gc.disable', diff --git a/pypy/module/imp/__init__.py b/pypy/module/imp/__init__.py --- a/pypy/module/imp/__init__.py +++ b/pypy/module/imp/__init__.py @@ -5,6 +5,7 @@ This module provides the components needed to build your own __import__ function. """ + cannot_override_in_import_statements = True interpleveldefs = { 'PY_SOURCE': 'space.wrap(importing.PY_SOURCE)', 'PY_COMPILED': 'space.wrap(importing.PY_COMPILED)', diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -5,6 +5,7 @@ import sys, os, stat from pypy.interpreter.module import Module +from pypy.interpreter.mixedmodule import MixedModule from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, generic_new_descr from pypy.interpreter.error import OperationError, operationerrfmt @@ -483,10 +484,19 @@ # XXX Check for frozen modules? # when w_path is a string + default_result = None + if w_path is None: # check the builtin modules - if modulename in space.builtin_modules: - return FindInfo(C_BUILTIN, modulename, None) + w_mod = space.builtin_modules.get(modulename, None) + if w_mod is not None: + default_result = FindInfo(C_BUILTIN, modulename, None) + mod = space.interpclass_w(w_mod) + if (isinstance(mod, MixedModule) and + mod.cannot_override_in_import_statements): + return default_result + #else: + # continue looking and only return it if no xxx.py found w_path = space.sys.get('path') # XXX check frozen modules? @@ -530,7 +540,7 @@ # Out of file descriptors. # not found - return None + return default_result def _prepare_module(space, w_mod, filename, pkgdir): w = space.wrap diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -918,6 +918,34 @@ finally: stream.close() + def test_cannot_hide_builtin_exceptions(self): + import sys, os + filename = os.path.join(sys.path[0], 'exceptions.py') + f = open(filename, 'w') + f.close() + try: + import exceptions + assert hasattr(exceptions, 'NotImplementedError') + finally: + os.unlink(filename) + + def test_can_hide_builtin_parser(self): + import sys, os + filename = os.path.join(sys.path[0], 'parser.py') + f = open(filename, 'w') + f.write('I_have_been_hidden = 42\n') + f.close() + old = sys.modules.pop('parser', None) + try: + import parser + assert hasattr(parser, 'I_have_been_hidden') + finally: + os.unlink(filename) + if old is not None: + sys.modules['parser'] = old + else: + del sys.modules['parser'] + def test_PYTHONPATH_takes_precedence(space): if sys.platform == "win32": diff --git a/pypy/module/marshal/__init__.py b/pypy/module/marshal/__init__.py --- a/pypy/module/marshal/__init__.py +++ b/pypy/module/marshal/__init__.py @@ -5,6 +5,7 @@ """ This module implements marshal at interpreter level. """ + cannot_override_in_import_statements = True appleveldefs = { } diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -30,6 +30,7 @@ disguised Unix interface). Refer to the library manual and corresponding Unix manual entries for more information on calls.""" + cannot_override_in_import_statements = True applevel_name = os.name appleveldefs = { diff --git a/pypy/module/pwd/__init__.py b/pypy/module/pwd/__init__.py --- a/pypy/module/pwd/__init__.py +++ b/pypy/module/pwd/__init__.py @@ -11,6 +11,7 @@ The uid and gid items are integers, all others are strings. An exception is raised if the entry asked for cannot be found. """ + cannot_override_in_import_statements = True interpleveldefs = { 'getpwuid': 'interp_pwd.getpwuid', diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -4,6 +4,7 @@ import signal as cpy_signal class Module(MixedModule): + cannot_override_in_import_statements = True interpleveldefs = { 'signal': 'interp_signal.signal', 'getsignal': 'interp_signal.getsignal', diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -8,6 +8,7 @@ class Module(MixedModule): """Sys Builtin Module. """ _immutable_fields_ = ["defaultencoding?"] + cannot_override_in_import_statements = True def __init__(self, space, w_name): """NOT_RPYTHON""" # because parent __init__ isn't diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -3,6 +3,8 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True + appleveldefs = { } diff --git a/pypy/module/zipimport/__init__.py b/pypy/module/zipimport/__init__.py --- a/pypy/module/zipimport/__init__.py +++ b/pypy/module/zipimport/__init__.py @@ -5,6 +5,7 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): + cannot_override_in_import_statements = True interpleveldefs = { 'zipimporter':'interp_zipimport.W_ZipImporter', From noreply at buildbot.pypy.org Fri Nov 11 14:03:32 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 11 Nov 2011 14:03:32 +0100 (CET) Subject: [pypy-commit] buildbot default: Add PPC32 Message-ID: <20111111130332.15A8B8292E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r597:e9fb7f98db74 Date: 2011-11-11 14:03 +0100 http://bitbucket.org/pypy/buildbot/changeset/e9fb7f98db74/ Log: Add PPC32 diff --git a/bot2/pypybuildbot/master.py b/bot2/pypybuildbot/master.py --- a/bot2/pypybuildbot/master.py +++ b/bot2/pypybuildbot/master.py @@ -133,6 +133,7 @@ LINUX32 = "own-linux-x86-32" LINUX64 = "own-linux-x86-64" MACOSX32 = "own-macosx-x86-32" +PPCLINUX32 = "own-linux-ppc-32" WIN32 = "own-win-x86-32" WIN64 = "own-win-x86-64" APPLVLLINUX32 = "pypy-c-app-level-linux-x86-32" @@ -301,6 +302,12 @@ "factory": pypyOwnTestFactory, "category": 'mac32' }, + {"name": PPCLINUX32, + "slavenames": ["stups-ppc32"], + "builddir": PPCLINUX32, + "factory": pypyOwnTestFactory, + "category": 'linuxppc32' + }, {"name" : JITMACOSX64, "slavenames": ["macmini-mvt", "xerxes"], 'builddir' : JITMACOSX64, From noreply at buildbot.pypy.org Fri Nov 11 14:26:51 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 14:26:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Make first tests on PPC64 pass Message-ID: <20111111132651.B03448292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49305:e1cc85b1017d Date: 2011-11-11 05:26 -0800 http://bitbucket.org/pypy/pypy/changeset/e1cc85b1017d/ Log: (bivab, hager): Make first tests on PPC64 pass diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -94,6 +94,12 @@ | ord(mem[index+1]) << 16 | ord(mem[index]) << 24) +def decode64(mem, index): + high = decode32(mem, index) + index += 4 + low = decode32(mem, index) + return (r_longlong(high) << 32) | r_longlong(r_uint(low)) + def count_reg_args(args): reg_args = 0 words = 0 diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -12,7 +12,8 @@ NONVOLATILES, GPR_SAVE_AREA, BACKCHAIN_SIZE) from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, - encode32, decode32) + encode32, decode32, + decode64) import pypy.jit.backend.ppc.ppcgen.register as r import pypy.jit.backend.ppc.ppcgen.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, LoopToken, @@ -271,7 +272,10 @@ self.fail_boxes_float.setitem(fail_index, value) continue else: - value = decode32(regs, (reg - 3) * WORD) + if IS_PPC_32: + value = decode32(regs, (reg - 3) * WORD) + else: + value = decode64(regs, (reg - 3) * WORD) if group == self.INT_TYPE: self.fail_boxes_int.setitem(fail_index, value) From noreply at buildbot.pypy.org Fri Nov 11 14:33:51 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 14:33:51 +0100 (CET) Subject: [pypy-commit] pypy default: Bah. Not really motivated to write a test for this obscure case :-( Message-ID: <20111111133351.ECA808292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49306:ed83fd7b7ec1 Date: 2011-11-11 14:33 +0100 http://bitbucket.org/pypy/pypy/changeset/ed83fd7b7ec1/ Log: Bah. Not really motivated to write a test for this obscure case :-( diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) From noreply at buildbot.pypy.org Fri Nov 11 14:42:30 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 14:42:30 +0100 (CET) Subject: [pypy-commit] pypy default: Backout 6f2534aea5ca. It's more of a mess, because Message-ID: <20111111134230.D84FF8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49307:86c777384663 Date: 2011-11-11 14:42 +0100 http://bitbucket.org/pypy/pypy/changeset/86c777384663/ Log: Backout 6f2534aea5ca. It's more of a mess, because e.g. we have lib_pypy/struct.py, but we still want the builtin module if we have it... diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -13,7 +13,6 @@ applevel_name = None expose__file__attribute = True - cannot_override_in_import_statements = False # The following attribute is None as long as the module has not been # imported yet, and when it has been, it is mod.__dict__.items() just diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -7,7 +7,6 @@ class Module(MixedModule): """Built-in functions, exceptions, and other objects.""" - cannot_override_in_import_statements = True expose__file__attribute = False appleveldefs = { diff --git a/pypy/module/_ast/__init__.py b/pypy/module/_ast/__init__.py --- a/pypy/module/_ast/__init__.py +++ b/pypy/module/_ast/__init__.py @@ -3,7 +3,6 @@ class Module(MixedModule): - cannot_override_in_import_statements = True interpleveldefs = { "PyCF_ONLY_AST" : "space.wrap(%s)" % consts.PyCF_ONLY_AST, diff --git a/pypy/module/_codecs/__init__.py b/pypy/module/_codecs/__init__.py --- a/pypy/module/_codecs/__init__.py +++ b/pypy/module/_codecs/__init__.py @@ -37,7 +37,6 @@ Copyright (c) Corporation for National Research Initiatives. """ - cannot_override_in_import_statements = True appleveldefs = {} diff --git a/pypy/module/_sre/__init__.py b/pypy/module/_sre/__init__.py --- a/pypy/module/_sre/__init__.py +++ b/pypy/module/_sre/__init__.py @@ -1,7 +1,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True appleveldefs = { } diff --git a/pypy/module/_warnings/__init__.py b/pypy/module/_warnings/__init__.py --- a/pypy/module/_warnings/__init__.py +++ b/pypy/module/_warnings/__init__.py @@ -3,7 +3,6 @@ class Module(MixedModule): """provides basic warning filtering support. It is a helper module to speed up interpreter start-up.""" - cannot_override_in_import_statements = True interpleveldefs = { 'warn' : 'interp_warnings.warn', diff --git a/pypy/module/_weakref/__init__.py b/pypy/module/_weakref/__init__.py --- a/pypy/module/_weakref/__init__.py +++ b/pypy/module/_weakref/__init__.py @@ -1,7 +1,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True appleveldefs = { } interpleveldefs = { diff --git a/pypy/module/errno/__init__.py b/pypy/module/errno/__init__.py --- a/pypy/module/errno/__init__.py +++ b/pypy/module/errno/__init__.py @@ -16,7 +16,6 @@ To map error codes to error messages, use the function os.strerror(), e.g. os.strerror(2) could return 'No such file or directory'.""" - cannot_override_in_import_statements = True appleveldefs = {} interpleveldefs = {"errorcode": "interp_errno.get_errorcode(space)"} diff --git a/pypy/module/exceptions/__init__.py b/pypy/module/exceptions/__init__.py --- a/pypy/module/exceptions/__init__.py +++ b/pypy/module/exceptions/__init__.py @@ -2,8 +2,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True - appleveldefs = {} interpleveldefs = { diff --git a/pypy/module/gc/__init__.py b/pypy/module/gc/__init__.py --- a/pypy/module/gc/__init__.py +++ b/pypy/module/gc/__init__.py @@ -1,7 +1,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True appleveldefs = { 'enable': 'app_gc.enable', 'disable': 'app_gc.disable', diff --git a/pypy/module/imp/__init__.py b/pypy/module/imp/__init__.py --- a/pypy/module/imp/__init__.py +++ b/pypy/module/imp/__init__.py @@ -5,7 +5,6 @@ This module provides the components needed to build your own __import__ function. """ - cannot_override_in_import_statements = True interpleveldefs = { 'PY_SOURCE': 'space.wrap(importing.PY_SOURCE)', 'PY_COMPILED': 'space.wrap(importing.PY_COMPILED)', diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -5,7 +5,6 @@ import sys, os, stat from pypy.interpreter.module import Module -from pypy.interpreter.mixedmodule import MixedModule from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, generic_new_descr from pypy.interpreter.error import OperationError, operationerrfmt @@ -484,19 +483,10 @@ # XXX Check for frozen modules? # when w_path is a string - default_result = None - if w_path is None: # check the builtin modules - w_mod = space.builtin_modules.get(modulename, None) - if w_mod is not None: - default_result = FindInfo(C_BUILTIN, modulename, None) - mod = space.interpclass_w(w_mod) - if (isinstance(mod, MixedModule) and - mod.cannot_override_in_import_statements): - return default_result - #else: - # continue looking and only return it if no xxx.py found + if modulename in space.builtin_modules: + return FindInfo(C_BUILTIN, modulename, None) w_path = space.sys.get('path') # XXX check frozen modules? @@ -540,7 +530,7 @@ # Out of file descriptors. # not found - return default_result + return None def _prepare_module(space, w_mod, filename, pkgdir): w = space.wrap diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -918,34 +918,6 @@ finally: stream.close() - def test_cannot_hide_builtin_exceptions(self): - import sys, os - filename = os.path.join(sys.path[0], 'exceptions.py') - f = open(filename, 'w') - f.close() - try: - import exceptions - assert hasattr(exceptions, 'NotImplementedError') - finally: - os.unlink(filename) - - def test_can_hide_builtin_parser(self): - import sys, os - filename = os.path.join(sys.path[0], 'parser.py') - f = open(filename, 'w') - f.write('I_have_been_hidden = 42\n') - f.close() - old = sys.modules.pop('parser', None) - try: - import parser - assert hasattr(parser, 'I_have_been_hidden') - finally: - os.unlink(filename) - if old is not None: - sys.modules['parser'] = old - else: - del sys.modules['parser'] - def test_PYTHONPATH_takes_precedence(space): if sys.platform == "win32": diff --git a/pypy/module/marshal/__init__.py b/pypy/module/marshal/__init__.py --- a/pypy/module/marshal/__init__.py +++ b/pypy/module/marshal/__init__.py @@ -5,7 +5,6 @@ """ This module implements marshal at interpreter level. """ - cannot_override_in_import_statements = True appleveldefs = { } diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -30,7 +30,6 @@ disguised Unix interface). Refer to the library manual and corresponding Unix manual entries for more information on calls.""" - cannot_override_in_import_statements = True applevel_name = os.name appleveldefs = { diff --git a/pypy/module/pwd/__init__.py b/pypy/module/pwd/__init__.py --- a/pypy/module/pwd/__init__.py +++ b/pypy/module/pwd/__init__.py @@ -11,7 +11,6 @@ The uid and gid items are integers, all others are strings. An exception is raised if the entry asked for cannot be found. """ - cannot_override_in_import_statements = True interpleveldefs = { 'getpwuid': 'interp_pwd.getpwuid', diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -4,7 +4,6 @@ import signal as cpy_signal class Module(MixedModule): - cannot_override_in_import_statements = True interpleveldefs = { 'signal': 'interp_signal.signal', 'getsignal': 'interp_signal.getsignal', diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -8,7 +8,6 @@ class Module(MixedModule): """Sys Builtin Module. """ _immutable_fields_ = ["defaultencoding?"] - cannot_override_in_import_statements = True def __init__(self, space, w_name): """NOT_RPYTHON""" # because parent __init__ isn't diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -3,8 +3,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True - appleveldefs = { } diff --git a/pypy/module/zipimport/__init__.py b/pypy/module/zipimport/__init__.py --- a/pypy/module/zipimport/__init__.py +++ b/pypy/module/zipimport/__init__.py @@ -5,7 +5,6 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - cannot_override_in_import_statements = True interpleveldefs = { 'zipimporter':'interp_zipimport.W_ZipImporter', From noreply at buildbot.pypy.org Fri Nov 11 14:57:30 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 14:57:30 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Fixed bug in decode64 Message-ID: <20111111135730.E052E8292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49308:622487af296f Date: 2011-11-11 05:56 -0800 http://bitbucket.org/pypy/pypy/changeset/622487af296f/ Log: (bivab, hager): Fixed bug in decode64 diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -2,6 +2,7 @@ from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask from pypy.jit.backend.ppc.ppcgen.arch import MAX_REG_PARAMS, IS_PPC_32 from pypy.jit.metainterp.history import FLOAT +from pypy.rlib.unroll import unrolling_iterable def gen_emit_cmp_op(condition, signed=True): def f(self, op, arglocs, regalloc): @@ -95,10 +96,10 @@ | ord(mem[index]) << 24) def decode64(mem, index): - high = decode32(mem, index) - index += 4 - low = decode32(mem, index) - return (r_longlong(high) << 32) | r_longlong(r_uint(low)) + value = 0 + for x in unrolling_iterable(range(8)): + value |= (ord(mem[index + x]) << (56 - x * 8)) + return intmask(value) def count_reg_args(args): reg_args = 0 From noreply at buildbot.pypy.org Fri Nov 11 15:41:36 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 15:41:36 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Code refactorings and debugging in int operations Message-ID: <20111111144136.4ECF88292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49309:ce80006b9449 Date: 2011-11-11 06:41 -0800 http://bitbucket.org/pypy/pypy/changeset/ce80006b9449/ Log: (bivab, hager): Code refactorings and debugging in int operations diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -1018,6 +1018,39 @@ def copy_to_raw_memory(self, addr): self._copy_to_raw_memory(addr) + def cmp_op(self, block, a, b, imm=False, signed=True): + if IS_PPC_32: + if signed: + if imm: + # 32 bit immediate signed + self.cmpwi(block, a, b) + else: + # 32 bit signed + self.cmpw(block, a, b) + else: + if imm: + # 32 bit immediate unsigned + self.cmplwi(block, a, b) + else: + # 32 bit unsigned + self.cmplw(block, a, b) + else: + if signed: + if imm: + # 64 bit immediate signed + self.cmpdi(block, a, b) + else: + # 64 bit signed + self.cmpd(block, a, b) + else: + if imm: + # 64 bit immediate unsigned + self.cmpldi(block, a, b) + else: + # 64 bit unsigned + self.cmpld(block, a, b) + + class BranchUpdater(PPCAssembler): def __init__(self): PPCAssembler.__init__(self) diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -8,57 +8,24 @@ def f(self, op, arglocs, regalloc): l0, l1, res = arglocs # do the comparison - if signed: - if l1.is_imm(): - if IS_PPC_32: - self.mc.cmpwi(0, l0.value, l1.value) - else: - self.mc.cmpdi(0, l0.value, l1.value) - else: - if IS_PPC_32: - self.mc.cmpw(0, l0.value, l1.value) - else: - self.mc.cmpd(0, l0.value, l1.value) - - # After the comparison, place the result - # in the first bit of the CR - if condition == c.LT: - self.mc.cror(0, 0, 0) - elif condition == c.LE: - self.mc.cror(0, 0, 2) - elif condition == c.EQ: - self.mc.cror(0, 2, 2) - elif condition == c.GE: - self.mc.cror(0, 1, 2) - elif condition == c.GT: - self.mc.cror(0, 1, 1) - elif condition == c.NE: - self.mc.cror(0, 0, 1) - else: - assert 0, "condition not known" - + self.mc.cmp_op(0, l0.value, l1.value, + imm=l1.is_imm(), signed=signed) + # After the comparison, place the result + # in the first bit of the CR + if condition == c.LT or condition == c.U_LT: + self.mc.cror(0, 0, 0) + elif condition == c.LE or condition == c.U_LE: + self.mc.cror(0, 0, 2) + elif condition == c.EQ: + self.mc.cror(0, 2, 2) + elif condition == c.GE or condition == c.U_GE: + self.mc.cror(0, 1, 2) + elif condition == c.GT or condition == c.U_GT: + self.mc.cror(0, 1, 1) + elif condition == c.NE: + self.mc.cror(0, 0, 1) else: - if l1.is_imm(): - if IS_PPC_32: - self.mc.cmplwi(0, l0.value, l1.value) - else: - self.mc.cmpldi(0, l0.value, l1.value) - else: - if IS_PPC_32: - self.mc.cmplw(0, l0.value, l1.value) - else: - self.mc.cmpld(0, l0.value, l1.value) - - if condition == c.U_LT: - self.mc.cror(0, 0, 0) - elif condition == c.U_LE: - self.mc.cror(0, 0, 2) - elif condition == c.U_GT: - self.mc.cror(0, 1, 1) - elif condition == c.U_GE: - self.mc.cror(0, 1, 2) - else: - assert 0, "condition not known" + assert 0, "condition not known" resval = res.value # move the content of the CR to resval @@ -71,7 +38,7 @@ def f(self, op, arglocs, regalloc): reg, res = arglocs - self.mc.cmpwi(0, reg.value, 0) + self.mc.cmp_op(0, reg.value, 0, imm=True) if condition == c.IS_ZERO: self.mc.cror(0, 2, 2) elif condition == c.IS_TRUE: From noreply at buildbot.pypy.org Fri Nov 11 15:57:34 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 11 Nov 2011 15:57:34 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Adjust frame layout and store TOC pointer. Message-ID: <20111111145734.410818292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49310:4cf958dae77e Date: 2011-11-11 09:57 -0500 http://bitbucket.org/pypy/pypy/changeset/4cf958dae77e/ Log: Adjust frame layout and store TOC pointer. diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/ppcgen/arch.py @@ -10,7 +10,7 @@ else: WORD = 8 IS_PPC_32 = False - BACKCHAIN_SIZE = 3 * WORD + BACKCHAIN_SIZE = 4 * WORD IS_PPC_64 = not IS_PPC_32 MY_COPY_OF_REGS = 0 diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -527,7 +527,7 @@ else: self.mc.stdu(r.SP.value, r.SP.value, -stack_space) self.mc.mflr(r.r0.value) - self.mc.std(r.r0.value, r.SP.value, stack_space + WORD) + self.mc.std(r.r0.value, r.SP.value, stack_space + 2 * WORD) # then we push everything on the stack for i, arg in enumerate(stack_args): @@ -574,14 +574,14 @@ self.mc.bl_abs(adr) self.mc.lwz(r.r0.value, r.SP.value, stack_space + WORD) else: - self.mc.std(r.r2.value, r.SP.value, 40) + self.mc.std(r.r2.value, r.SP.value, 3 * WORD) self.mc.load_from_addr(r.r0, adr) - self.mc.load_from_addr(r.r2, adr+WORD) - self.mc.load_from_addr(r.r11, adr+2*WORD) + self.mc.load_from_addr(r.r2, adr + WORD) + self.mc.load_from_addr(r.r11, adr + 2 * WORD) self.mc.mtctr(r.r0.value) self.mc.bctrl() - self.mc.ld(r.r2.value, r.SP.value, 40) - self.mc.ld(r.r0.value, r.SP.value, stack_space + WORD) + self.mc.ld(r.r2.value, r.SP.value, 3 * WORD) + self.mc.ld(r.r0.value, r.SP.value, stack_space + 2 * WORD) self.mc.mtlr(r.r0.value) self.mc.addi(r.SP.value, r.SP.value, stack_space) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -174,7 +174,7 @@ else: self.mc.stdu(r.SP.value, r.SP.value, -frame_depth) self.mc.mflr(r.r0.value) - self.mc.std(r.r0.value, r.SP.value, frame_depth + WORD) + self.mc.std(r.r0.value, r.SP.value, frame_depth + 2 * WORD) # compute spilling pointer (SPP) self.mc.addi(r.SPP.value, r.SP.value, frame_depth @@ -381,12 +381,15 @@ # load address of decoding function into r0 mc.load_imm(r.r0, addr) if IS_PPC_64: + mc.std(r.r2.value, r.SP.value, 3 * WORD) # load TOC pointer and environment pointer mc.load_imm(r.r2, r2_value) mc.load_imm(r.r11, r11_value) # ... and branch there mc.mtctr(r.r0.value) mc.bctrl() + if IS_PPC_64: + mc.ld(r.r2.value, r.SP.value, 3 * WORD) # mc.addi(r.SP.value, r.SP.value, size) # save SPP in r5 @@ -398,7 +401,7 @@ if IS_PPC_32: mc.lwz(r.r4.value, r.r5.value, offset_to_old_backchain) else: - mc.ld(r.r4.value, r.r5.value, offset_to_old_backchain) + mc.ld(r.r4.value, r.r5.value, offset_to_old_backchain + WORD) mc.mtlr(r.r4.value) # restore LR # From SPP, we have a constant offset of GPR_SAVE_AREA_AND_FORCE_INDEX From noreply at buildbot.pypy.org Fri Nov 11 16:11:40 2011 From: noreply at buildbot.pypy.org (hager) Date: Fri, 11 Nov 2011 16:11:40 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, edelsohn, hager): change size of allocated stack space at function calls Message-ID: <20111111151140.6CC128292E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49311:cd410e74e9f5 Date: 2011-11-11 07:11 -0800 http://bitbucket.org/pypy/pypy/changeset/cd410e74e9f5/ Log: (bivab, edelsohn, hager): change size of allocated stack space at function calls diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -2,7 +2,8 @@ gen_emit_unary_cmp_op) import pypy.jit.backend.ppc.ppcgen.condition as c import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.backend.ppc.ppcgen.arch import GPR_SAVE_AREA, IS_PPC_32, WORD +from pypy.jit.backend.ppc.ppcgen.arch import (GPR_SAVE_AREA, IS_PPC_32, WORD, + BACKCHAIN_SIZE) from pypy.jit.metainterp.history import LoopToken, AbstractFailDescr, FLOAT from pypy.rlib.objectmodel import we_are_translated @@ -517,7 +518,7 @@ stack_args.append(None) # adjust SP and compute size of parameter save area - stack_space = 4 * (WORD + len(stack_args)) + stack_space = len(stack_args) * WORD + BACKCHAIN_SIZE while stack_space % (4 * WORD) != 0: stack_space += 1 if IS_PPC_32: From noreply at buildbot.pypy.org Fri Nov 11 16:22:29 2011 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 11 Nov 2011 16:22:29 +0100 (CET) Subject: [pypy-commit] buildbot default: Add some documentation on how to run a buildslave Message-ID: <20111111152229.6BC1C8292E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r598:1bfea475131e Date: 2011-11-11 16:22 +0100 http://bitbucket.org/pypy/buildbot/changeset/1bfea475131e/ Log: Add some documentation on how to run a buildslave diff --git a/README b/README --- a/README +++ b/README @@ -36,11 +36,22 @@ =========================== $ cd pypy-buildbot + $ hg pull -u + $ cd master + $ buildbot checkconfig + $ make reconfig OR + $ make stop + $ make start + +To run a buildslave +=================== +Please refer to README_BUILDSLAVE + diff --git a/README_BUILDSLAVE b/README_BUILDSLAVE new file mode 100644 --- /dev/null +++ b/README_BUILDSLAVE @@ -0,0 +1,40 @@ +How to setup a buildslave for PyPy +================================== + +First you will need to install the ``buildbot_buildslave`` package. +pip install buildbot_buildslave + +The next step is to create a buildslave configuration file. Based on version +0.7.12 of buildbot you need to execute the following command. + +buildbot create-slave BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD + +For PyPy the MASTERHOST currently is ``wyvern.cs.uni-duesseldorf.de``. The +value for PORT is ``10407``. +SLAVENAME and PASSWORD can be freely chosen. These values need to be added to +the slaveinfo.py configuration file on the MASTERHOST, ask in the IRC channel +(#pypy on freenode.net) for the settings to be added. BASEDIR is a path to a +local directory that will be created to contain all the files will be used by +the buildslave. + +Finally you will need to update the buildmaster configuration found in +bot2/pypybuildbot/master.py to associate the buildslave with one or more +builders. Builders define what tasks should be executed on the buildslave. +The changeset of revision 2f982db47d5d is a good place to start +(https://bitbucket.org/pypy/buildbot/changeset/2f982db47d5d). Once the changes +are commited the buildmaster on MASTERHOST needs to be updated and restared to +reflect the changes to the configuration. + +To run the buildslave execute +============================= + +First you will need to copy the file Makefile.sample to Makefile and +update it as necessary. + +To start the buildslave just run + + make start + +and to stop it run + + make stop From noreply at buildbot.pypy.org Fri Nov 11 16:27:54 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 11 Nov 2011 16:27:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Adjust emit_call for PPC64 ABI. Message-ID: <20111111152754.9B4398292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49312:c702afe5ff67 Date: 2011-11-11 10:27 -0500 http://bitbucket.org/pypy/pypy/changeset/c702afe5ff67/ Log: Adjust emit_call for PPC64 ABI. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -2,8 +2,8 @@ gen_emit_unary_cmp_op) import pypy.jit.backend.ppc.ppcgen.condition as c import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.backend.ppc.ppcgen.arch import (GPR_SAVE_AREA, IS_PPC_32, WORD, - BACKCHAIN_SIZE) +from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, + GPR_SAVE_AREA, BACKCHAIN_SIZE) from pypy.jit.metainterp.history import LoopToken, AbstractFailDescr, FLOAT from pypy.rlib.objectmodel import we_are_translated @@ -518,21 +518,27 @@ stack_args.append(None) # adjust SP and compute size of parameter save area - stack_space = len(stack_args) * WORD + BACKCHAIN_SIZE - while stack_space % (4 * WORD) != 0: - stack_space += 1 if IS_PPC_32: + stack_space = BACKCHAIN_SIZE + len(stack_args) * WORD + while stack_space % (4 * WORD) != 0: + stack_space += 1 self.mc.stwu(r.SP.value, r.SP.value, -stack_space) self.mc.mflr(r.r0.value) self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) else: + # ABI fixed frame + 8 GPRs + arguments + stack_space = (6 + 8 + len(stack_args)) * WORD self.mc.stdu(r.SP.value, r.SP.value, -stack_space) self.mc.mflr(r.r0.value) self.mc.std(r.r0.value, r.SP.value, stack_space + 2 * WORD) # then we push everything on the stack for i, arg in enumerate(stack_args): - offset = (2 + i) * WORD + if IS_PPC_32: + abi = 2 + else: + abi = 14 + offset = (abi + i) * WORD if arg is not None: self.mc.load_imm(r.r0, arg.value) if IS_PPC_32: From noreply at buildbot.pypy.org Fri Nov 11 16:32:36 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:36 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: introduce the dispatcher, whose goal is to convert applevel objtects into low-level values based on the given ffitype. Move there the logic that we used in W_FunctPtr to build the argchain Message-ID: <20111111153236.23BB38292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49313:1e475fc1fe0e Date: 2011-11-10 18:39 +0100 http://bitbucket.org/pypy/pypy/changeset/1e475fc1fe0e/ Log: introduce the dispatcher, whose goal is to convert applevel objtects into low-level values based on the given ffitype. Move there the logic that we used in W_FunctPtr to build the argchain diff --git a/pypy/module/_ffi/dispatcher.py b/pypy/module/_ffi/dispatcher.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/dispatcher.py @@ -0,0 +1,172 @@ +from pypy.rlib import libffi +from pypy.rlib import jit +from pypy.rlib.rarithmetic import intmask +from pypy.rpython.lltypesystem import rffi +from pypy.module._rawffi.structure import W_StructureInstance + + +def unwrap_truncate_int(TP, space, w_arg): + if space.is_true(space.isinstance(w_arg, space.w_int)): + return rffi.cast(TP, space.int_w(w_arg)) + else: + return rffi.cast(TP, space.bigint_w(w_arg).ulonglongmask()) +unwrap_truncate_int._annspecialcase_ = 'specialize:arg(0)' + + +class AbstractDispatcher(object): + + def __init__(self, space): + self.space = space + + def unwrap_and_do(self, w_ffitype, w_obj): + space = self.space + if w_ffitype.is_longlong(): + # note that we must check for longlong first, because either + # is_signed or is_unsigned returns true anyway + assert libffi.IS_32_BIT + self._longlong(w_ffitype, w_obj) + elif w_ffitype.is_signed(): + intval = unwrap_truncate_int(rffi.LONG, space, w_obj) + self.handle_signed(w_ffitype, w_obj, intval) + elif self.maybe_handle_char_or_unichar_p(w_ffitype, w_obj): + # the object was already handled from within + # maybe_handle_char_or_unichar_p + pass + elif w_ffitype.is_pointer(): + w_obj = self.convert_pointer_arg_maybe(w_obj, w_ffitype) + intval = intmask(space.uint_w(w_obj)) + self.handle_pointer(w_ffitype, w_obj, intval) + elif w_ffitype.is_unsigned(): + uintval = unwrap_truncate_int(rffi.ULONG, space, w_obj) + self.handle_unsigned(w_ffitype, w_obj, uintval) + elif w_ffitype.is_char(): + intval = space.int_w(space.ord(w_obj)) + self.handle_char(w_ffitype, w_obj, intval) + elif w_ffitype.is_unichar(): + intval = space.int_w(space.ord(w_obj)) + self.handle_unichar(w_ffitype, w_obj, intval) + elif w_ffitype.is_double(): + self._float(w_ffitype, w_obj) + elif w_ffitype.is_singlefloat(): + self._singlefloat(w_ffitype, w_obj) + elif w_ffitype.is_struct(): + # arg_raw directly takes value to put inside ll_args + w_obj = space.interp_w(W_StructureInstance, w_obj) + self.handle_struct(w_ffitype, w_obj) + else: + self.error(w_ffitype, w_obj) + + def _longlong(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether longlongs are supported + bigval = self.space.bigint_w(w_obj) + ullval = bigval.ulonglongmask() + llval = rffi.cast(rffi.LONGLONG, ullval) + self.handle_longlong(w_ffitype, w_obj, llval) + + def _float(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether floats are supported + floatval = self.space.float_w(w_obj) + self.handle_float(w_ffitype, w_obj, floatval) + + def _singlefloat(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether singlefloats are supported + from pypy.rlib.rarithmetic import r_singlefloat + floatval = self.space.float_w(w_obj) + singlefloatval = r_singlefloat(floatval) + self.handle_singlefloat(w_ffitype, w_obj, singlefloatval) + + def maybe_handle_char_or_unichar_p(self, w_ffitype, w_obj): + w_type = jit.promote(self.space.type(w_obj)) + if w_ffitype.is_char_p() and w_type is self.space.w_str: + strval = self.space.str_w(w_obj) + self.handle_char_p(w_ffitype, w_obj, strval) + return True + elif w_ffitype.is_unichar_p() and (w_type is self.space.w_str or + w_type is self.space.w_unicode): + unicodeval = self.space.unicode_w(w_obj) + self.handle_unichar_p(w_ffitype, w_obj, unicodeval) + return True + + def convert_pointer_arg_maybe(self, w_arg, w_argtype): + """ + Try to convert the argument by calling _as_ffi_pointer_() + """ + space = self.space + meth = space.lookup(w_arg, '_as_ffi_pointer_') # this also promotes the type + if meth: + return space.call_function(meth, w_arg, w_argtype) + else: + return w_arg + + def error(self, w_ffitype, w_obj): + assert False # XXX raise a proper app-level exception + + def handle_signed(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + """ + uintval: lltype.Unsigned + """ + self.error(w_ffitype, w_obj) + + def handle_pointer(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_char(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_unichar(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + """ + longlongval: lltype.SignedLongLong + """ + self.error(w_ffitype, w_obj) + + def handle_char_p(self, w_ffitype, w_obj, strval): + """ + strval: interp-level str + """ + self.error(w_ffitype, w_obj) + + def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + """ + unicodeval: interp-level unicode + """ + self.error(w_ffitype, w_obj) + + def handle_float(self, w_ffitype, w_obj, floatval): + """ + floatval: lltype.Float + """ + self.error(w_ffitype, w_obj) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + """ + singlefloatval: lltype.SingleFloat + """ + self.error(w_ffitype, w_obj) + + def handle_struct(self, w_ffitype, w_structinstance): + """ + w_structinstance: W_StructureInstance + """ + self.error(w_ffitype, w_structinstance) + diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -13,6 +13,7 @@ from pypy.rlib.rdynload import DLOpenError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import we_are_translated +from pypy.module._ffi.dispatcher import AbstractDispatcher def unwrap_ffitype(space, w_argtype, allow_void=False): @@ -22,12 +23,6 @@ raise OperationError(space.w_TypeError, space.wrap(msg)) return res -def unwrap_truncate_int(TP, space, w_arg): - if space.is_true(space.isinstance(w_arg, space.w_int)): - return rffi.cast(TP, space.int_w(w_arg)) - else: - return rffi.cast(TP, space.bigint_w(w_arg).ulonglongmask()) -unwrap_truncate_int._annspecialcase_ = 'specialize:arg(0)' # ======================================================================== @@ -54,97 +49,13 @@ self.func.name, expected, arg, given) # argchain = libffi.ArgChain() + argpusher = ArgumentPusherDispatcher(space, argchain, self.to_free) for i in range(expected): w_argtype = self.argtypes_w[i] w_arg = args_w[i] - if w_argtype.is_longlong(): - # note that we must check for longlong first, because either - # is_signed or is_unsigned returns true anyway - assert libffi.IS_32_BIT - self.arg_longlong(space, argchain, w_arg) - elif w_argtype.is_signed(): - argchain.arg(unwrap_truncate_int(rffi.LONG, space, w_arg)) - elif self.add_char_p_maybe(space, argchain, w_arg, w_argtype): - # the argument is added to the argchain direcly by the method above - pass - elif w_argtype.is_pointer(): - w_arg = self.convert_pointer_arg_maybe(space, w_arg, w_argtype) - argchain.arg(intmask(space.uint_w(w_arg))) - elif w_argtype.is_unsigned(): - argchain.arg(unwrap_truncate_int(rffi.ULONG, space, w_arg)) - elif w_argtype.is_char(): - w_arg = space.ord(w_arg) - argchain.arg(space.int_w(w_arg)) - elif w_argtype.is_unichar(): - w_arg = space.ord(w_arg) - argchain.arg(space.int_w(w_arg)) - elif w_argtype.is_double(): - self.arg_float(space, argchain, w_arg) - elif w_argtype.is_singlefloat(): - self.arg_singlefloat(space, argchain, w_arg) - elif w_argtype.is_struct(): - # arg_raw directly takes value to put inside ll_args - w_arg = space.interp_w(W_StructureInstance, w_arg) - ptrval = w_arg.ll_buffer - argchain.arg_raw(ptrval) - else: - assert False, "Argument shape '%s' not supported" % w_argtype + argpusher.unwrap_and_do(w_argtype, w_arg) return argchain - def add_char_p_maybe(self, space, argchain, w_arg, w_argtype): - """ - Automatic conversion from string to char_p. The allocated buffer will - be automatically freed after the call. - """ - w_type = jit.promote(space.type(w_arg)) - if w_argtype.is_char_p() and w_type is space.w_str: - strval = space.str_w(w_arg) - buf = rffi.str2charp(strval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) - addr = rffi.cast(rffi.ULONG, buf) - argchain.arg(addr) - return True - elif w_argtype.is_unichar_p() and (w_type is space.w_str or - w_type is space.w_unicode): - unicodeval = space.unicode_w(w_arg) - buf = rffi.unicode2wcharp(unicodeval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) - addr = rffi.cast(rffi.ULONG, buf) - argchain.arg(addr) - return True - return False - - def convert_pointer_arg_maybe(self, space, w_arg, w_argtype): - """ - Try to convert the argument by calling _as_ffi_pointer_() - """ - meth = space.lookup(w_arg, '_as_ffi_pointer_') # this also promotes the type - if meth: - return space.call_function(meth, w_arg, w_argtype) - else: - return w_arg - - def arg_float(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether floats are supported - argchain.arg(space.float_w(w_arg)) - - def arg_longlong(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether longlongs are supported - bigarg = space.bigint_w(w_arg) - ullval = bigarg.ulonglongmask() - llval = rffi.cast(rffi.LONGLONG, ullval) - argchain.arg(llval) - - def arg_singlefloat(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether singlefloats are supported - from pypy.rlib.rarithmetic import r_singlefloat - fval = space.float_w(w_arg) - sfval = r_singlefloat(fval) - argchain.arg(sfval) - def call(self, space, args_w): self = jit.promote(self) argchain = self.build_argchain(space, args_w) @@ -281,6 +192,58 @@ return space.wrap(rffi.cast(rffi.LONG, self.func.funcsym)) +class ArgumentPusherDispatcher(AbstractDispatcher): + """ + A dispatcher used by W_FuncPtr to unwrap the app-level objects into + low-level types and push them to the argchain. + """ + + def __init__(self, space, argchain, to_free): + AbstractDispatcher.__init__(self, space) + self.argchain = argchain + self.to_free = to_free + + def handle_signed(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + self.argchain.arg(uintval) + + def handle_pointer(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_char(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_unichar(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + self.argchain.arg(longlongval) + + def handle_char_p(self, w_ffitype, w_obj, strval): + buf = rffi.str2charp(strval) + self.to_free.append(rffi.cast(rffi.VOIDP, buf)) + addr = rffi.cast(rffi.ULONG, buf) + self.argchain.arg(addr) + + def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + buf = rffi.unicode2wcharp(unicodeval) + self.to_free.append(rffi.cast(rffi.VOIDP, buf)) + addr = rffi.cast(rffi.ULONG, buf) + self.argchain.arg(addr) + + def handle_float(self, w_ffitype, w_obj, floatval): + self.argchain.arg(floatval) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + self.argchain.arg(singlefloatval) + + def handle_struct(self, w_ffitype, w_structinstance): + ptrval = w_structinstance.ll_buffer + self.argchain.arg_raw(ptrval) + + def unpack_argtypes(space, w_argtypes, w_restype): argtypes_w = [space.interp_w(W_FFIType, w_argtype) @@ -369,3 +332,4 @@ return space.wrap(W_CDLL(space, get_libc_name())) except OSError, e: raise wrap_oserror(space, e) + From noreply at buildbot.pypy.org Fri Nov 11 16:32:37 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:37 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: kill unwrap_truncated_int, and use the nice space.truncatedint_w method Message-ID: <20111111153237.55E4A8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49314:b927c30cd40e Date: 2011-11-10 18:42 +0100 http://bitbucket.org/pypy/pypy/changeset/b927c30cd40e/ Log: kill unwrap_truncated_int, and use the nice space.truncatedint_w method diff --git a/pypy/module/_ffi/dispatcher.py b/pypy/module/_ffi/dispatcher.py --- a/pypy/module/_ffi/dispatcher.py +++ b/pypy/module/_ffi/dispatcher.py @@ -1,18 +1,9 @@ from pypy.rlib import libffi from pypy.rlib import jit -from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rpython.lltypesystem import rffi from pypy.module._rawffi.structure import W_StructureInstance - -def unwrap_truncate_int(TP, space, w_arg): - if space.is_true(space.isinstance(w_arg, space.w_int)): - return rffi.cast(TP, space.int_w(w_arg)) - else: - return rffi.cast(TP, space.bigint_w(w_arg).ulonglongmask()) -unwrap_truncate_int._annspecialcase_ = 'specialize:arg(0)' - - class AbstractDispatcher(object): def __init__(self, space): @@ -26,7 +17,7 @@ assert libffi.IS_32_BIT self._longlong(w_ffitype, w_obj) elif w_ffitype.is_signed(): - intval = unwrap_truncate_int(rffi.LONG, space, w_obj) + intval = space.truncatedint_w(w_obj) self.handle_signed(w_ffitype, w_obj, intval) elif self.maybe_handle_char_or_unichar_p(w_ffitype, w_obj): # the object was already handled from within @@ -37,7 +28,7 @@ intval = intmask(space.uint_w(w_obj)) self.handle_pointer(w_ffitype, w_obj, intval) elif w_ffitype.is_unsigned(): - uintval = unwrap_truncate_int(rffi.ULONG, space, w_obj) + uintval = r_uint(space.truncatedint_w(w_obj)) self.handle_unsigned(w_ffitype, w_obj, uintval) elif w_ffitype.is_char(): intval = space.int_w(space.ord(w_obj)) From noreply at buildbot.pypy.org Fri Nov 11 16:32:38 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:38 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: move the logic to wrap the result of a call in WrapDispatcher. Will write the proper docstrings later, now I have to shutdown the laptop because we are landing :-) Message-ID: <20111111153238.840AA8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49315:fec3defe7d52 Date: 2011-11-10 19:41 +0100 http://bitbucket.org/pypy/pypy/changeset/fec3defe7d52/ Log: move the logic to wrap the result of a call in WrapDispatcher. Will write the proper docstrings later, now I have to shutdown the laptop because we are landing :-) diff --git a/pypy/module/_ffi/dispatcher.py b/pypy/module/_ffi/dispatcher.py --- a/pypy/module/_ffi/dispatcher.py +++ b/pypy/module/_ffi/dispatcher.py @@ -2,9 +2,10 @@ from pypy.rlib import jit from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rpython.lltypesystem import rffi -from pypy.module._rawffi.structure import W_StructureInstance +from pypy.module._rawffi.structure import W_StructureInstance, W_Structure +from pypy.module._ffi.interp_ffitype import app_types -class AbstractDispatcher(object): +class UnwrapDispatcher(object): def __init__(self, space): self.space = space @@ -161,3 +162,120 @@ """ self.error(w_ffitype, w_structinstance) + + +class WrapDispatcher(object): + + def __init__(self, space): + self.space = space + + def do_and_wrap(self, w_ffitype): + space = self.space + if w_ffitype.is_longlong(): + # note that we must check for longlong first, because either + # is_signed or is_unsigned returns true anyway + assert libffi.IS_32_BIT + return self._longlong(w_ffitype) + elif w_ffitype.is_signed(): + intval = self.get_signed(w_ffitype) + return space.wrap(intval) + elif w_ffitype is app_types.ulong: + # we need to be careful when the return type is ULONG, because the + # value might not fit into a signed LONG, and thus might require + # and app-evel . This is why we need to treat it separately + # than the other unsigned types. + uintval = self.get_unsigned(w_ffitype) + return space.wrap(uintval) + elif w_ffitype.is_unsigned(): # note that ulong is handled just before + intval = self.get_unsigned_which_fits_into_a_signed(w_ffitype) + return space.wrap(intval) + elif w_ffitype.is_pointer(): + uintval = self.get_pointer(w_ffitype) + return space.wrap(uintval) + elif w_ffitype.is_char(): + ucharval = self.get_char(w_ffitype) + return space.wrap(chr(ucharval)) + elif w_ffitype.is_unichar(): + wcharval = self.get_unichar(w_ffitype) + return space.wrap(unichr(wcharval)) + elif w_ffitype.is_double(): + return self._float(w_ffitype) + elif w_ffitype.is_singlefloat(): + return self._singlefloat(w_ffitype) + elif w_ffitype.is_struct(): + w_datashape = w_ffitype.w_datashape + assert isinstance(w_datashape, W_Structure) + uintval = self.get_struct(w_datashape) # this is the ptr to the struct + return w_datashape.fromaddress(space, uintval) + elif w_ffitype.is_void(): + voidval = self.get_void(w_ffitype) + assert voidval is None + return space.w_None + else: + assert False, "Return value shape '%s' not supported" % w_ffitype + + def _longlong(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether longlongs are supported + if w_ffitype is app_types.slonglong: + longlongval = self.get_longlong(w_ffitype) + return self.space.wrap(longlongval) + elif w_ffitype is app_types.ulonglong: + ulonglongval = self.get_ulonglong(w_ffitype) + return self.space.wrap(ulonglongval) + else: + self.error(w_ffitype) + + def _float(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether floats are supported + floatval = self.get_float(w_ffitype) + return self.space.wrap(floatval) + + def _singlefloat(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether singlefloats are supported + singlefloatval = self.get_singlefloat(w_ffitype) + return self.space.wrap(float(singlefloatval)) + + def error(self, w_ffitype, w_obj): + assert False # XXX raise a proper app-level exception + + def get_longlong(self, w_ffitype): + self.error(w_ffitype) + + def get_ulonglong(self, w_ffitype): + self.error(w_ffitype) + + def get_signed(self, w_ffitype): + self.error(w_ffitype) + + def get_unsigned(self, w_ffitype): + self.error(w_ffitype) + + def get_unsigned_which_fits_into_a_signed(self, w_ffitype): + self.error(w_ffitype) + + def get_pointer(self, w_ffitype): + self.error(w_ffitype) + + def get_char(self, w_ffitype): + self.error(w_ffitype) + + def get_unichar(self, w_ffitype): + self.error(w_ffitype) + + def get_float(self, w_ffitype): + self.error(w_ffitype) + + def get_singlefloat(self, w_ffitype): + self.error(w_ffitype) + + def get_struct(self, w_datashape): + """ + XXX: write nice docstring in the base class, must return an ULONG + """ + return self.func.call(self.argchain, rffi.ULONG, is_struct=True) + + def get_void(self, w_ffitype): + self.error(w_ffitype) diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -3,7 +3,6 @@ operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef -from pypy.module._rawffi.structure import W_StructureInstance, W_Structure from pypy.module._ffi.interp_ffitype import W_FFIType # from pypy.rpython.lltypesystem import lltype, rffi @@ -13,7 +12,7 @@ from pypy.rlib.rdynload import DLOpenError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import we_are_translated -from pypy.module._ffi.dispatcher import AbstractDispatcher +from pypy.module._ffi.dispatcher import UnwrapDispatcher, WrapDispatcher def unwrap_ffitype(space, w_argtype, allow_void=False): @@ -49,7 +48,7 @@ self.func.name, expected, arg, given) # argchain = libffi.ArgChain() - argpusher = ArgumentPusherDispatcher(space, argchain, self.to_free) + argpusher = PushArgumentDispatcher(space, argchain, self.to_free) for i in range(expected): w_argtype = self.argtypes_w[i] w_arg = args_w[i] @@ -59,7 +58,9 @@ def call(self, space, args_w): self = jit.promote(self) argchain = self.build_argchain(space, args_w) - return self._do_call(space, argchain) + func_caller = CallFunctionDispatcher(space, self.func, argchain) + return func_caller.do_and_wrap(self.w_restype) + #return self._do_call(space, argchain) def free_temp_buffers(self, space): for buf in self.to_free: @@ -69,122 +70,6 @@ lltype.free(buf, flavor='raw') self.to_free = [] - def _do_call(self, space, argchain): - w_restype = self.w_restype - if w_restype.is_longlong(): - # note that we must check for longlong first, because either - # is_signed or is_unsigned returns true anyway - assert libffi.IS_32_BIT - return self._call_longlong(space, argchain) - elif w_restype.is_signed(): - return self._call_int(space, argchain) - elif w_restype.is_unsigned() or w_restype.is_pointer(): - return self._call_uint(space, argchain) - elif w_restype.is_char(): - intres = self.func.call(argchain, rffi.UCHAR) - return space.wrap(chr(intres)) - elif w_restype.is_unichar(): - intres = self.func.call(argchain, rffi.WCHAR_T) - return space.wrap(unichr(intres)) - elif w_restype.is_double(): - return self._call_float(space, argchain) - elif w_restype.is_singlefloat(): - return self._call_singlefloat(space, argchain) - elif w_restype.is_struct(): - w_datashape = w_restype.w_datashape - assert isinstance(w_datashape, W_Structure) - ptrval = self.func.call(argchain, rffi.ULONG, is_struct=True) - return w_datashape.fromaddress(space, ptrval) - elif w_restype.is_void(): - voidres = self.func.call(argchain, lltype.Void) - assert voidres is None - return space.w_None - else: - assert False, "Return value shape '%s' not supported" % w_restype - - def _call_int(self, space, argchain): - # if the declared return type of the function is smaller than LONG, - # the result buffer may contains garbage in its higher bits. To get - # the correct value, and to be sure to handle the signed/unsigned case - # correctly, we need to cast the result to the correct type. After - # that, we cast it back to LONG, because this is what we want to pass - # to space.wrap in order to get a nice applevel . - # - restype = self.func.restype - call = self.func.call - if restype is libffi.types.slong: - intres = call(argchain, rffi.LONG) - elif restype is libffi.types.sint: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.INT)) - elif restype is libffi.types.sshort: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.SHORT)) - elif restype is libffi.types.schar: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.SIGNEDCHAR)) - else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - return space.wrap(intres) - - def _call_uint(self, space, argchain): - # the same comment as above apply. Moreover, we need to be careful - # when the return type is ULONG, because the value might not fit into - # a signed LONG: this is the only case in which we cast the result to - # something different than LONG; as a result, the applevel value will - # be a . - # - # Note that we check for ULONG before UINT: this is needed on 32bit - # machines, where they are they same: if we checked for UINT before - # ULONG, we would cast to the wrong type. Note that this also means - # that on 32bit the UINT case will never be entered (because it is - # handled by the ULONG case). - restype = self.func.restype - call = self.func.call - if restype is libffi.types.ulong: - # special case - uintres = call(argchain, rffi.ULONG) - return space.wrap(uintres) - elif restype is libffi.types.pointer: - ptrres = call(argchain, rffi.VOIDP) - uintres = rffi.cast(rffi.ULONG, ptrres) - return space.wrap(uintres) - elif restype is libffi.types.uint: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.UINT)) - elif restype is libffi.types.ushort: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.USHORT)) - elif restype is libffi.types.uchar: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.UCHAR)) - else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - return space.wrap(intres) - - def _call_float(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether floats are supported - floatres = self.func.call(argchain, rffi.DOUBLE) - return space.wrap(floatres) - - def _call_longlong(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether longlongs are supported - restype = self.func.restype - call = self.func.call - if restype is libffi.types.slonglong: - llres = call(argchain, rffi.LONGLONG) - return space.wrap(llres) - elif restype is libffi.types.ulonglong: - ullres = call(argchain, rffi.ULONGLONG) - return space.wrap(ullres) - else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported longlong restype')) - - def _call_singlefloat(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether singlefloats are supported - sfres = self.func.call(argchain, rffi.FLOAT) - return space.wrap(float(sfres)) - def getaddr(self, space): """ Return the physical address in memory of the function @@ -192,14 +77,14 @@ return space.wrap(rffi.cast(rffi.LONG, self.func.funcsym)) -class ArgumentPusherDispatcher(AbstractDispatcher): +class PushArgumentDispatcher(UnwrapDispatcher): """ A dispatcher used by W_FuncPtr to unwrap the app-level objects into low-level types and push them to the argchain. """ def __init__(self, space, argchain, to_free): - AbstractDispatcher.__init__(self, space) + UnwrapDispatcher.__init__(self, space) self.argchain = argchain self.to_free = to_free @@ -244,6 +129,91 @@ self.argchain.arg_raw(ptrval) +class CallFunctionDispatcher(WrapDispatcher): + """ + A dispatcher used by W_FuncPtr to call the function, expect the result of + a correct low-level type and wrap it to the corresponding app-level type + """ + + def __init__(self, space, func, argchain): + WrapDispatcher.__init__(self, space) + self.func = func + self.argchain = argchain + + def get_longlong(self, w_ffitype): + return self.func.call(self.argchain, rffi.LONGLONG) + + def get_ulonglong(self, w_ffitype): + return self.func.call(self.argchain, rffi.ULONGLONG) + + def get_signed(self, w_ffitype): + # if the declared return type of the function is smaller than LONG, + # the result buffer may contains garbage in its higher bits. To get + # the correct value, and to be sure to handle the signed/unsigned case + # correctly, we need to cast the result to the correct type. After + # that, we cast it back to LONG, because this is what we want to pass + # to space.wrap in order to get a nice applevel . + # + restype = w_ffitype.ffitype + call = self.func.call + if restype is libffi.types.slong: + return call(self.argchain, rffi.LONG) + elif restype is libffi.types.sint: + return rffi.cast(rffi.LONG, call(self.argchain, rffi.INT)) + elif restype is libffi.types.sshort: + return rffi.cast(rffi.LONG, call(self.argchain, rffi.SHORT)) + elif restype is libffi.types.schar: + return rffi.cast(rffi.LONG, call(self.argchain, rffi.SIGNEDCHAR)) + else: + raise OperationError(space.w_ValueError, + space.wrap('Unsupported restype')) + + def get_unsigned(self, w_ffitype): + return self.func.call(self.argchain, rffi.ULONG) + + def get_unsigned_which_fits_into_a_signed(self, w_ffitype): + # the same comment as get_signed apply + restype = w_ffitype.ffitype + call = self.func.call + if restype is libffi.types.uint: + assert not libffi.IS_32_BIT + # on 32bit machines, we should never get here, because it's a case + # which has already been handled by get_unsigned above. + return rffi.cast(rffi.LONG, call(self.argchain, rffi.UINT)) + elif restype is libffi.types.ushort: + return rffi.cast(rffi.LONG, call(self.argchain, rffi.USHORT)) + elif restype is libffi.types.uchar: + return rffi.cast(rffi.LONG, call(self.argchain, rffi.UCHAR)) + else: + raise OperationError(space.w_ValueError, + space.wrap('Unsupported restype')) + return space.wrap(intres) + + def get_pointer(self, w_ffitype): + ptrres = self.func.call(self.argchain, rffi.VOIDP) + return rffi.cast(rffi.ULONG, ptrres) + + def get_char(self, w_ffitype): + return self.func.call(self.argchain, rffi.UCHAR) + + def get_unichar(self, w_ffitype): + return self.func.call(self.argchain, rffi.WCHAR_T) + + def get_float(self, w_ffitype): + return self.func.call(self.argchain, rffi.DOUBLE) + + def get_singlefloat(self, w_ffitype): + return self.func.call(self.argchain, rffi.FLOAT) + + def get_struct(self, w_datashape): + """ + XXX: write nice docstring in the base class, must return an ULONG + """ + return self.func.call(self.argchain, rffi.ULONG, is_struct=True) + + def get_void(self, w_ffitype): + return self.func.call(self.argchain, lltype.Void) + def unpack_argtypes(space, w_argtypes, w_restype): argtypes_w = [space.interp_w(W_FFIType, w_argtype) From noreply at buildbot.pypy.org Fri Nov 11 16:32:39 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:39 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add docstrings for WrapDispatcher methods; raise a proper applevel exception instead of assert False Message-ID: <20111111153239.B2D688292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49316:b088b5e09108 Date: 2011-11-10 22:03 +0100 http://bitbucket.org/pypy/pypy/changeset/b088b5e09108/ Log: add docstrings for WrapDispatcher methods; raise a proper applevel exception instead of assert False diff --git a/pypy/module/_ffi/dispatcher.py b/pypy/module/_ffi/dispatcher.py --- a/pypy/module/_ffi/dispatcher.py +++ b/pypy/module/_ffi/dispatcher.py @@ -94,7 +94,9 @@ return w_arg def error(self, w_ffitype, w_obj): - assert False # XXX raise a proper app-level exception + raise operationerrfmt(space.w_TypeError, + 'Unsupported ffi type to convert: %s', + w_ffitype.name) def handle_signed(self, w_ffitype, w_obj, intval): """ @@ -238,44 +240,83 @@ singlefloatval = self.get_singlefloat(w_ffitype) return self.space.wrap(float(singlefloatval)) - def error(self, w_ffitype, w_obj): - assert False # XXX raise a proper app-level exception + def error(self, w_ffitype): + raise operationerrfmt(space.w_TypeError, + 'Unsupported ffi type to convert: %s', + w_ffitype.name) def get_longlong(self, w_ffitype): + """ + Return type: lltype.SignedLongLong + """ self.error(w_ffitype) def get_ulonglong(self, w_ffitype): + """ + Return type: lltype.UnsignedLongLong + """ self.error(w_ffitype) def get_signed(self, w_ffitype): + """ + Return type: lltype.Signed + """ self.error(w_ffitype) def get_unsigned(self, w_ffitype): + """ + Return type: lltype.Unsigned + """ self.error(w_ffitype) def get_unsigned_which_fits_into_a_signed(self, w_ffitype): + """ + Return type: lltype.Signed. + + We return Signed even if the input type is unsigned, because this way + we get an app-level instead of a . + """ self.error(w_ffitype) def get_pointer(self, w_ffitype): + """ + Return type: lltype.Unsigned + """ self.error(w_ffitype) def get_char(self, w_ffitype): + """ + Return type: rffi.UCHAR + """ self.error(w_ffitype) def get_unichar(self, w_ffitype): + """ + Return type: rffi.WCHAR_T + """ self.error(w_ffitype) def get_float(self, w_ffitype): + """ + Return type: lltype.Float + """ self.error(w_ffitype) def get_singlefloat(self, w_ffitype): + """ + Return type: lltype.SingleFloat + """ self.error(w_ffitype) def get_struct(self, w_datashape): """ - XXX: write nice docstring in the base class, must return an ULONG + Return type: lltype.Unsigned + (the address of the structure) """ return self.func.call(self.argchain, rffi.ULONG, is_struct=True) def get_void(self, w_ffitype): + """ + Return type: None + """ self.error(w_ffitype) diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -165,9 +165,8 @@ elif restype is libffi.types.schar: return rffi.cast(rffi.LONG, call(self.argchain, rffi.SIGNEDCHAR)) else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - + self.error(w_ffitype) + def get_unsigned(self, w_ffitype): return self.func.call(self.argchain, rffi.ULONG) @@ -185,9 +184,8 @@ elif restype is libffi.types.uchar: return rffi.cast(rffi.LONG, call(self.argchain, rffi.UCHAR)) else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - return space.wrap(intres) + self.error(w_ffitype) + def get_pointer(self, w_ffitype): ptrres = self.func.call(self.argchain, rffi.VOIDP) @@ -206,9 +204,6 @@ return self.func.call(self.argchain, rffi.FLOAT) def get_struct(self, w_datashape): - """ - XXX: write nice docstring in the base class, must return an ULONG - """ return self.func.call(self.argchain, rffi.ULONG, is_struct=True) def get_void(self, w_ffitype): From noreply at buildbot.pypy.org Fri Nov 11 16:32:40 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:40 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: rename dispatchers into the more descriptive {FromAppLevel, ToAppLevel}Converter, and add docstrings Message-ID: <20111111153240.E00518292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49317:4c293ed22500 Date: 2011-11-10 22:14 +0100 http://bitbucket.org/pypy/pypy/changeset/4c293ed22500/ Log: rename dispatchers into the more descriptive {FromAppLevel,ToAppLevel}Converter, and add docstrings diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -12,7 +12,7 @@ from pypy.rlib.rdynload import DLOpenError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import we_are_translated -from pypy.module._ffi.dispatcher import UnwrapDispatcher, WrapDispatcher +from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter def unwrap_ffitype(space, w_argtype, allow_void=False): @@ -48,7 +48,7 @@ self.func.name, expected, arg, given) # argchain = libffi.ArgChain() - argpusher = PushArgumentDispatcher(space, argchain, self.to_free) + argpusher = PushArgumentConverter(space, argchain, self.to_free) for i in range(expected): w_argtype = self.argtypes_w[i] w_arg = args_w[i] @@ -58,7 +58,7 @@ def call(self, space, args_w): self = jit.promote(self) argchain = self.build_argchain(space, args_w) - func_caller = CallFunctionDispatcher(space, self.func, argchain) + func_caller = CallFunctionConverter(space, self.func, argchain) return func_caller.do_and_wrap(self.w_restype) #return self._do_call(space, argchain) @@ -77,14 +77,14 @@ return space.wrap(rffi.cast(rffi.LONG, self.func.funcsym)) -class PushArgumentDispatcher(UnwrapDispatcher): +class PushArgumentConverter(FromAppLevelConverter): """ - A dispatcher used by W_FuncPtr to unwrap the app-level objects into + A converter used by W_FuncPtr to unwrap the app-level objects into low-level types and push them to the argchain. """ def __init__(self, space, argchain, to_free): - UnwrapDispatcher.__init__(self, space) + FromAppLevelConverter.__init__(self, space) self.argchain = argchain self.to_free = to_free @@ -129,14 +129,14 @@ self.argchain.arg_raw(ptrval) -class CallFunctionDispatcher(WrapDispatcher): +class CallFunctionConverter(ToAppLevelConverter): """ - A dispatcher used by W_FuncPtr to call the function, expect the result of + A converter used by W_FuncPtr to call the function, expect the result of a correct low-level type and wrap it to the corresponding app-level type """ def __init__(self, space, func, argchain): - WrapDispatcher.__init__(self, space) + ToAppLevelConverter.__init__(self, space) self.func = func self.argchain = argchain diff --git a/pypy/module/_ffi/dispatcher.py b/pypy/module/_ffi/type_converter.py rename from pypy/module/_ffi/dispatcher.py rename to pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/dispatcher.py +++ b/pypy/module/_ffi/type_converter.py @@ -5,7 +5,13 @@ from pypy.module._rawffi.structure import W_StructureInstance, W_Structure from pypy.module._ffi.interp_ffitype import app_types -class UnwrapDispatcher(object): +class FromAppLevelConverter(object): + """ + Unwrap an app-level object to the corresponding low-level type, following + the conversion rules which apply to the specified w_ffitype. Once + unwrapped, the value is passed to the corresponding handle_* method. + Subclasses should override the desired ones. + """ def __init__(self, space): self.space = space @@ -166,7 +172,13 @@ -class WrapDispatcher(object): +class ToAppLevelConverter(object): + """ + Wrap a low-level value to an app-level object, following the conversion + rules which apply to the specified w_ffitype. The value is got by calling + the get_* method corresponding to the w_ffitype. Subclasses should + override the desired ones. + """ def __init__(self, space): self.space = space From noreply at buildbot.pypy.org Fri Nov 11 16:32:42 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:42 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: infrastructure to test type_converter.py Message-ID: <20111111153242.16F4B8292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49318:69141176aa24 Date: 2011-11-10 22:27 +0100 http://bitbucket.org/pypy/pypy/changeset/69141176aa24/ Log: infrastructure to test type_converter.py diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -0,0 +1,42 @@ +from pypy.conftest import gettestobjspace +from pypy.module._ffi.interp_ffitype import app_types +from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter + +class DummyFromAppLevelConverter(FromAppLevelConverter): + + def handle_all(self, w_ffitype, w_obj, val): + self.lastval = val + + handle_signed = handle_all + handle_unsigned = handle_all + handle_pointer = handle_all + handle_char = handle_all + handle_unichar = handle_all + handle_longlong = handle_all + handle_char_p = handle_all + handle_unichar_p = handle_all + handle_float = handle_all + handle_singlefloat = handle_all + + def handle_struct(self, w_ffitype, w_structinstance): + self.lastval = w_structinstance + + def convert(self, w_ffitype, w_obj): + self.unwrap_and_do(w_ffitype, w_obj) + return self.lastval + + +class TestFromAppLevel(object): + + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_ffi',)) + converter = DummyFromAppLevelConverter(cls.space) + cls.from_app_level = staticmethod(converter.convert) + + def check(self, w_ffitype, w_obj, expected): + v = self.from_app_level(w_ffitype, w_obj) + assert v == expected + assert type(v) is type(expected) + + def test_int(self): + self.check(app_types.sint, self.space.wrap(42), 42) From noreply at buildbot.pypy.org Fri Nov 11 16:32:43 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 11 Nov 2011 16:32:43 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: more tests for type_converter Message-ID: <20111111153243.412048292E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49319:470f4d531072 Date: 2011-11-10 22:37 +0100 http://bitbucket.org/pypy/pypy/changeset/470f4d531072/ Log: more tests for type_converter diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -1,4 +1,6 @@ +import sys from pypy.conftest import gettestobjspace +from pypy.rlib.rarithmetic import r_uint from pypy.module._ffi.interp_ffitype import app_types from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter @@ -40,3 +42,19 @@ def test_int(self): self.check(app_types.sint, self.space.wrap(42), 42) + self.check(app_types.sint, self.space.wrap(sys.maxint+1), -sys.maxint-1) + self.check(app_types.sint, self.space.wrap(sys.maxint*2), -2) + + def test_uint(self): + self.check(app_types.uint, self.space.wrap(42), r_uint(42)) + self.check(app_types.uint, self.space.wrap(-1), r_uint(sys.maxint*2 +1)) + self.check(app_types.uint, self.space.wrap(sys.maxint*3), + r_uint(sys.maxint - 2)) + + def test_pointer(self): + # pointers are "unsigned" at applevel, but signed at interp-level (for + # no good reason, at interp-level Signed or Unsigned makes no + # difference for passing bits around) + self.check(app_types.void_p, self.space.wrap(42), 42) + self.check( + app_types.void_p, self.space.wrap(sys.maxint+1), -sys.maxint-1) From noreply at buildbot.pypy.org Fri Nov 11 16:36:32 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 11 Nov 2011 16:36:32 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add PPC64 ovf operations Message-ID: <20111111153632.16FDB8292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49320:a876a6857b55 Date: 2011-11-11 10:36 -0500 http://bitbucket.org/pypy/pypy/changeset/a876a6857b55/ Log: Add PPC64 ovf operations diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -56,15 +56,17 @@ def emit_int_mul(self, op, arglocs, regalloc): reg1, reg2, res = arglocs - self.mc.mullw(res.value, reg1.value, reg2.value) - - def emit_int_mul_ovf(self, op, arglocs, regalloc): - reg1, reg2, res = arglocs - self.mc.mullwo(res.value, reg1.value, reg2.value) + if IS_PPC_32: + self.mc.mullw(res.value, reg1.value, reg2.value) + else: + self.mc.mulld(res.value, reg1.value, reg2.value) def emit_int_mul_ovf(self, op, arglocs, regalloc): l0, l1, res = arglocs - self.mc.mullwo(res.value, l0.value, l1.value) + if IS_PPC_32: + self.mc.mullwo(res.value, l0.value, l1.value) + else: + self.mc.mulldo(res.value, l0.value, l1.value) def emit_int_floordiv(self, op, arglocs, regalloc): l0, l1, res = arglocs From noreply at buildbot.pypy.org Fri Nov 11 17:02:13 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:02:13 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: more updates Message-ID: <20111111160213.B69CC8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49321:6c74c22f9687 Date: 2011-11-10 14:24 -0500 http://bitbucket.org/pypy/pypy/changeset/6c74c22f9687/ Log: more updates diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -78,6 +78,9 @@ def descr_get_itemsize(self, space): return space.wrap(self.itemtype.get_element_size()) + def descr_get_shape(self, space): + return space.newtuple([]) + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpy", __new__ = interp2app(W_Dtype.descr__new__.im_func), @@ -88,7 +91,9 @@ num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), + shape = GetSetProperty(W_Dtype.descr_get_shape), ) +W_Dtype.typedef.acceptable_as_base_class = False class DtypeCache(object): def __init__(self, space): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -188,14 +188,14 @@ # Everything promotes to float, and bool promotes to everything. if dt2.kind == interp_dtype.FLOATINGLTR or dt1.kind == interp_dtype.BOOLLTR: # Float32 + 8-bit int = Float64 - if dt2.num == 11 and dt1.num_bytes >= 4: - return space.fromcache(interp_dtype.W_Float64Dtype) + if dt2.num == 11 and dt1.itemtype.get_element_size() >= 4: + return interp_dtype.get_dtype_cache(space).w_float64dtype return dt2 # for now this means mixing signed and unsigned if dt2.kind == interp_dtype.SIGNEDLTR: # if dt2 has a greater number of bytes, then just go with it - if dt1.num_bytes < dt2.num_bytes: + if dt1.itemtype.get_element_size() < dt2.itemtype.get_element_size(): return dt2 # we need to promote both dtypes dtypenum = dt2.num + 2 @@ -205,10 +205,11 @@ # UInt64 + signed = Float64 if dt2.num == 10: dtypenum += 1 - newdtype = interp_dtype.ALL_DTYPES[dtypenum] + newdtype = interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] - if newdtype.num_bytes > dt2.num_bytes or newdtype.kind == interp_dtype.FLOATINGLTR: - return space.fromcache(newdtype) + if (newdtype.itemtype.get_element_size() > dt2.itemtype.get_element_size() or + newdtype.kind == interp_dtype.FLOATINGLTR): + return newdtype else: # we only promoted to long on 32-bit or to longlong on 64-bit # this is really for dealing with the Long and Ulong dtypes @@ -216,7 +217,7 @@ dtypenum += 2 else: dtypenum += 3 - return space.fromcache(interp_dtype.ALL_DTYPES[dtypenum]) + return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): From noreply at buildbot.pypy.org Fri Nov 11 17:02:14 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:02:14 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: many more tests passing Message-ID: <20111111160214.E6F7F8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49322:603ee5d09325 Date: 2011-11-10 14:33 -0500 http://bitbucket.org/pypy/pypy/changeset/603ee5d09325/ Log: many more tests passing diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -31,6 +31,8 @@ return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) + descr_div = _binop_impl("divide") + descr_eq = _binop_impl("equal") @@ -105,6 +107,7 @@ __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), + __div__ = interp2app(W_GenericBox.descr_div), __eq__ = interp2app(W_GenericBox.descr_eq), ) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -192,7 +192,7 @@ types.Float64(), num=12, kind=FLOATINGLTR, - name="float32", + name="float64", char="d", alternate_constructors=[space.w_float], ) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -20,7 +20,7 @@ w_dtype = None for w_item in l: w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + if w_dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: break if w_dtype is None: w_dtype = space.w_None @@ -177,12 +177,12 @@ dtype = self.find_dtype() if self.find_size() > 1000: nums = [ - dtype.str_format(self.eval(index)) + dtype.itemtype.str_format(self.eval(index)) for index in range(3) ] nums.append("..." + "," * comma) nums.extend([ - dtype.str_format(self.eval(index)) + dtype.itemtype.str_format(self.eval(index)) for index in range(self.find_size() - 3, self.find_size()) ]) else: @@ -215,8 +215,8 @@ concrete = self.get_concrete() res = "array([" + ", ".join(concrete._getnums(False)) + "]" dtype = concrete.find_dtype() - if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and - dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or not self.find_size(): + if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and + dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or not self.find_size(): res += ", dtype=" + dtype.name res += ")" return space.wrap(res) @@ -282,7 +282,7 @@ slice_length, w_value) def descr_mean(self, space): - return space.wrap(space.float_w(self.descr_sum(space))/self.find_size()) + return space.div(self.descr_sum(space), space.wrap(self.find_size())) def _sliceloop(self, start, stop, step, source, dest): i = start diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -234,7 +234,7 @@ return dtype if promote_to_largest: if dt.kind == interp_dtype.BOOLLTR or dt.kind == interp_dtype.SIGNEDLTR: - return space.fromcache(interp_dtype.W_Int64Dtype) + return interp_dtype.get_dtype_cache(space).w_float64dtype elif dt.kind == interp_dtype.FLOATINGLTR: return interp_dtype.get_dtype_cache(space).w_float64dtype elif dt.kind == interp_dtype.UNSIGNEDLTR: diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -52,6 +52,9 @@ def add(self, v1, v2): return self.box(self.unbox(v1) + self.unbox(v2)) + def div(self, v1, v2): + return self.box(self.unbox(v1) / self.unbox(v2)) + def eq(self, v1, v2): return self.unbox(v1) == self.unbox(v2) @@ -78,6 +81,10 @@ def _coerce(self, space, w_item): return self.box(space.is_true(w_item)) + def str_format(self, box): + value = self.unbox(box) + return "True" if value else "False" + class Integer(Primitive): def _coerce(self, space, w_item): return self.box(space.int_w(space.int(w_item))) From noreply at buildbot.pypy.org Fri Nov 11 17:02:16 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:02:16 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: start re-adding many ops Message-ID: <20111111160216.1F1898292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49323:156f37cb96f3 Date: 2011-11-10 14:45 -0500 http://bitbucket.org/pypy/pypy/changeset/156f37cb96f3/ Log: start re-adding many ops diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -31,15 +31,18 @@ return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) + descr_add = _binop_impl("add") descr_div = _binop_impl("divide") - descr_eq = _binop_impl("equal") -class W_BoolBox(Wrappable): +class W_BoolBox(W_GenericBox): def __init__(self, value): self.value = value + def convert_to(self, dtype): + return dtype.box(self.value) + class W_NumberBox(W_GenericBox): def __init__(self, value): self.value = value @@ -107,6 +110,7 @@ __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), + __add__ = interp2app(W_GenericBox.descr_add), __div__ = interp2app(W_GenericBox.descr_div), __eq__ = interp2app(W_GenericBox.descr_eq), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -141,7 +141,7 @@ i = 0 while i < size: all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, size=size, i=i) - if not dtype.bool(self.eval(i)): + if not dtype.itemtype.bool(self.eval(i)): return False i += 1 return True diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -1,3 +1,5 @@ +import math + from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat @@ -5,6 +7,11 @@ from pypy.rpython.lltypesystem import lltype, rffi +def simple_op(func): + def dispatcher(self, v1, v2): + return self.box(func(self, self.unbox(v1), self.unbox(v2))) + return dispatcher + class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError @@ -52,12 +59,18 @@ def add(self, v1, v2): return self.box(self.unbox(v1) + self.unbox(v2)) - def div(self, v1, v2): - return self.box(self.unbox(v1) / self.unbox(v2)) + def sub(self, v1, v2): + return self.box(self.unbox(v1) - self.unbox(v2)) + + def mul(self, v1, v2): + return self.box(self.unbox(v1) * self.unbox(v2)) def eq(self, v1, v2): return self.unbox(v1) == self.unbox(v2) + def bool(self, v): + return bool(v) + def max(self, v1, v2): return self.box(max(self.unbox(v1), self.unbox(v2))) @@ -93,6 +106,12 @@ value = self.unbox(box) return str(value) + @simple_op + def div(self, v1, v2): + if v2 == 0: + return 0 + return v1 / v2 + class Int8(Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -141,6 +160,19 @@ value = self.unbox(box) return float2string(value, "g", rfloat.DTSF_STR_PRECISION) + @simple_op + def div(self, v1, v2): + try: + return v1 / v2 + except ZeroDivisionError: + if v1 == v2 == 0.0: + return rfloat.NAN + return rfloat.copysign(rfloat.INFINITY, v1 * v2) + + @simple_op + def pow(self, v1, v2): + return math.pow(v1, v2) + class Float32(Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box From noreply at buildbot.pypy.org Fri Nov 11 17:02:17 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:02:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: put a ton of ufuncs back, more methods on numpy.generic objs, a few more fixes Message-ID: <20111111160217.4C6458292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49324:256fe9f5b6bb Date: 2011-11-11 11:01 -0500 http://bitbucket.org/pypy/pypy/changeset/256fe9f5b6bb/ Log: put a ton of ufuncs back, more methods on numpy.generic objs, a few more fixes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -31,10 +31,29 @@ return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) + def _binop_right_impl(ufunc_name): + def impl(self, space, w_other): + from pypy.module.micronumpy import interp_ufuncs + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [w_other, self]) + return func_with_new_name(impl, "binop_right_%s_impl" % ufunc_name) + + def _unaryop_impl(ufunc_name): + def impl(self, space): + from pypy.module.micronumpy import interp_ufuncs + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) + return func_with_new_name(impl, "unaryop_%s_impl" % ufunc_name) + descr_add = _binop_impl("add") + descr_sub = _binop_impl("subtract") + descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") descr_eq = _binop_impl("equal") + descr_rmul = _binop_right_impl("multiply") + + descr_neg = _unaryop_impl("negative") + descr_abs = _unaryop_impl("absolute") + class W_BoolBox(W_GenericBox): def __init__(self, value): @@ -111,9 +130,16 @@ __float__ = interp2app(W_GenericBox.descr_float), __add__ = interp2app(W_GenericBox.descr_add), + __sub__ = interp2app(W_GenericBox.descr_sub), + __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + + __rmul__ = interp2app(W_GenericBox.descr_rmul), + __eq__ = interp2app(W_GenericBox.descr_eq), + __neg__ = interp2app(W_GenericBox.descr_neg), + __abs__ = interp2app(W_GenericBox.descr_abs), ) W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -120,8 +120,8 @@ self=self, dtype=dtype, size=size, i=i, result=result, cur_best=cur_best) - new_best = getattr(dtype, op_name)(cur_best, self.eval(i)) - if dtype.ne(new_best, cur_best): + new_best = getattr(dtype.itemtype, op_name)(cur_best, self.eval(i)) + if dtype.itemtype.ne(new_best, cur_best): result = i cur_best = new_best i += 1 @@ -154,7 +154,7 @@ i = 0 while i < size: any_driver.jit_merge_point(signature=self.signature, self=self, size=size, dtype=dtype, i=i) - if dtype.bool(self.eval(i)): + if dtype.itemtype.bool(self.eval(i)): return True i += 1 return False diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -1,6 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import unwrap_spec -from pypy.module.micronumpy.interp_dtype import W_Float64Dtype +from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.rlib.rstruct.runpack import runpack from pypy.rpython.lltypesystem import lltype, rffi @@ -18,7 +18,7 @@ raise OperationError(space.w_ValueError, space.wrap( "string length %d not divisable by %d" % (length, FLOAT_SIZE))) - dtype = space.fromcache(W_Float64Dtype) + dtype = get_dtype_cache(space).w_float64dtype a = SingleDimArray(number, dtype=dtype) start = 0 diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -29,7 +29,7 @@ def descr_get_identity(self, space): if self.identity is None: return space.w_None - return self.identity.wrap(space) + return self.identity def descr_call(self, space, __args__): if __args__.keywords or len(__args__.arguments_w) < self.argcount: @@ -108,7 +108,7 @@ promote_bools=self.promote_bools, ) if isinstance(w_obj, Scalar): - return self.func(res_dtype, w_obj.value.convert_to(res_dtype)).wrap(space) + return self.func(res_dtype, w_obj.value.convert_to(res_dtype)) new_sig = signature.Signature.find_sig([self.signature, w_obj.signature]) w_res = Call1(new_sig, res_dtype, w_obj) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -7,7 +7,12 @@ from pypy.rpython.lltypesystem import lltype, rffi -def simple_op(func): +def simple_unary_op(func): + def dispatcher(self, v): + return self.box(func(self, self.unbox(v))) + return dispatcher + +def simple_binary_op(func): def dispatcher(self, v1, v2): return self.box(func(self, self.unbox(v1), self.unbox(v2))) return dispatcher @@ -65,11 +70,36 @@ def mul(self, v1, v2): return self.box(self.unbox(v1) * self.unbox(v2)) + def pos(self, v): + return self.box(+self.unbox(v)) + + def neg(self, v): + return self.box(-self.unbox(v)) + + @simple_unary_op + def abs(self, v): + return abs(v) + def eq(self, v1, v2): return self.unbox(v1) == self.unbox(v2) + def ne(self, v1, v2): + return self.unbox(v1) != self.unbox(v2) + + def lt(self, v1, v2): + return self.unbox(v1) < self.unbox(v2) + + def le(self, v1, v2): + return self.unbox(v1) <= self.unbox(v2) + + def gt(self, v1, v2): + return self.unbox(v1) > self.unbox(v2) + + def ge(self, v1, v2): + return self.unbox(v1) >= self.unbox(v2) + def bool(self, v): - return bool(v) + return bool(self.unbox(v)) def max(self, v1, v2): return self.box(max(self.unbox(v1), self.unbox(v2))) @@ -106,12 +136,26 @@ value = self.unbox(box) return str(value) - @simple_op + @simple_binary_op def div(self, v1, v2): if v2 == 0: return 0 return v1 / v2 + @simple_binary_op + def mod(self, v1, v2): + return v1 % v2 + + @simple_unary_op + def sign(self, v): + if v > 0: + return 1 + elif v < 0: + return -1 + else: + assert v == 0 + return 0 + class Int8(Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -160,7 +204,7 @@ value = self.unbox(box) return float2string(value, "g", rfloat.DTSF_STR_PRECISION) - @simple_op + @simple_binary_op def div(self, v1, v2): try: return v1 / v2 @@ -169,10 +213,49 @@ return rfloat.NAN return rfloat.copysign(rfloat.INFINITY, v1 * v2) - @simple_op + @simple_binary_op + def mod(self, v1, v2): + return math.fmod(v1, v2) + + @simple_binary_op def pow(self, v1, v2): return math.pow(v1, v2) + @simple_binary_op + def copysign(self, v1, v2): + return math.copysign(v1, v2) + + @simple_unary_op + def sign(self, v): + if v == 0.0: + return 0.0 + return rfloat.copysign(1.0, v) + + @simple_unary_op + def fabs(self, v): + return math.fabs(v) + + @simple_unary_op + def reciprocal(self, v): + if v == 0.0: + return rfloat.copysign(rfloat.INFINITY, v) + return 1.0 / v + + @simple_unary_op + def floor(self, v): + return math.floor(v) + + @simple_unary_op + def exp(self, v): + try: + return math.exp(v) + except OverflowError: + return rfloat.INFINITY + + @simple_unary_op + def sin(self, v): + return math.sin(v) + class Float32(Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box From noreply at buildbot.pypy.org Fri Nov 11 17:25:47 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:25:47 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for rffi.cast(lltype.SingleFloat, ), if the input type is the same as the restype, just retunr the obj. Message-ID: <20111111162547.1660C8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49325:a780603cc0b5 Date: 2011-11-11 11:22 -0500 http://bitbucket.org/pypy/pypy/changeset/a780603cc0b5/ Log: Fix for rffi.cast(lltype.SingleFloat, ), if the input type is the same as the restype, just retunr the obj. diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1179,6 +1179,8 @@ cvalue = ord(cvalue) # character -> integer elif hasattr(RESTYPE, "_type") and issubclass(RESTYPE._type, base_int): cvalue = int(cvalue) + elif RESTYPE is TYPE1: + return value if not isinstance(cvalue, (int, long, float)): raise NotImplementedError("casting %r to %r" % (TYPE1, RESTYPE)) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,11 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + def test_rffi_sizeof(self): try: import ctypes From noreply at buildbot.pypy.org Fri Nov 11 17:25:48 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:25:48 +0100 (CET) Subject: [pypy-commit] pypy default: reorganize the conditions as armin suggested Message-ID: <20111111162548.440B68292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49326:fd3e370164e6 Date: 2011-11-11 11:24 -0500 http://bitbucket.org/pypy/pypy/changeset/fd3e370164e6/ Log: reorganize the conditions as armin suggested diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1166,7 +1166,9 @@ TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) @@ -1179,8 +1181,6 @@ cvalue = ord(cvalue) # character -> integer elif hasattr(RESTYPE, "_type") and issubclass(RESTYPE._type, base_int): cvalue = int(cvalue) - elif RESTYPE is TYPE1: - return value if not isinstance(cvalue, (int, long, float)): raise NotImplementedError("casting %r to %r" % (TYPE1, RESTYPE)) From noreply at buildbot.pypy.org Fri Nov 11 17:25:49 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:25:49 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20111111162549.7E9378292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49327:909a9ae3ef61 Date: 2011-11-11 11:25 -0500 http://bitbucket.org/pypy/pypy/changeset/909a9ae3ef61/ Log: merged upstream diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 11 17:40:29 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:40:29 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default in Message-ID: <20111111164029.B0C318292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49328:2544cc59dc76 Date: 2011-11-11 11:26 -0500 http://bitbucket.org/pypy/pypy/changeset/2544cc59dc76/ Log: merged default in diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -355,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1166,7 +1166,9 @@ TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -245,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): @@ -855,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,11 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + def test_rffi_sizeof(self): try: import ctypes diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -240,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 11 17:40:30 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:40:30 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for rffi.cast(lltype.Float, r_singlefloat(2)) Message-ID: <20111111164030.ECA048292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49329:5ca4800e4a5d Date: 2011-11-11 11:40 -0500 http://bitbucket.org/pypy/pypy/changeset/5ca4800e4a5d/ Log: Fix for rffi.cast(lltype.Float, r_singlefloat(2)) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,6 +1163,8 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -710,6 +710,9 @@ res = cast(lltype.SingleFloat, res) assert res == r_singlefloat(12.3) + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes From noreply at buildbot.pypy.org Fri Nov 11 17:46:08 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 17:46:08 +0100 (CET) Subject: [pypy-commit] pypy default: fix. Message-ID: <20111111164608.BC5718292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49330:bfd6b3f93365 Date: 2011-11-11 17:45 +0100 http://bitbucket.org/pypy/pypy/changeset/bfd6b3f93365/ Log: fix. diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -733,7 +733,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] From noreply at buildbot.pypy.org Fri Nov 11 17:46:09 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 17:46:09 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111111164609.EB9EE8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49331:bef362009dda Date: 2011-11-11 17:45 +0100 http://bitbucket.org/pypy/pypy/changeset/bef362009dda/ Log: merge heads diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes From noreply at buildbot.pypy.org Fri Nov 11 17:50:01 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:50:01 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default in Message-ID: <20111111165001.3C8A08292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49332:d5a2cb285c1d Date: 2011-11-11 11:41 -0500 http://bitbucket.org/pypy/pypy/changeset/d5a2cb285c1d/ Log: merged default in diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,6 +1163,8 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -710,6 +710,9 @@ res = cast(lltype.SingleFloat, res) assert res == r_singlefloat(12.3) + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes From noreply at buildbot.pypy.org Fri Nov 11 17:50:02 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 17:50:02 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: all tests pass! now it just needs to be made RPyhon Message-ID: <20111111165002.689408292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49333:ce6baab20b88 Date: 2011-11-11 11:49 -0500 http://bitbucket.org/pypy/pypy/changeset/ce6baab20b88/ Log: all tests pass! now it just needs to be made RPyhon diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -48,6 +48,7 @@ descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") descr_eq = _binop_impl("equal") + descr_lt = _binop_impl("less") descr_rmul = _binop_right_impl("multiply") @@ -115,7 +116,7 @@ pass class W_Float32Box(W_FloatingBox): - pass + get_dtype = dtype_getter("float32") class W_Float64Box(W_FloatingBox): get_dtype = dtype_getter("float64") @@ -137,6 +138,7 @@ __rmul__ = interp2app(W_GenericBox.descr_rmul), __eq__ = interp2app(W_GenericBox.descr_eq), + __lt__ = interp2app(W_GenericBox.descr_lt), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -256,6 +256,43 @@ def sin(self, v): return math.sin(v) + @simple_unary_op + def cos(self, v): + return math.cos(v) + + @simple_unary_op + def tan(self, v): + return math.tan(v) + + @simple_unary_op + def arcsin(self, v): + if not -1.0 <= v <= 1.0: + return rfloat.NAN + return math.asin(v) + + @simple_unary_op + def arccos(self, v): + if not -1.0 <= v <= 1.0: + return rfloat.NAN + return math.acos(v) + + @simple_unary_op + def arctan(self, v): + return math.atan(v) + + @simple_unary_op + def arcsinh(self, v): + return math.asinh(v) + + @simple_unary_op + def arctanh(self, v): + if v == 1.0 or v == -1.0: + return math.copysign(rfloat.INFINITY, v) + if not -1.0 < v < 1.0: + return rfloat.NAN + return math.atanh(v) + + class Float32(Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box From noreply at buildbot.pypy.org Fri Nov 11 18:21:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 18:21:17 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: Check in a test that fails (and works on "default") Message-ID: <20111111172117.397308292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: win64-stage1 Changeset: r49334:263bc62972cd Date: 2011-11-11 18:20 +0100 http://bitbucket.org/pypy/pypy/changeset/263bc62972cd/ Log: Check in a test that fails (and works on "default") diff --git a/pypy/objspace/std/test/test_longobject.py b/pypy/objspace/std/test/test_longobject.py --- a/pypy/objspace/std/test/test_longobject.py +++ b/pypy/objspace/std/test/test_longobject.py @@ -225,6 +225,7 @@ assert x ^ 0x555555555L == 0x5FFFFFFFFL def test_hash(self): + import sys # ints have the same hash as equal longs for i in range(-4, 14): assert hash(i) == hash(long(i)) @@ -233,6 +234,8 @@ assert hash(1234567890123456789L) in ( -1895067127, # with 32-bit platforms 1234567890123456789) # with 64-bit platforms + assert hash(long(sys.maxint)) == sys.maxint + assert hash(long(-sys.maxint-1)) == -sys.maxint-1 def test_math_log(self): import math From noreply at buildbot.pypy.org Fri Nov 11 19:11:23 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 19:11:23 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for test_sin_cos. Message-ID: <20111111181123.671A18292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49335:165f327efffa Date: 2011-11-11 17:59 +0000 http://bitbucket.org/pypy/pypy/changeset/165f327efffa/ Log: Fix for test_sin_cos. diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -49,10 +49,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -90,4 +87,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) From noreply at buildbot.pypy.org Fri Nov 11 19:11:24 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 11 Nov 2011 19:11:24 +0100 (CET) Subject: [pypy-commit] pypy default: Skip the fmod test for now. Message-ID: <20111111181124.98BCC8292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49336:c6c97bf46846 Date: 2011-11-11 18:08 +0000 http://bitbucket.org/pypy/pypy/changeset/c6c97bf46846/ Log: Skip the fmod test for now. diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -61,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math From noreply at buildbot.pypy.org Fri Nov 11 19:34:32 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 11 Nov 2011 19:34:32 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: removed a check in r_bigint that creates more problems now than it solves Message-ID: <20111111183432.E6DF28292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49337:c8e869f3da71 Date: 2011-11-11 19:34 +0100 http://bitbucket.org/pypy/pypy/changeset/c8e869f3da71/ Log: removed a check in r_bigint that creates more problems now than it solves diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -44,8 +44,6 @@ def _mask_digit(x): - if not we_are_translated(): - assert is_valid_int(x>>1), "overflow occurred!" return intmask(x & MASK) _mask_digit._annspecialcase_ = 'specialize:argtype(0)' From noreply at buildbot.pypy.org Fri Nov 11 20:04:27 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 20:04:27 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: first few translation fixes Message-ID: <20111111190427.1B6928292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49338:8c2614b0fc36 Date: 2011-11-11 14:04 -0500 http://bitbucket.org/pypy/pypy/changeset/8c2614b0fc36/ Log: first few translation fixes diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -224,7 +224,7 @@ raise NotImplementedError if (not isinstance(w_res, BaseArray) and not isinstance(w_res, interp_boxes.W_GenericBox)): - dtype = interp.space.fromcache(W_Float64Dtype) + dtype = get_dtype_cache(interp.space).w_float64dtype w_res = scalar_w(interp.space, dtype, w_res) return w_res @@ -334,9 +334,9 @@ if isinstance(w_res, BaseArray): return w_res if isinstance(w_res, FloatObject): - dtype = interp.space.fromcache(W_Float64Dtype) + dtype = get_dtype_cache(interp.space).w_float64dtype elif isinstance(w_res, BoolObject): - dtype = interp.space.fromcache(W_BoolDtype) + dtype = get_dtype_cache(interp.space).w_booldtype elif isinstance(w_res, interp_boxes.W_GenericBox): dtype = w_res.get_dtype(interp.space) else: From noreply at buildbot.pypy.org Fri Nov 11 20:13:58 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 20:13:58 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: more progress towards translating Message-ID: <20111111191358.E52498292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49339:b42b84e29282 Date: 2011-11-11 14:13 -0500 http://bitbucket.org/pypy/pypy/changeset/b42b84e29282/ Log: more progress towards translating diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -238,7 +238,7 @@ elif dt.kind == interp_dtype.FLOATINGLTR: return interp_dtype.get_dtype_cache(space).w_float64dtype elif dt.kind == interp_dtype.UNSIGNEDLTR: - return space.fromcache(interp_dtype.W_UInt64Dtype) + return interp_dtype.get_dtype_cache(space).w_uint64dtype else: assert False return dt diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -25,7 +25,8 @@ exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ arctanh = _unimplemented_ufunc -class Primitive(BaseType): +class Primitive(object): + _mixin_ = True def get_element_size(self): return rffi.sizeof(self.T) @@ -47,18 +48,18 @@ def read(self, ptr, offset): ptr = rffi.ptradd(ptr, offset) return self.box( - rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] + rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] ) def store(self, ptr, offset, box): value = self.unbox(box) ptr = rffi.ptradd(ptr, offset) - rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] = value + rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] = value def fill(self, ptr, box, n): value = self.unbox(box) for i in xrange(n): - rffi.cast(lltype.Ptr(lltype.Array(self.T, hints={"nolength": True})), ptr)[0] = value + rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] = value ptr = rffi.ptradd(ptr, self.get_element_size()) def add(self, v1, v2): @@ -107,7 +108,7 @@ def min(self, v1, v2): return self.box(min(self.unbox(v1), self.unbox(v2))) -class Bool(Primitive): +class Bool(BaseType, Primitive): T = lltype.Bool BoxType = interp_boxes.W_BoolBox @@ -129,6 +130,8 @@ return "True" if value else "False" class Integer(Primitive): + _mixin_ = True + def _coerce(self, space, w_item): return self.box(space.int_w(space.int(w_item))) @@ -156,47 +159,49 @@ assert v == 0 return 0 -class Int8(Integer): +class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box -class UInt8(Integer): +class UInt8(BaseType, Integer): T = rffi.UCHAR BoxType = interp_boxes.W_UInt8Box -class Int16(Integer): +class Int16(BaseType, Integer): T = rffi.SHORT BoxType = interp_boxes.W_Int16Box -class UInt16(Integer): +class UInt16(BaseType, Integer): T = rffi.USHORT BoxType = interp_boxes.W_UInt16Box -class Int32(Integer): +class Int32(BaseType, Integer): T = rffi.INT BoxType = interp_boxes.W_Int32Box -class UInt32(Integer): +class UInt32(BaseType, Integer): T = rffi.UINT BoxType = interp_boxes.W_UInt32Box -class Long(Integer): +class Long(BaseType, Integer): T = rffi.LONG BoxType = interp_boxes.W_LongBox -class ULong(Integer): +class ULong(BaseType, Integer): T = rffi.ULONG BoxType = interp_boxes.W_ULongBox -class Int64(Integer): +class Int64(BaseType, Integer): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box -class UInt64(Integer): +class UInt64(BaseType, Integer): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box class Float(Primitive): + _mixin_ = True + def _coerce(self, space, w_item): return self.box(space.float_w(space.float(w_item))) @@ -293,10 +298,10 @@ return math.atanh(v) -class Float32(Float): +class Float32(BaseType, Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box -class Float64(Float): +class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box \ No newline at end of file From noreply at buildbot.pypy.org Fri Nov 11 20:23:13 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 20:23:13 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: more translation fixes Message-ID: <20111111192313.CA9A68292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49340:fb125d8a0233 Date: 2011-11-11 14:22 -0500 http://bitbucket.org/pypy/pypy/changeset/fb125d8a0233/ Log: more translation fixes diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -72,8 +72,8 @@ def float(self, w_obj): if isinstance(w_obj, FloatObject): return w_obj - assert isinstance(w_obj, interp_boxes.W_FloatingBox) - return FloatObject(w_obj.value) + assert isinstance(w_obj, interp_boxes.W_GenericBox) + return self.float(w_obj.descr_float(self)) def float_w(self, w_obj): assert isinstance(w_obj, FloatObject) @@ -90,7 +90,7 @@ if isinstance(w_obj, IntObject): return w_obj assert isinstance(w_obj, interp_boxes.W_GenericBox) - return IntObject(int(w_obj.value)) + return self.int(w_obj.descr_int(self)) def is_true(self, w_obj): assert isinstance(w_obj, BoolObject) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -15,7 +15,18 @@ return getattr(get_dtype_cache(space), "w_%sdtype" % name) return get_dtype +class PrimitiveBox(object): + _mixin_ = True + + def __init__(self, value): + self.value = value + + def convert_to(self, dtype): + return dtype.box(self.value) + class W_GenericBox(Wrappable): + _attrs_ = () + def descr_repr(self, space): return space.wrap(self.get_dtype(space).itemtype.str_format(self)) @@ -56,19 +67,11 @@ descr_abs = _unaryop_impl("absolute") -class W_BoolBox(W_GenericBox): - def __init__(self, value): - self.value = value - - def convert_to(self, dtype): - return dtype.box(self.value) +class W_BoolBox(W_GenericBox, PrimitiveBox): + pass class W_NumberBox(W_GenericBox): - def __init__(self, value): - self.value = value - - def convert_to(self, dtype): - return dtype.box(self.value) + pass class W_IntegerBox(W_NumberBox): pass @@ -79,34 +82,34 @@ class W_UnsignedIntgerBox(W_IntegerBox): pass -class W_Int8Box(W_SignedIntegerBox): +class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): pass -class W_UInt8Box(W_UnsignedIntgerBox): +class W_UInt8Box(W_UnsignedIntgerBox, PrimitiveBox): pass -class W_Int16Box(W_SignedIntegerBox): +class W_Int16Box(W_SignedIntegerBox, PrimitiveBox): pass -class W_UInt16Box(W_UnsignedIntgerBox): +class W_UInt16Box(W_UnsignedIntgerBox, PrimitiveBox): pass -class W_Int32Box(W_SignedIntegerBox): +class W_Int32Box(W_SignedIntegerBox, PrimitiveBox): pass -class W_UInt32Box(W_UnsignedIntgerBox): +class W_UInt32Box(W_UnsignedIntgerBox, PrimitiveBox): pass -class W_LongBox(W_SignedIntegerBox): +class W_LongBox(W_SignedIntegerBox, PrimitiveBox): get_dtype = dtype_getter("long") -class W_ULongBox(W_UnsignedIntgerBox): +class W_ULongBox(W_UnsignedIntgerBox, PrimitiveBox): pass -class W_Int64Box(W_SignedIntegerBox): +class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): get_dtype = dtype_getter("int64") -class W_UInt64Box(W_UnsignedIntgerBox): +class W_UInt64Box(W_UnsignedIntgerBox, PrimitiveBox): pass class W_InexactBox(W_NumberBox): @@ -115,10 +118,10 @@ class W_FloatingBox(W_InexactBox): pass -class W_Float32Box(W_FloatingBox): +class W_Float32Box(W_FloatingBox, PrimitiveBox): get_dtype = dtype_getter("float32") -class W_Float64Box(W_FloatingBox): +class W_Float64Box(W_FloatingBox, PrimitiveBox): get_dtype = dtype_getter("float64") From noreply at buildbot.pypy.org Fri Nov 11 20:57:04 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 11 Nov 2011 20:57:04 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Align PPC64 stack Message-ID: <20111111195704.2461A8292E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49341:da65114e1734 Date: 2011-11-11 14:56 -0500 http://bitbucket.org/pypy/pypy/changeset/da65114e1734/ Log: Align PPC64 stack diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -3,7 +3,8 @@ import pypy.jit.backend.ppc.ppcgen.condition as c import pypy.jit.backend.ppc.ppcgen.register as r from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, - GPR_SAVE_AREA, BACKCHAIN_SIZE) + GPR_SAVE_AREA, BACKCHAIN_SIZE, + MAX_REG_PARAMS) from pypy.jit.metainterp.history import LoopToken, AbstractFailDescr, FLOAT from pypy.rlib.objectmodel import we_are_translated @@ -529,7 +530,9 @@ self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) else: # ABI fixed frame + 8 GPRs + arguments - stack_space = (6 + 8 + len(stack_args)) * WORD + stack_space = (6 + MAX_REG_PARAMS + len(stack_args)) * WORD + while stack_space % (2 * WORD) != 0: + stack_space += 1 self.mc.stdu(r.SP.value, r.SP.value, -stack_space) self.mc.mflr(r.r0.value) self.mc.std(r.r0.value, r.SP.value, stack_space + 2 * WORD) From noreply at buildbot.pypy.org Fri Nov 11 21:03:32 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 11 Nov 2011 21:03:32 +0100 (CET) Subject: [pypy-commit] pypy default: imp.find_module() now returns the file object for extension modules. Message-ID: <20111111200332.025038292E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49342:e6473fd2fde5 Date: 2011-11-06 22:39 +0100 http://bitbucket.org/pypy/pypy/changeset/e6473fd2fde5/ Log: imp.find_module() now returns the file object for extension modules. diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. From noreply at buildbot.pypy.org Fri Nov 11 21:03:33 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 11 Nov 2011 21:03:33 +0100 (CET) Subject: [pypy-commit] pypy default: Better presentation of the docstring Message-ID: <20111111200333.453A78292E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49343:d85b35219396 Date: 2011-11-11 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/d85b35219396/ Log: Better presentation of the docstring diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] From noreply at buildbot.pypy.org Fri Nov 11 21:03:35 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 11 Nov 2011 21:03:35 +0100 (CET) Subject: [pypy-commit] pypy default: Merge heads Message-ID: <20111111200335.42B6C8292E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49344:73b76d76352b Date: 2011-11-11 21:02 +0100 http://bitbucket.org/pypy/pypy/changeset/73b76d76352b/ Log: Merge heads diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -351,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -445,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -234,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -6,6 +6,7 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.rlib.rarithmetic import LONG_BIT @@ -13,30 +14,10 @@ """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -179,68 +160,75 @@ r = self.getvalue(op.result) r.intbound.intersect(b) + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) + self.emit_operation(op) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) + def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -6,7 +6,7 @@ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -249,6 +249,8 @@ CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -260,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -327,13 +330,13 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() self.interned_ints = {} @@ -341,7 +344,6 @@ self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -363,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -497,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -681,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -4964,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -6281,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6296,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -2,7 +2,8 @@ from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -529,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -543,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) @@ -214,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,32 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +152,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +186,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +203,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -245,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): @@ -855,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough @@ -238,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 11 21:18:21 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 21:18:21 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: major progress towards translating Message-ID: <20111111201821.28E198292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49345:a86f783b009d Date: 2011-11-11 15:18 -0500 http://bitbucket.org/pypy/pypy/changeset/a86f783b009d/ Log: major progress towards translating diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -177,8 +177,9 @@ def execute(self, interp): arr = interp.variables[self.name] - w_index = self.index.execute(interp).eval(0) - w_val = self.expr.execute(interp).eval(0) + w_index = self.index.execute(interp) + w_val = self.expr.execute(interp) + assert isinstance(arr, BaseArray) arr.descr_setitem(interp.space, w_index, w_val) def __repr__(self): diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -31,10 +31,14 @@ return space.wrap(self.get_dtype(space).itemtype.str_format(self)) def descr_int(self, space): - return space.wrap(self.convert_to(W_LongBox.get_dtype(space)).value) + box = self.convert_to(W_LongBox.get_dtype(space)) + assert isinstance(box, W_LongBox) + return space.wrap(box.value) def descr_float(self, space): - return space.wrap(self.convert_to(W_Float64Box.get_dtype(space)).value) + box = self.convert_to(W_Float64Box.get_dtype(space)) + assert isinstance(box, W_Float64Box) + return space.wrap(box.value) def _binop_impl(ufunc_name): def impl(self, space, w_other): @@ -71,7 +75,7 @@ pass class W_NumberBox(W_GenericBox): - pass + _attrs_ = () class W_IntegerBox(W_NumberBox): pass @@ -113,10 +117,10 @@ pass class W_InexactBox(W_NumberBox): - pass + _attrs_ = () class W_FloatingBox(W_InexactBox): - pass + _attrs_ = () class W_Float32Box(W_FloatingBox, PrimitiveBox): get_dtype = dtype_getter("float32") diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,8 +1,8 @@ from pypy.jit.metainterp.test.support import LLJitMixin -from pypy.module.micronumpy import interp_ufuncs, signature +from pypy.module.micronumpy import interp_boxes, interp_ufuncs, signature from pypy.module.micronumpy.compile import (FakeSpace, FloatObject, IntObject, numpy_compile, BoolObject) -from pypy.module.micronumpy.interp_numarray import (SingleDimArray, +from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, SingleDimSlice) from pypy.rlib.nonconst import NonConstant from pypy.rpython.annlowlevel import llstr, hlstr @@ -15,21 +15,22 @@ class TestNumpyJIt(LLJitMixin): graph = None interp = None - + def run(self, code): space = FakeSpace() - + def f(code): interp = numpy_compile(hlstr(code)) interp.run(space) res = interp.results[-1] - w_res = res.eval(0).wrap(interp.space) - if isinstance(w_res, BoolObject): - return float(w_res.boolval) - elif isinstance(w_res, FloatObject): - return w_res.floatval - elif isinstance(w_res, IntObject): - return w_res.intval + assert isinstance(res, BaseArray) + w_res = res.eval(0) + if isinstance(w_res, interp_boxes.W_BoolBox): + return float(w_res.value) + elif isinstance(w_res, interp_boxes.W_Float64Box): + return w_res.value + elif isinstance(w_res, interp_boxes.W_LongBox): + return w_res.value else: return -42. @@ -186,10 +187,10 @@ def setup_class(cls): from pypy.module.micronumpy.compile import FakeSpace from pypy.module.micronumpy.interp_dtype import W_Float64Dtype - + cls.space = FakeSpace() cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - + def test_slice(self): def f(i): step = 3 diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -3,6 +3,7 @@ from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT from pypy.rpython.lltypesystem import lltype, rffi @@ -30,6 +31,7 @@ def get_element_size(self): return rffi.sizeof(self.T) + @specialize.argtype(1) def box(self, value): return self.BoxType(rffi.cast(self.T, value)) @@ -115,6 +117,7 @@ True = BoxType(True) False = BoxType(False) + @specialize.argtype(1) def box(self, value): box = Primitive.box(self, value) if box.value: From noreply at buildbot.pypy.org Fri Nov 11 21:37:57 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 21:37:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: comment these out, they're in the wrong place in the MRO with the mixins Message-ID: <20111111203757.04D248292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49346:892cab276a98 Date: 2011-11-11 15:37 -0500 http://bitbucket.org/pypy/pypy/changeset/892cab276a98/ Log: comment these out, they're in the wrong place in the MRO with the mixins diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -266,7 +266,6 @@ def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func): - assert hasattr(types.BaseType, op_name) if argcount == 1: def impl(res_dtype, value): return getattr(res_dtype.itemtype, op_name)(value) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -21,10 +21,10 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - arctanh = _unimplemented_ufunc + # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ + # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ + # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ + # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True From noreply at buildbot.pypy.org Fri Nov 11 21:50:29 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 11 Nov 2011 21:50:29 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix _random.get_randbits() Message-ID: <20111111205029.C4BB18292E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r49347:7ea6a7e448e0 Date: 2011-11-11 21:48 +0100 http://bitbucket.org/pypy/pypy/changeset/7ea6a7e448e0/ Log: Fix _random.get_randbits() diff --git a/pypy/module/_random/interp_random.py b/pypy/module/_random/interp_random.py --- a/pypy/module/_random/interp_random.py +++ b/pypy/module/_random/interp_random.py @@ -91,11 +91,11 @@ raise OperationError(space.w_ValueError, strerror) needed = (k - 1) // rbigint.SHIFT + 1 result = rbigint.rbigint([rbigint.NULLDIGIT] * needed, 1) - for i in range(needed - 1): - # This loses some random digits, but not too many since SHIFT=31 - value = self._rnd.genrand32() + for i in range(needed): + # This wastes some random digits, but not too many since SHIFT=31 + value = self._rnd.genrand32() & rbigint.MASK if i < needed - 1: - result.setdigit(i, value & rbigint.MASK) + result.setdigit(i, value) else: result.setdigit(i, value >> ((needed * rbigint.SHIFT) - k)) return space.newlong_from_rbigint(result) diff --git a/pypy/module/_random/test/test_random.py b/pypy/module/_random/test/test_random.py --- a/pypy/module/_random/test/test_random.py +++ b/pypy/module/_random/test/test_random.py @@ -98,6 +98,7 @@ for n in range(10, 1000, 15): k = rnd.getrandbits(n) assert 0 <= k < 2 ** n + assert rnd.getrandbits(30) != 0 # Fails every 1e10 runs def test_subclass(self): import _random From noreply at buildbot.pypy.org Fri Nov 11 21:54:51 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 11 Nov 2011 21:54:51 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20111111205451.402698292E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r49348:8ca9a3739338 Date: 2011-11-11 21:54 +0100 http://bitbucket.org/pypy/pypy/changeset/8ca9a3739338/ Log: hg merge default diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -351,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -445,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4999,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3678,3 +3678,16 @@ assert x == -42 x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -135,6 +135,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -758,6 +758,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -769,11 +777,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -788,6 +792,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (3, 2, 2, "final", 0) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model MethodTable = ObjSpace.MethodTable[:] @@ -151,7 +151,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -225,15 +225,12 @@ def _to_bytes(space, w_bytearray): return space.wrapbytes(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -119,10 +119,11 @@ return data def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." if not space.is_w(space.type(w_hexstring), space.w_unicode): raise OperationError(space.w_TypeError, space.wrap( "must be str, not %s" % space.type(w_hexstring).name)) diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) @@ -214,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,32 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +152,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +186,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +203,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -245,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): @@ -855,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough @@ -238,10 +240,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 11 23:47:15 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 11 Nov 2011 23:47:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: fix Message-ID: <20111111224715.9393C8292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49349:51ea0f525951 Date: 2011-11-11 17:47 -0500 http://bitbucket.org/pypy/pypy/changeset/51ea0f525951/ Log: fix diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -177,8 +177,8 @@ def execute(self, interp): arr = interp.variables[self.name] - w_index = self.index.execute(interp) - w_val = self.expr.execute(interp) + w_index = self.index.execute(interp).eval(0) + w_val = self.expr.execute(interp).eval(0) assert isinstance(arr, BaseArray) arr.descr_setitem(interp.space, w_index, w_val) From noreply at buildbot.pypy.org Sat Nov 12 01:07:37 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 12 Nov 2011 01:07:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: don't try to do arithmatic on small types Message-ID: <20111112000737.79E1982A87@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49351:bb6c5b0c09a7 Date: 2011-11-11 19:03 -0500 http://bitbucket.org/pypy/pypy/changeset/bb6c5b0c09a7/ Log: don't try to do arithmatic on small types diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -4,18 +4,29 @@ from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT +from pypy.rlib.rarithmetic import LONG_BIT, widen from pypy.rpython.lltypesystem import lltype, rffi def simple_unary_op(func): def dispatcher(self, v): - return self.box(func(self, self.unbox(v))) + return self.box( + func( + self, + self.for_computation(self.unbox(v)) + ) + ) return dispatcher def simple_binary_op(func): def dispatcher(self, v1, v2): - return self.box(func(self, self.unbox(v1), self.unbox(v2))) + return self.box( + func( + self, + self.for_computation(self.unbox(v1)), + self.for_computation(self.unbox(v2)), + ) + ) return dispatcher class BaseType(object): @@ -64,20 +75,25 @@ rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] = value ptr = rffi.ptradd(ptr, self.get_element_size()) + @simple_binary_op def add(self, v1, v2): - return self.box(self.unbox(v1) + self.unbox(v2)) + return v1 + v2 + @simple_binary_op def sub(self, v1, v2): - return self.box(self.unbox(v1) - self.unbox(v2)) + return v1 - v2 + @simple_binary_op def mul(self, v1, v2): - return self.box(self.unbox(v1) * self.unbox(v2)) + return v1 * v2 + @simple_unary_op def pos(self, v): - return self.box(+self.unbox(v)) + return +v + @simple_unary_op def neg(self, v): - return self.box(-self.unbox(v)) + return -v @simple_unary_op def abs(self, v): @@ -104,11 +120,13 @@ def bool(self, v): return bool(self.unbox(v)) + @simple_binary_op def max(self, v1, v2): - return self.box(max(self.unbox(v1), self.unbox(v2))) + return max(v1, v2) + @simple_binary_op def min(self, v1, v2): - return self.box(min(self.unbox(v1), self.unbox(v2))) + return min(v1, v2) class Bool(BaseType, Primitive): T = lltype.Bool @@ -132,6 +150,9 @@ value = self.unbox(box) return "True" if value else "False" + def for_computation(self, v): + return int(v) + class Integer(Primitive): _mixin_ = True @@ -142,6 +163,9 @@ value = self.unbox(box) return str(value) + def for_computation(self, v): + return widen(v) + @simple_binary_op def div(self, v1, v2): if v2 == 0: @@ -212,6 +236,9 @@ value = self.unbox(box) return float2string(value, "g", rfloat.DTSF_STR_PRECISION) + def for_computation(self, v): + return float(v) + @simple_binary_op def div(self, v1, v2): try: From noreply at buildbot.pypy.org Sat Nov 12 01:07:36 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 12 Nov 2011 01:07:36 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: fix-ish, need to talk to fijal about why these mehods have different signatures than expected Message-ID: <20111112000736.4EE418292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49350:72d246a1c7a8 Date: 2011-11-11 18:20 -0500 http://bitbucket.org/pypy/pypy/changeset/72d246a1c7a8/ Log: fix-ish, need to talk to fijal about why these mehods have different signatures than expected diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -177,10 +177,12 @@ def execute(self, interp): arr = interp.variables[self.name] - w_index = self.index.execute(interp).eval(0) - w_val = self.expr.execute(interp).eval(0) + w_index = self.index.execute(interp) + assert isinstance(w_index, BaseArray) + w_val = self.expr.execute(interp) + assert isinstance(w_val, BaseArray) assert isinstance(arr, BaseArray) - arr.descr_setitem(interp.space, w_index, w_val) + arr.descr_setitem(interp.space, w_index.eval(0), w_val.eval(0)) def __repr__(self): return "%s[%r] = %r" % (self.name, self.index, self.expr) From noreply at buildbot.pypy.org Sat Nov 12 01:07:38 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 12 Nov 2011 01:07:38 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: stuff seems to translate! Message-ID: <20111112000738.A4CE382A88@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49352:d190df7f6bfb Date: 2011-11-11 19:07 -0500 http://bitbucket.org/pypy/pypy/changeset/d190df7f6bfb/ Log: stuff seems to translate! diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -118,7 +118,7 @@ return self.unbox(v1) >= self.unbox(v2) def bool(self, v): - return bool(self.unbox(v)) + return bool(self.for_computation(self.unbox(v))) @simple_binary_op def max(self, v1, v2): From noreply at buildbot.pypy.org Sat Nov 12 10:17:16 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 12 Nov 2011 10:17:16 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: hg merge jit-refactor-tests Message-ID: <20111112091716.54F428292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49353:5dcec5762796 Date: 2011-11-12 10:01 +0100 http://bitbucket.org/pypy/pypy/changeset/5dcec5762796/ Log: hg merge jit-refactor-tests diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -305,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -445,7 +449,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -460,6 +460,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5003,6 +5003,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -184,6 +184,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -213,12 +214,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1344,10 +1344,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3644,3 +3644,16 @@ # bridge as a preamble since it does not start with a # label. The alternative would be to have all such bridges # start with labels. I dont know which is better... + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/tl/spli/test/test_jit.py b/pypy/jit/tl/spli/test/test_jit.py --- a/pypy/jit/tl/spli/test/test_jit.py +++ b/pypy/jit/tl/spli/test/test_jit.py @@ -36,7 +36,7 @@ i = i + 1 return i self.interpret(f, []) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_bridge(self): py.test.skip('We currently cant virtualize across bridges') @@ -52,7 +52,7 @@ return total self.interpret(f, [1, 10]) - self.check_loops(new_with_vtable=0) + self.check_resops(new_with_vtable=0) def test_bridge_bad_case(self): py.test.skip('We currently cant virtualize across bridges') @@ -67,7 +67,7 @@ return a + b self.interpret(f, [1, 10]) - self.check_loops(new_with_vtable=1) # XXX should eventually be 0? + self.check_resops(new_with_vtable=1) # XXX should eventually be 0? # I think it should be either 0 or 2, 1 makes little sense # If the loop after entering goes first time to the bridge, a # is rewrapped again, without preserving the identity. I'm not diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -51,9 +51,11 @@ b = a + a b -> 3 """) - self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_class': 7, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == 3 + 3 def test_floatadd(self): @@ -62,9 +64,11 @@ a -> 3 """) assert result == 3 + 3 - self.check_loops({"getarrayitem_raw": 1, "float_add": 1, - "setarrayitem_raw": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_class': 7, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 2}) def test_sum(self): result = self.run(""" @@ -73,9 +77,10 @@ sum(b) """) assert result == 2 * sum(range(30)) - self.check_loops({"getarrayitem_raw": 2, "float_add": 2, - "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'guard_class': 7, 'getfield_gc': 11, + 'guard_true': 2, 'jump': 2, 'getarrayitem_raw': 4, + 'guard_value': 2, 'guard_isnull': 1, 'int_lt': 2, + 'float_add': 4, 'int_add': 2}) def test_prod(self): result = self.run(""" @@ -87,9 +92,10 @@ for i in range(30): expected *= i * 2 assert result == expected - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_mul": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) + self.check_resops({'int_lt': 2, 'getfield_gc': 11, 'guard_class': 7, + 'float_mul': 2, 'guard_true': 2, 'guard_isnull': 1, + 'jump': 2, 'getarrayitem_raw': 4, 'float_add': 2, + 'int_add': 2, 'guard_value': 2}) def test_max(self): py.test.skip("broken, investigate") @@ -125,10 +131,10 @@ any(b) """) assert result == 1 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1, - "guard_false": 1}) + self.check_resops({'int_lt': 2, 'getfield_gc': 9, 'guard_class': 7, + 'guard_value': 1, 'int_add': 2, 'guard_true': 2, + 'guard_isnull': 1, 'jump': 2, 'getarrayitem_raw': 4, + 'float_add': 2, 'guard_false': 2, 'float_ne': 2}) def test_already_forced(self): result = self.run(""" @@ -142,9 +148,12 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. - self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, - "setarrayitem_raw": 2, "int_add": 2, - "int_lt": 2, "guard_true": 2, "jump": 2}) + self.check_resops({'setarrayitem_raw': 4, 'guard_nonnull': 1, + 'getfield_gc': 23, 'guard_class': 14, + 'guard_true': 4, 'float_mul': 2, 'guard_isnull': 2, + 'jump': 4, 'int_lt': 4, 'float_add': 2, + 'int_add': 4, 'guard_value': 2, + 'getarrayitem_raw': 4}) def test_ufunc(self): result = self.run(""" @@ -154,10 +163,11 @@ c -> 3 """) assert result == -6 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, - "setarrayitem_raw": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, "jump": 1, - }) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 15, + 'guard_class': 9, 'float_neg': 2, 'guard_true': 2, + 'guard_isnull': 2, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_add': 2, 'guard_value': 2, + 'getarrayitem_raw': 4}) def test_specialization(self): self.run(""" @@ -202,9 +212,11 @@ return v.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 1, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 9, + 'guard_true': 2, 'guard_isnull': 1, 'jump': 2, + 'int_lt': 2, 'float_add': 2, 'int_mul': 2, + 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == f(5) def test_slice2(self): @@ -224,9 +236,11 @@ return v.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'setarrayitem_raw': 2, 'getfield_gc': 11, + 'guard_true': 2, 'guard_isnull': 1, 'jump': 2, + 'int_lt': 2, 'float_add': 2, 'int_mul': 4, + 'int_add': 2, 'guard_value': 1, + 'getarrayitem_raw': 4}) assert result == f(5) def test_setslice(self): @@ -243,10 +257,12 @@ return ar.get_concrete().eval(3).val result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'getarrayitem_raw': 2, - 'float_add' : 1, - 'setarrayitem_raw': 1, 'int_add': 2, - 'int_lt': 1, 'guard_true': 1, 'jump': 1}) + self.check_resops({'int_is_true': 1, 'setarrayitem_raw': 2, + 'guard_nonnull': 1, 'getfield_gc': 9, + 'guard_false': 1, 'guard_true': 3, + 'guard_isnull': 1, 'jump': 2, 'int_lt': 2, + 'float_add': 2, 'int_gt': 1, 'int_add': 4, + 'guard_value': 1, 'getarrayitem_raw': 4}) assert result == 11.0 def test_int32_sum(self): diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -245,7 +245,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -60,3 +61,10 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_StringObject + + space = gettestobjspace(withstrbuf=True) + assert space._get_interplevel_cls(space.w_str) is W_StringObject + diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) @@ -214,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rlib/rsre/test/test_zjit.py b/pypy/rlib/rsre/test/test_zjit.py --- a/pypy/rlib/rsre/test/test_zjit.py +++ b/pypy/rlib/rsre/test/test_zjit.py @@ -96,7 +96,7 @@ def test_fast_search(self): res = self.meta_interp_search(r"", "eua") assert res == 15 - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) def test_regular_search(self): res = self.meta_interp_search(r"<\w+>", "eiofweoxdiwhdohua") @@ -120,7 +120,7 @@ def test_aorbstar(self): res = self.meta_interp_match("(a|b)*a", "a" * 100) assert res == 100 - self.check_loops(guard_value=0) + self.check_resops(guard_value=0) # group guards tests @@ -165,4 +165,4 @@ def test_find_repetition_end_fastpath(self): res = self.meta_interp_search(r"b+", "a"*30 + "b") assert res == 30 - self.check_loops(call=0) + self.check_resops(call=0) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,32 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if use_library_isinf_isnan and not jit.we_are_jitted(): + return not _lib_finite(y) and not _lib_isnan(y) + return (y + VERY_LARGE_FLOAT) == y def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +152,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +186,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +203,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +227,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +242,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +269,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +289,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +327,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +387,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_canraise.py b/pypy/translator/backendopt/test/test_canraise.py --- a/pypy/translator/backendopt/test/test_canraise.py +++ b/pypy/translator/backendopt/test/test_canraise.py @@ -201,6 +201,16 @@ result = ra.can_raise(ggraph.startblock.operations[0]) assert result + def test_ll_arraycopy(self): + from pypy.rpython.lltypesystem import rffi + from pypy.rlib.rgc import ll_arraycopy + def f(a, b, c, d, e): + ll_arraycopy(a, b, c, d, e) + t, ra = self.translate(f, [rffi.CCHARP, rffi.CCHARP, int, int, int]) + fgraph = graphof(t, f) + result = ra.can_raise(fgraph.startblock.operations[0]) + assert not result + class TestOOType(OORtypeMixin, BaseTestCanRaise): def test_can_raise_recursive(self): diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -102,6 +102,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise From noreply at buildbot.pypy.org Sat Nov 12 10:17:17 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 12 Nov 2011 10:17:17 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fixed test Message-ID: <20111112091717.8689A8292E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49354:f109778eb791 Date: 2011-11-12 10:16 +0100 http://bitbucket.org/pypy/pypy/changeset/f109778eb791/ Log: fixed test diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -36,7 +36,7 @@ return res * 2 res = self.meta_interp(f, [6, 7]) assert res == 84 - self.check_loop_count(1) + self.check_trace_count(1) def test_loop_with_delayed_setfield(self): myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res', 'a']) @@ -58,7 +58,7 @@ return res * 2 res = self.meta_interp(f, [6, 13]) assert res == f(6, 13) - self.check_loop_count(1) + self.check_trace_count(1) if self.enable_opts: self.check_resops(setfield_gc=2, getfield_gc=0) @@ -90,9 +90,9 @@ res = self.meta_interp(f, [6, 33], policy=StopAtXPolicy(l)) assert res == f(6, 33) if self.enable_opts: - self.check_loop_count(3) + self.check_trace_count(2) else: - self.check_loop_count(2) + self.check_trace_count(2) def test_alternating_loops(self): myjitdriver = JitDriver(greens = [], reds = ['pattern']) @@ -108,9 +108,9 @@ return 42 self.meta_interp(f, [0xF0F0F0]) if self.enable_opts: - self.check_loop_count(3) + self.check_trace_count(3) else: - self.check_loop_count(2) + self.check_trace_count(2) def test_interp_simple(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'y']) @@ -135,7 +135,7 @@ return x res = self.meta_interp(f, [100, 30]) assert res == 42 - self.check_loop_count(0) + self.check_trace_count(0) def test_green_prevents_loop(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'y']) @@ -154,7 +154,7 @@ return x res = self.meta_interp(f, [100, 5]) assert res == f(100, 5) - self.check_loop_count(0) + self.check_trace_count(0) def test_interp_single_loop(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'y']) @@ -179,7 +179,7 @@ return x res = self.meta_interp(f, [5, 8]) assert res == 42 - self.check_loop_count(1) + self.check_trace_count(1) # the 'int_eq' and following 'guard' should be constant-folded if 'unroll' in self.enable_opts: self.check_resops(int_eq=0, guard_true=2, guard_false=0) @@ -194,7 +194,10 @@ assert isinstance(liveboxes[0], history.BoxInt) assert isinstance(liveboxes[1], history.BoxInt) found += 1 - assert found == 1 + if 'unroll' in self.enable_opts: + assert found == 2 + else: + assert found == 1 def test_interp_many_paths(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'node']) @@ -229,7 +232,7 @@ expected = f(node1) res = self.meta_interp(f, [node1]) assert res == expected - self.check_loop_count_at_most(19) + self.check_trace_count_at_most(19) def test_interp_many_paths_2(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'node']) @@ -268,7 +271,7 @@ expected = f(node1) res = self.meta_interp(f, [node1]) assert res == expected - self.check_loop_count_at_most(19) + self.check_trace_count_at_most(19) def test_nested_loops(self): myjitdriver = JitDriver(greens = ['i'], reds = ['x', 'y']) @@ -601,11 +604,11 @@ assert res == expected if self.enable_opts: - self.check_loop_count(2) - self.check_tree_loop_count(2) # 1 loop, 1 bridge from interp + self.check_trace_count(2) + self.check_jitcell_token_count(1) # 1 loop with bridge from interp else: - self.check_loop_count(2) - self.check_tree_loop_count(1) # 1 loop, callable from the interp + self.check_trace_count(2) + self.check_jitcell_token_count(1) # 1 loop, callable from the interp def test_example(self): myjitdriver = JitDriver(greens = ['i'], @@ -646,10 +649,10 @@ res = self.meta_interp(main_interpreter_loop, [1]) assert res == 102 - self.check_loop_count(1) + self.check_trace_count(1) if 'unroll' in self.enable_opts: self.check_resops({'int_add' : 6, 'int_gt' : 2, - 'guard_false' : 2, 'jump' : 2}) + 'guard_false' : 2, 'jump' : 1}) else: self.check_resops({'int_add' : 3, 'int_gt' : 1, 'guard_false' : 1, 'jump' : 1}) @@ -691,7 +694,7 @@ res = self.meta_interp(main_interpreter_loop, [1]) assert res == main_interpreter_loop(1) - self.check_loop_count(1) + self.check_trace_count(1) # These loops do different numbers of ops based on which optimizer we # are testing with. self.check_resops(self.automatic_promotion_result) @@ -753,7 +756,7 @@ res = self.meta_interp(interpret, [1]) assert res == interpret(1) # XXX it's unsure how many loops should be there - self.check_loop_count(3) + self.check_trace_count(3) def test_path_with_operations_not_from_start(self): jitdriver = JitDriver(greens = ['k'], reds = ['n', 'z']) diff --git a/pypy/jit/metainterp/test/test_loop_unroll.py b/pypy/jit/metainterp/test/test_loop_unroll.py --- a/pypy/jit/metainterp/test/test_loop_unroll.py +++ b/pypy/jit/metainterp/test/test_loop_unroll.py @@ -8,7 +8,7 @@ enable_opts = ALL_OPTS_NAMES automatic_promotion_result = { - 'int_gt': 2, 'guard_false': 2, 'jump': 2, 'int_add': 6, + 'int_gt': 2, 'guard_false': 2, 'jump': 1, 'int_add': 6, 'guard_value': 1 } From noreply at buildbot.pypy.org Sat Nov 12 12:44:29 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 12 Nov 2011 12:44:29 +0100 (CET) Subject: [pypy-commit] pypy default: Add the requirement that W_XxxObject classes that are different Message-ID: <20111112114429.122298292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49355:b0c70fdea6b5 Date: 2011-11-12 12:44 +0100 http://bitbucket.org/pypy/pypy/changeset/b0c70fdea6b5/ Log: Add the requirement that W_XxxObject classes that are different implementations of the same app-level type should inherit from a common base class more precise than W_Object. This is actually easy, just by adding some empty W_AbstractXxxObject classes here and there. This property allows us to build the _interplevel_classes for-speed- only dictionary in a way that doesn't depend on dictionary order. Previously it would randomly pick a class if there are several ones, which might be (if you're unluckly) not the most commonly used one. diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + pass + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + pass + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + pass + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + pass + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -592,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + pass + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + pass + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + pass + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] From noreply at buildbot.pypy.org Sat Nov 12 13:42:09 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 12 Nov 2011 13:42:09 +0100 (CET) Subject: [pypy-commit] pypy win64 test: buildbot problem partially solved: sys.maxint is now also hacked when called by the buildbot (which uses pytest, not the test_all Message-ID: <20111112124209.0FC0D8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64 test Changeset: r49356:ae3f08719461 Date: 2011-11-12 13:08 +0100 http://bitbucket.org/pypy/pypy/changeset/ae3f08719461/ Log: buildbot problem partially solved: sys.maxint is now also hacked when called by the buildbot (which uses pytest, not the test_all diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -45,7 +45,7 @@ def _mask_digit(x): if not we_are_translated(): - assert type(x) is not long, "overflow occurred!" + assert abs(x) , "overflow occurred!" return intmask(x & MASK) _mask_digit._annspecialcase_ = 'specialize:argtype(0)' diff --git a/pypy/test_all.py b/pypy/test_all.py --- a/pypy/test_all.py +++ b/pypy/test_all.py @@ -10,10 +10,6 @@ For more information, use test_all.py -h. """ import sys, os -# XXX hack for win64: -# this needs to be done without hacking maxint -if hasattr(sys, "maxsize"): - sys.maxint = max(sys.maxint, sys.maxsize) if len(sys.argv) == 1 and os.path.dirname(sys.argv[0]) in '.': print >> sys.stderr, __doc__ diff --git a/pytest.py b/pytest.py --- a/pytest.py +++ b/pytest.py @@ -4,6 +4,12 @@ """ __all__ = ['main'] +# XXX hack for win64: +# this needs to be done without hacking maxint +import sys +if hasattr(sys, "maxsize"): + sys.maxint = max(sys.maxint, sys.maxsize) + from _pytest.core import main, UsageError, _preloadplugins from _pytest import core as cmdline from _pytest import __version__ From noreply at buildbot.pypy.org Sat Nov 12 13:42:10 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 12 Nov 2011 13:42:10 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: buildbot problem partially solved: sys.maxint is now also hacked when called by the buildbot (which uses pytest, not the test_all Message-ID: <20111112124210.429C48292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49357:30f9db926aa8 Date: 2011-11-12 13:20 +0100 http://bitbucket.org/pypy/pypy/changeset/30f9db926aa8/ Log: buildbot problem partially solved: sys.maxint is now also hacked when called by the buildbot (which uses pytest, not the test_all diff --git a/pypy/test_all.py b/pypy/test_all.py --- a/pypy/test_all.py +++ b/pypy/test_all.py @@ -10,10 +10,6 @@ For more information, use test_all.py -h. """ import sys, os -# XXX hack for win64: -# this needs to be done without hacking maxint -if hasattr(sys, "maxsize"): - sys.maxint = max(sys.maxint, sys.maxsize) if len(sys.argv) == 1 and os.path.dirname(sys.argv[0]) in '.': print >> sys.stderr, __doc__ diff --git a/pytest.py b/pytest.py --- a/pytest.py +++ b/pytest.py @@ -4,7 +4,12 @@ """ __all__ = ['main'] -from _pytest.core import main, UsageError, _preloadplugins +# XXX hack for win64: +# this needs to be done without hacking maxint +import sys +if hasattr(sys, "maxsize"): + sys.maxint = max(sys.maxint, sys.maxsize) + from _pytest import core as cmdline from _pytest import __version__ From noreply at buildbot.pypy.org Sat Nov 12 13:42:12 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 12 Nov 2011 13:42:12 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: merge Message-ID: <20111112124212.818878292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49358:cddd6dcfc376 Date: 2011-11-12 13:22 +0100 http://bitbucket.org/pypy/pypy/changeset/cddd6dcfc376/ Log: merge diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + pass + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + pass + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + pass + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + pass + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -592,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + pass + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + pass + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + pass + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1178,10 +1178,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -126,6 +126,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -865,11 +866,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Sat Nov 12 13:55:24 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 12 Nov 2011 13:55:24 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: oupps Message-ID: <20111112125524.275EE8292E@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49359:c4db8f51a9bd Date: 2011-11-12 13:39 +0100 http://bitbucket.org/pypy/pypy/changeset/c4db8f51a9bd/ Log: oupps diff --git a/pytest.py b/pytest.py --- a/pytest.py +++ b/pytest.py @@ -10,6 +10,7 @@ if hasattr(sys, "maxsize"): sys.maxint = max(sys.maxint, sys.maxsize) +from _pytest.core import main, UsageError, _preloadplugins from _pytest import core as cmdline from _pytest import __version__ From noreply at buildbot.pypy.org Sat Nov 12 18:05:51 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 12 Nov 2011 18:05:51 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: First version of the FSCONS talk Message-ID: <20111112170551.526B28292E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3963:bf988b21d001 Date: 2011-11-12 18:05 +0100 http://bitbucket.org/pypy/extradoc/changeset/bf988b21d001/ Log: First version of the FSCONS talk diff --git a/talk/ep2011/talk/talk.pdf b/talk/ep2011/talk/talk.pdf index c5e303544d4b86bf04c73c4111c2d283bcce39a2..89348bba51f5493e80e861565275ee35a39f5ac9 GIT binary patch [cut] diff --git a/talk/ep2011/talk/Makefile b/talk/fscons2011/Makefile copy from talk/ep2011/talk/Makefile copy to talk/fscons2011/Makefile diff --git a/talk/ep2011/talk/author.latex b/talk/fscons2011/author.latex copy from talk/ep2011/talk/author.latex copy to talk/fscons2011/author.latex --- a/talk/ep2011/talk/author.latex +++ b/talk/fscons2011/author.latex @@ -1,8 +1,8 @@ \definecolor{rrblitbackground}{rgb}{0.0, 0.0, 0.0} -\title[PyPy in Production]{PyPy in Production} -\author[antocuni, arigo] -{Antonio Cuni \\ Armin Rigo} +\title[PyPy in Production]{PyPy} +\author[Armin Rigo] +{Armin Rigo} -\institute{EuroPython 2011} -\date{June 23 2011} +\institute{FSCONS 2011} +\date{November 13 2011} diff --git a/talk/ep2011/talk/beamerdefs.txt b/talk/fscons2011/beamerdefs.txt copy from talk/ep2011/talk/beamerdefs.txt copy to talk/fscons2011/beamerdefs.txt --- a/talk/ep2011/talk/beamerdefs.txt +++ b/talk/fscons2011/beamerdefs.txt @@ -89,7 +89,7 @@ -.. |snake| image:: ../../img/py-web-new.png +.. |snake| image:: ../img/py-web-new.png :scale: 15% diff --git a/talk/ep2011/talk/django-last-year.png b/talk/fscons2011/django-last-year.png copy from talk/ep2011/talk/django-last-year.png copy to talk/fscons2011/django-last-year.png diff --git a/talk/fscons2011/progress.png b/talk/fscons2011/progress.png new file mode 100644 index 0000000000000000000000000000000000000000..550f54205d9624d9061bdd46bf8381d1006efc02 GIT binary patch [cut] diff --git a/talk/ep2011/talk/question-mark.png b/talk/fscons2011/question-mark.png copy from talk/ep2011/talk/question-mark.png copy to talk/fscons2011/question-mark.png diff --git a/talk/fscons2011/speed.png b/talk/fscons2011/speed.png new file mode 100644 index 0000000000000000000000000000000000000000..75312760c7ac767b0194cef9ebd5f4a43767ccb9 GIT binary patch [cut] diff --git a/talk/ep2011/talk/stylesheet.latex b/talk/fscons2011/stylesheet.latex copy from talk/ep2011/talk/stylesheet.latex copy to talk/fscons2011/stylesheet.latex diff --git a/talk/fscons2011/talk.rst b/talk/fscons2011/talk.rst new file mode 100644 --- /dev/null +++ b/talk/fscons2011/talk.rst @@ -0,0 +1,186 @@ +.. include:: beamerdefs.txt + +============================================== + PyPy +============================================== + + +PyPy +-------- + + + +Speed +--------- + +.. image:: progress.png + :scale: 40% + :align: center + + +Speed +--------- + +.. image:: speed.png + :scale: 40% + :align: center + + +Speed +------------------------------ + +.. image:: django-last-year.png + :scale: 38% + :align: center + + +What is PyPy +--------------------------- + +PyPy + +- started in 2003 + +- partially publically funded + +- Open Source + + +What is PyPy +--------------------------- + +PyPy + +- Python implementation + +- framework for fast dynamic languages + + +Framework for fast dynamic languages +------------------------------------------- + +* It is easy to implement a new language with PyPy + +* Better suited to dynamic languages + +|pause| + +* Java or .NET? Not really suited + + +For the language implementor +-------------------------------- + +* Pick your favourite (dynamic) language + +|pause| + +* Write an interpreter for it + +|pause| + +* ...in RPython, a subset of Python + +|pause| + +* ...ignoring all hard issues: + + - the object model + + - garbage collection + + - coroutines + + - Just-in-Time Compilation + + +Just-in-Time Compilation +------------------------ + +!? + + +Just-in-Time Compilation +------------------------ + +"It works" in practice: + +* PyPy the Python interpreter is fast + +* Pyrolog, a Prolog interpreter, is fast too + +* Haskell and a number of other experiments + +|pause| + +* ...and yours :-) + + +Just-in-Time Compilation +------------------------ + +* Tracing JIT Compiler + +* Not unlike TraceMonkey for JavaScript in FireFox + +* But two levels + +* Really traces the RPython interpreter, which runs the application + + +PyPy 1.7 +--------- + +* Release soon (last release, 1.6, this summer) + +* Python 2.7.x + +* The most compatible alternative to CPython + +* Most programs just work + +* C extensions might not work (or might work) + +|pause| + +* ...fast + +|pause| + +* ...can use less memory, but usually not + + - ``__slots__`` on CPython, not needed on PyPy + + +PyPy's future? +-------------------- + +.. sourcecode:: plain + + CPython 2.7 -------> CPython 3.x + + ^ ^ + | | + | | + | | + V V + + PyPy 1.x <------> PyPy3 1.x + + +Contacts, Q/A +-------------- + +- http://pypy.org + +- blog: http://morepypy.blogspot.com + +- mailing list: ``pypy-dev at python.org`` + +- IRC: #pypy on freenode + +Questions + +.. image:: question-mark.png + :scale: 10% + :align: center diff --git a/talk/ep2011/talk/title.latex b/talk/fscons2011/title.latex copy from talk/ep2011/talk/title.latex copy to talk/fscons2011/title.latex --- a/talk/ep2011/talk/title.latex +++ b/talk/fscons2011/title.latex @@ -1,5 +1,5 @@ \begin{titlepage} \begin{figure}[h] -\includegraphics[width=60px]{../../img/py-web-new.png} +\includegraphics[width=60px]{../img/py-web-new.png} \end{figure} \end{titlepage} From noreply at buildbot.pypy.org Sat Nov 12 18:05:54 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 12 Nov 2011 18:05:54 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20111112170554.44BA782A87@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3964:a1b68f32d59b Date: 2011-11-12 18:05 +0100 http://bitbucket.org/pypy/extradoc/changeset/a1b68f32d59b/ Log: merge heads diff --git a/blog/draft/2011-11-gborg-sprint-report.rst b/blog/draft/2011-11-gborg-sprint-report.rst new file mode 100644 --- /dev/null +++ b/blog/draft/2011-11-gborg-sprint-report.rst @@ -0,0 +1,90 @@ +Gothenburg sprint report +========================= + +In the past days, we have been busy hacking on PyPy at the Gothenburg sprint, +the second of this 2011. The sprint was hold at Laura's and Jacob's place, +and here is a brief report of what happened. + + + +In the first day we welcomed Mark Pearse, which was new to PyPy and at his +first sprint. Mark worked the whole sprint at the new SpecialisedTuple_ +branch, whose aim is to have a special implementation for small 2-items and +3-items tuples of primitive types (e.g., ints or floats) to save memory. Mark +paired with Antonio for a couple of days, then he continued alone and did amazing +job. He even learned how to properly do Test Driven Development :-). + +.. _SpecialisedTuple: http://bitbucket.org/pypy/pypy/changesets/tip/branch%28%22SpecialisedTuples%22%29 + +Antonio spent a couple of days investingating whether it is possible to use +`application checkpoint` libraries such as BLCR_ and DMTCP_ to save the state of +the PyPy interpreter between subsequent runs, thus saving also the +JIT-compiled code to reduce the warmup time. The conclusion is that these are +interesting technologies, but more work would be needed (either on the PyPy +side or on the checkpoint library side) before it can have a practical usage +for PyPy users. + +.. _`application checkpoint`: http://en.wikipedia.org/wiki/Application_checkpointing +.. _BLCR: http://ftg.lbl.gov/projects/CheckpointRestart/ +.. _DMTCP: http://dmtcp.sourceforge.net/ + +Then, Antonio spent most of the sprint working on his ffistruct_ branch, whose +aim is to provide a very JIT-friendly way to interact with C structures, and +eventually implement ``ctypes.Structure`` on top of that. The "cool part" of +the branch is already done, and the JIT already can compile set/get of fields +into a single fast assembly instruction, about 400 times faster than the +corresponding ctypes code. What is still left to do is to add a nicer syntax +(which is easy) and to implement all the ctypes peculiarities (which is +tedious, at best :-)). + +.. _ffistruct: http://bitbucket.org/pypy/pypy/changesets/tip/branch(%22ffistruct%22) + +As usual, Armin did tons of different stuff, including fixing a JIT bug, +improving the performance of ``file.readlines()`` and working on the STM_ +branch (for Software Transactional Memory), which is now able to run RPython +multithreaded programs using software transaction (as long as they don't fill +up all the memory, because support for the GC is still missing :-)). Finally, +he worked on improving the Windows version of PyPy, and while doing so he +discovered toghether with Anto a terrible bug which leaded to a continuous +leak of stack space because the JIT called some functions using the wrong +calling convention. + +.. _STM: http://bitbucket.org/pypy/pypy/changesets/tip/branch("stm") + +Håkan, with some help from Armim, worked on the `jit-targets`_ branch, whose goal +is to heavily refactor the way the traces are internally represented by the +JIT, so that in the end we can produce (even :-)) better code than what we do +nowadays. More details in this mail_. + +.. _`jit-targets`: http://bitbucket.org/pypy/pypy/changesets/tip/branch("stm") +.. _mail: http://mail.python.org/pipermail/pypy-dev/2011-November/008728.html + + +Andrew Dalke worked on a way to integrate PyPy with FORTRAN libraries, and in +particular the ones which are wrapped by Numpy and Scipy: in doing so, he +wrote f2pypy_, which is similar to the existing ``f2py`` but instead of +producing a CPython extension module it produces a pure python modules based +on ``ctypes``. More work is needed before it can be considered complete, but +``f2pypy`` is already able to produce a wrapper for BLAS which passes most of +the tests (although not all). + +.. _f2pypy: http://bitbucket.org/pypy/f2pypy + +Christian Tismer worked the whole sprint on the branch to make PyPy compatible +with Windows 64 bit. This needs a lot of work because a lot of PyPy is +written under the assumption than assumption that the ``long`` type in C has +the same bit size than ``void*``, which is not true on Win64. Christian says +that in the past Genova-Pegli sprint he completed 90% of the work, and in this +sprint he did the other 90% of the work. Obviously, what is left to complete +the task is the third 90% :-). More seriously, he estimated a total of 2-4 +person-weeks of work to finish it. + +But, all in all, the best part of the sprint has been the cake that Laura +cooked to celebrate the "5x faster than CPython" achievement. Well, actually +our speed_ page reports "only" 4.7x, but that's because in the meantime we +switched from comparing against CPython 2.6 to comparing against CPython 2.7, +which is slightly faster. We are confident that we will reach the 5x goal +again, and that will be the perfect excuse to eat another cake :-) + +.. _speed: http://speed.pypy.org/ + diff --git a/blog/draft/5x-cake.jpg b/blog/draft/5x-cake.jpg new file mode 100755 index 0000000000000000000000000000000000000000..2d712593681d479dd8211003f1949c79d7c8520a GIT binary patch [cut] From noreply at buildbot.pypy.org Sun Nov 13 05:20:26 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 05:20:26 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: consistancy Message-ID: <20111113042026.F1DD78292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49360:9ce050b260cd Date: 2011-11-12 15:34 -0500 http://bitbucket.org/pypy/pypy/changeset/9ce050b260cd/ Log: consistancy diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -STORAGE_TYPE = lltype.Array(lltype.Char, hints={"nolength": True}) +STORAGE_TYPE = rffi.CArray(lltype.Char) UNSIGNEDLTR = "u" SIGNEDLTR = "i" From noreply at buildbot.pypy.org Sun Nov 13 05:20:28 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 05:20:28 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: started working on support for creating array get/set item at compile time Message-ID: <20111113042028.307058292E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49361:a731ffd298b4 Date: 2011-11-12 23:20 -0500 http://bitbucket.org/pypy/pypy/changeset/a731ffd298b4/ Log: started working on support for creating array get/set item at compile time diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,15 +1,16 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin + class TestFfiCall(LLJitMixin, _TestLibffiCall): supports_all = False # supports_{floats,longlong,singlefloats} @@ -95,3 +96,61 @@ class TestFfiCallSupportAll(TestFfiCall): supports_all = True # supports_{floats,longlong,singlefloats} + + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "signed_size", "points", "result_point"], + ) + + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(n): + points = lltype.malloc(rffi.CArray(POINT), n, flavor="raw") + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point = lltype.malloc(rffi.CArray(POINT), 1, flavor="raw") + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + i = 0 + signed_size = rffi.sizeof(lltype.Signed) + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + signed_size=signed_size, + result_point=result_point) + x = array_getitem( + types.slong, signed_size * 2, points, i, 0 + ) + y = array_getitem( + types.slong, signed_size * 2, points, i, signed_size + ) + + cur_x = array_getitem( + types.slong, signed_size * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, signed_size * 2, result_point, 0, signed_size + ) + + array_setitem( + types.slong, signed_size * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, signed_size * 2, result_point, 0, signed_size, cur_y + y + ) + i += 1 + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + result = result_point[0].x * result_point[0].y + lltype.free(result_point, flavor="raw") + lltype.free(points, flavor="raw") + return result + + assert self.meta_interp(f, [10]) == f(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,9 +30,6 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" -_LITTLE_ENDIAN = sys.byteorder == 'little' -_BIG_ENDIAN = sys.byteorder == 'big' - if _WIN32: from pypy.rlib import rwin32 @@ -213,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) @@ -341,38 +360,15 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # This is for primitive types. Note that the exact type of 'arg' may be - # different from the expected 'c_size'. To cope with that, we fall back - # to a byte-by-byte copy. + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - TP_size = rffi.sizeof(TP) - c_size = intmask(ffitp.c_size) - # if both types have the same size, we can directly write the - # value to the buffer - if c_size == TP_size: - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg - else: - # needs byte-by-byte copying. Make sure 'arg' is an integer type. - # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. - assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE - if TP_size <= rffi.sizeof(lltype.Signed): - arg = rffi.cast(lltype.Unsigned, arg) - else: - arg = rffi.cast(lltype.UnsignedLongLong, arg) - if _LITTLE_ENDIAN: - for i in range(c_size): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - elif _BIG_ENDIAN: - for i in range(c_size-1, -1, -1): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - else: - raise AssertionError + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -249,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -340,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() From noreply at buildbot.pypy.org Sun Nov 13 11:20:20 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 11:20:20 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: kill insert_loop_token Message-ID: <20111113102020.3E339820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49362:6a9c9a3e0c43 Date: 2011-11-12 13:08 +0100 http://bitbucket.org/pypy/pypy/changeset/6a9c9a3e0c43/ Log: kill insert_loop_token diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -167,29 +167,6 @@ record_loop_or_bridge(metainterp_sd, loop) return all_target_tokens[0] - - if False: # FIXME: full_preamble_needed?? - if full_preamble_needed: - send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, - loop.preamble, "entry bridge") - insert_loop_token(old_loop_tokens, loop.preamble.token) - jitdriver_sd.warmstate.attach_unoptimized_bridge_from_interp( - greenkey, loop.preamble.token) - record_loop_or_bridge(metainterp_sd, loop.preamble) - elif token.short_preamble: - short = token.short_preamble[-1] - metainterp_sd.logger_ops.log_short_preamble(short.inputargs, - short.operations) - return token - else: - send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, - "loop") - insert_loop_token(old_loop_tokens, loop_token) - jitdriver_sd.warmstate.attach_unoptimized_bridge_from_interp( - greenkey, loop.token) - record_loop_or_bridge(metainterp_sd, loop) - return loop_token - def compile_retrace(metainterp, greenkey, start, inputargs, jumpargs, start_resumedescr, partial_trace, resumekey): @@ -238,14 +215,6 @@ record_loop_or_bridge(metainterp_sd, loop) return target_token -def insert_loop_token(old_loop_tokens, loop_token): - # Find where in old_loop_tokens we should insert this new loop_token. - # The following algo means "as late as possible, but before another - # loop token that would be more general and so completely mask off - # the new loop_token". - # XXX do we still need a list? - old_loop_tokens.append(loop_token) - def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): original_jitcell_token = loop.original_jitcell_token jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -1,7 +1,7 @@ from pypy.config.pypyoption import get_pypy_config from pypy.jit.metainterp.history import TargetToken, ConstInt, History, Stats from pypy.jit.metainterp.history import BoxInt, INT -from pypy.jit.metainterp.compile import insert_loop_token, compile_loop +from pypy.jit.metainterp.compile import compile_loop from pypy.jit.metainterp.compile import ResumeGuardDescr from pypy.jit.metainterp.compile import ResumeGuardCountersInt from pypy.jit.metainterp.compile import compile_tmp_callback @@ -10,23 +10,6 @@ from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT -def test_insert_loop_token(): - # XXX this test is a bit useless now that there are no specnodes - lst = [] - # - tok1 = LoopToken() - insert_loop_token(lst, tok1) - assert lst == [tok1] - # - tok2 = LoopToken() - insert_loop_token(lst, tok2) - assert lst == [tok1, tok2] - # - tok3 = LoopToken() - insert_loop_token(lst, tok3) - assert lst == [tok1, tok2, tok3] - - class FakeCPU(object): ts = typesystem.llhelper def __init__(self): From noreply at buildbot.pypy.org Sun Nov 13 11:20:21 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 11:20:21 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix test Message-ID: <20111113102021.75734820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49363:0436b59b6130 Date: 2011-11-12 13:27 +0100 http://bitbucket.org/pypy/pypy/changeset/0436b59b6130/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -56,7 +56,7 @@ on_compile = staticmethod(lambda *args: None) on_compile_bridge = staticmethod(lambda *args: None) -def test_compile_new_loop(): +def test_compile_loop(): cpu = FakeCPU() staticdata = FakeMetaInterpStaticData() staticdata.cpu = cpu @@ -76,34 +76,26 @@ metainterp.staticdata = staticdata metainterp.cpu = cpu metainterp.history = History() - metainterp.history.operations = loop.operations[:] + metainterp.history.operations = loop.operations[:-1] metainterp.history.inputargs = loop.inputargs[:] cpu._all_size_descrs_with_vtable = ( LLtypeMixin.cpu._all_size_descrs_with_vtable) # - loop_tokens = [] - loop_token = compile_new_loop(metainterp, loop_tokens, [], 0, None) - assert loop_tokens == [loop_token] - assert loop_token.number == 1 + greenkey = 'faked' + target_token = compile_loop(metainterp, greenkey, 0, + loop.inputargs, + loop.operations[-1].getarglist(), + None) + jitcell_token = target_token.targeting_jitcell_token + assert jitcell_token == target_token.original_jitcell_token + assert jitcell_token.target_tokens == [target_token] + assert jitcell_token.number == 1 assert staticdata.globaldata.loopnumbering == 2 # assert len(cpu.seen) == 1 - assert cpu.seen[0][2] == loop_token + assert cpu.seen[0][2] == jitcell_token # del cpu.seen[:] - metainterp = FakeMetaInterp() - metainterp.staticdata = staticdata - metainterp.cpu = cpu - metainterp.history = History() - metainterp.history.operations = loop.operations[:] - metainterp.history.inputargs = loop.inputargs[:] - # - loop_token_2 = compile_new_loop(metainterp, loop_tokens, [], 0, None) - assert loop_token_2 is loop_token - assert loop_tokens == [loop_token] - assert len(cpu.seen) == 0 - assert staticdata.globaldata.loopnumbering == 2 - def test_resume_guard_counters(): rgc = ResumeGuardCountersInt() From noreply at buildbot.pypy.org Sun Nov 13 11:20:22 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 11:20:22 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111113102022.A4AB4820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49364:ed426958d5e2 Date: 2011-11-12 13:59 +0100 http://bitbucket.org/pypy/pypy/changeset/ed426958d5e2/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_del.py b/pypy/jit/metainterp/test/test_del.py --- a/pypy/jit/metainterp/test/test_del.py +++ b/pypy/jit/metainterp/test/test_del.py @@ -25,7 +25,7 @@ 'int_sub': 2, 'int_gt': 2, 'guard_true': 2, - 'jump': 2}) + 'jump': 1}) def test_class_of_allocated(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'x']) diff --git a/pypy/jit/metainterp/test/test_dict.py b/pypy/jit/metainterp/test/test_dict.py --- a/pypy/jit/metainterp/test/test_dict.py +++ b/pypy/jit/metainterp/test/test_dict.py @@ -154,7 +154,7 @@ res = self.meta_interp(f, [100], listops=True) assert res == f(50) self.check_resops({'new_array': 2, 'getfield_gc': 2, - 'guard_true': 2, 'jump': 2, + 'guard_true': 2, 'jump': 1, 'new_with_vtable': 2, 'getinteriorfield_gc': 2, 'setfield_gc': 6, 'int_gt': 2, 'int_sub': 2, 'call': 10, 'int_and': 2, diff --git a/pypy/jit/metainterp/test/test_exception.py b/pypy/jit/metainterp/test/test_exception.py --- a/pypy/jit/metainterp/test/test_exception.py +++ b/pypy/jit/metainterp/test/test_exception.py @@ -512,7 +512,7 @@ res = self.meta_interp(main, [41], repeat=7) assert res == -1 - self.check_tree_loop_count(2) # the loop and the entry path + self.check_target_token_count(2) # the loop and the entry path # we get: # ENTER - compile the new loop and the entry bridge # ENTER - compile the leaving path (raising MyError) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -77,14 +77,14 @@ int_add=2, int_lt=2, guard_true=2, - jump=2) + jump=1) else: self.check_resops( call_release_gil=0, # no CALL_RELEASE_GIL int_add=2, int_lt=2, guard_true=2, - jump=2) + jump=1) return res def test_byval_result(self): diff --git a/pypy/jit/metainterp/test/test_greenfield.py b/pypy/jit/metainterp/test/test_greenfield.py --- a/pypy/jit/metainterp/test/test_greenfield.py +++ b/pypy/jit/metainterp/test/test_greenfield.py @@ -24,7 +24,7 @@ # res = self.meta_interp(g, [7]) assert res == -2 - self.check_loop_count(2) + self.check_trace_count(2) self.check_resops(guard_value=0) def test_green_field_2(self): @@ -49,7 +49,7 @@ # res = self.meta_interp(g, [7]) assert res == -22 - self.check_loop_count(6) + self.check_trace_count(6) self.check_resops(guard_value=0) diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -28,10 +28,10 @@ i += 1 self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "entry bridge"), (4, 1, "loop")] + assert sorted(called.keys()) == [(4, 1, "loop")] self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "entry bridge"), (4, 1, "loop"), - (4, 2, "entry bridge"), (4, 2, "loop")] + assert sorted(called.keys()) == [(4, 1, "loop"), + (4, 2, "loop")] def test_on_compile_bridge(self): called = {} @@ -55,8 +55,7 @@ i += 1 self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "entry bridge"), - (10, 1, "loop")] + assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] class TestLLtypeSingle(JitDriverTests, LLJitMixin): @@ -92,8 +91,9 @@ # the following numbers are not really expectations of the test # itself, but just the numbers that we got after looking carefully # at the generated machine code - self.check_loop_count(5) - self.check_tree_loop_count(4) # 2 x loop, 2 x enter bridge + self.check_trace_count(5) + self.check_jitcell_token_count(2) # 2 x loop including enter bridge + self.check_target_token_count(4) # 2 x loop, 2 x enter bridge self.check_enter_count(5) def test_inline(self): @@ -125,7 +125,7 @@ # we expect no loop at all for 'loop1': it should always be inlined # we do however get several version of 'loop2', all of which contains # at least one int_add, while there are no int_add's in 'loop1' - self.check_tree_loop_count(5) + self.check_jitcell_token_count(1) for loop in get_stats().loops: assert loop.summary()['int_add'] >= 1 diff --git a/pypy/jit/metainterp/test/test_jitprof.py b/pypy/jit/metainterp/test/test_jitprof.py --- a/pypy/jit/metainterp/test/test_jitprof.py +++ b/pypy/jit/metainterp/test/test_jitprof.py @@ -55,8 +55,6 @@ TRACING, BACKEND, ~ BACKEND, - BACKEND, - ~ BACKEND, ~ TRACING, RUNNING, ~ RUNNING, @@ -64,8 +62,8 @@ ~ BLACKHOLE ] assert profiler.events == expected - assert profiler.times == [3, 2, 1, 1] - assert profiler.counters == [1, 2, 1, 1, 3, 3, 1, 13, 2, 0, 0, 0, 0, + assert profiler.times == [2, 1, 1, 1] + assert profiler.counters == [1, 1, 1, 1, 3, 3, 1, 15, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0] def test_simple_loop_with_call(self): From noreply at buildbot.pypy.org Sun Nov 13 11:20:23 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 11:20:23 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: make sure history.Stats dont keep things alive Message-ID: <20111113102023.DBC90820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49365:16b76159c68e Date: 2011-11-13 11:01 +0100 http://bitbucket.org/pypy/pypy/changeset/16b76159c68e/ Log: make sure history.Stats dont keep things alive diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -89,8 +89,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - # xxx why do we need to clear op._descr?? - #op._descr = None # clear reference, mostly for tests + op._descr = None # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.objectmodel import compute_identity_hash +import weakref # ____________________________________________________________ @@ -978,7 +979,7 @@ self.locations = [] self.aborted_keys = [] self.invalidated_token_numbers = set() - self.jitcell_tokens = set() + self.jitcell_token_wrefs = set() def clear(self): del self.loops[:] @@ -990,7 +991,7 @@ self.aborted_count = 0 def add_jitcell_token(self, token): - self.jitcell_tokens.add(token) + self.jitcell_token_wrefs.add(weakref.ref(token)) def set_history(self, history): self.operations = history.operations @@ -1021,6 +1022,15 @@ def get_all_loops(self): return self.loops + def get_all_jitcell_tokens(self): + tokens = [t() for t in self.jitcell_token_wrefs] + if None in tokens: + assert False, "get_all_jitcell_tokens will not work as "+\ + "loops have been freed" + return tokens + + + def check_history(self, expected=None, **check): insns = {} for op in self.operations: @@ -1038,7 +1048,7 @@ def check_resops(self, expected=None, **check): insns = {} - for loop in self.loops: + for loop in self.get_all_loops(): insns = loop.summary(adding_insns=insns) if expected is not None: insns.pop('debug_merge_point', None) @@ -1053,7 +1063,7 @@ def check_loops(self, expected=None, everywhere=False, **check): insns = {} - for loop in self.loops: + for loop in self.get_all_loops(): #if not everywhere: # if getattr(loop, '_ignore_during_counting', False): # continue @@ -1083,7 +1093,7 @@ def check_consistency(self): "NOT_RPYTHON" - for loop in self.loops: + for loop in self.get_all_loops(): loop.check_consistency() def maybe_view(self): diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -169,10 +169,11 @@ assert get_stats().compiled_count <= count def check_jitcell_token_count(self, count): # was check_tree_loop_count - assert len(get_stats().jitcell_tokens) == count + assert len(get_stats().jitcell_token_wrefs) == count def check_target_token_count(self, count): - n = sum([len(t.target_tokens) for t in get_stats().jitcell_tokens]) + tokens = get_stats().get_all_jitcell_tokens() + n = sum ([len(t.target_tokens) for t in tokens]) assert n == count def check_enter_count(self, count): diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -84,7 +84,7 @@ if self.basic: found = 0 - for op in get_stats().loops[0]._all_operations(): + for op in get_stats().get_all_loops()[0]._all_operations(): if op.getopname() == 'guard_true': liveboxes = op.getfailargs() assert len(liveboxes) == 3 @@ -2704,7 +2704,7 @@ assert res == g(10) self.check_jitcell_token_count(2) - for cell in get_stats().jitcell_tokens: + for cell in get_stats().get_all_jitcell_tokens(): # Initialal trace with two labels and 5 retraces assert len(cell.target_tokens) <= 7 @@ -2745,7 +2745,7 @@ res = self.meta_interp(f, [10, 7]) assert res == f(10, 7) self.check_jitcell_token_count(2) - for cell in get_stats().jitcell_tokens: + for cell in get_stats().get_all_jitcell_tokens(): assert len(cell.target_tokens) == 2 def g(n): @@ -2754,7 +2754,7 @@ res = self.meta_interp(g, [10]) assert res == g(10) self.check_jitcell_token_count(2) - for cell in get_stats().jitcell_tokens: + for cell in get_stats().get_all_jitcell_tokens(): assert len(cell.target_tokens) <= 3 def g(n): @@ -2765,7 +2765,7 @@ # 2 loops and one function self.check_jitcell_token_count(3) cnt = 0 - for cell in get_stats().jitcell_tokens: + for cell in get_stats().get_all_jitcell_tokens(): if cell.target_tokens is None: cnt += 1 else: diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -225,7 +225,7 @@ return s res = self.meta_interp(f, [15], listops=True) assert res == f(15) - self.check_resops({'jump': 2, 'int_gt': 2, 'int_add': 2, + self.check_resops({'jump': 1, 'int_gt': 2, 'int_add': 2, 'guard_true': 2, 'int_sub': 2}) class TestOOtype(ListTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_memmgr.py b/pypy/jit/metainterp/test/test_memmgr.py --- a/pypy/jit/metainterp/test/test_memmgr.py +++ b/pypy/jit/metainterp/test/test_memmgr.py @@ -99,7 +99,7 @@ assert res == 42 # we should see only the loop and the entry bridge - self.check_tree_loop_count(2) + self.check_target_token_count(2) def test_target_loop_kept_alive_or_not(self): myjitdriver = JitDriver(greens=['m'], reds=['n']) @@ -132,14 +132,15 @@ # case A res = self.meta_interp(f, [], loop_longevity=3) assert res == 42 - # we should see only the loop and the entry bridge for g(5) and g(7) - self.check_tree_loop_count(4) + # we should see only the loop with preamble and the exit bridge + # for g(5) and g(7) + self.check_enter_count(4) # case B, with a lower longevity res = self.meta_interp(f, [], loop_longevity=1) assert res == 42 # we should see a loop for each call to g() - self.check_tree_loop_count(8 + 20*2*2) + self.check_enter_count(8 + 20*2) def test_throw_away_old_loops(self): myjitdriver = JitDriver(greens=['m'], reds=['n']) @@ -154,7 +155,7 @@ for i in range(10): g(1) # g(1) gets a loop and an entry bridge, stays alive g(2) # (and an exit bridge, which does not count in - g(1) # check_tree_loop_count) + g(1) # check_target_token_count) g(3) g(1) g(4) # g(2), g(3), g(4), g(5) are thrown away every iteration @@ -164,7 +165,7 @@ res = self.meta_interp(f, [], loop_longevity=3) assert res == 42 - self.check_tree_loop_count(2 + 10*4*2) + self.check_enter_count(2 + 10*4*2) def test_call_assembler_keep_alive(self): myjitdriver1 = JitDriver(greens=['m'], reds=['n']) @@ -198,7 +199,7 @@ res = self.meta_interp(f, [1], loop_longevity=4, inline=True) assert res == 42 - self.check_tree_loop_count(12) + self.check_enter_count(12) # ____________________________________________________________ From noreply at buildbot.pypy.org Sun Nov 13 11:20:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 11:20:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: propagate quasi_immutable_deps Message-ID: <20111113102025.19747820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49366:224f5793feda Date: 2011-11-13 11:19 +0100 http://bitbucket.org/pypy/pypy/changeset/224f5793feda/ Log: propagate quasi_immutable_deps diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -139,8 +139,12 @@ loop = create_empty_loop(metainterp) loop.inputargs = part.inputargs loop.operations = part.operations + loop.quasi_immutable_deps = {} + if part.quasi_immutable_deps: + loop.quasi_immutable_deps.update(part.quasi_immutable_deps) while part.operations[-1].getopnum() == rop.LABEL: inliner = Inliner(inputargs, jumpargs) + part.quasi_immutable_deps = None part.operations = [part.operations[-1]] + \ [inliner.inline_op(h_ops[i]) for i in range(start, len(h_ops))] + \ [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jumpargs], @@ -155,7 +159,11 @@ return None loop.operations = loop.operations[:-1] + part.operations + if part.quasi_immutable_deps: + loop.quasi_immutable_deps.update(part.quasi_immutable_deps) + if not loop.quasi_immutable_deps: + loop.quasi_immutable_deps = None for box in loop.inputargs: assert isinstance(box, Box) @@ -206,6 +214,14 @@ loop = partial_trace loop.operations = loop.operations[:-1] + part.operations + quasi_immutable_deps = {} + if loop.quasi_immutable_deps: + quasi_immutable_deps.update(loop.quasi_immutable_deps) + if part.quasi_immutable_deps: + quasi_immutable_deps.update(part.quasi_immutable_deps) + if quasi_immutable_deps: + loop.quasi_immutable_deps = quasi_immutable_deps + for box in loop.inputargs: assert isinstance(box, Box) From noreply at buildbot.pypy.org Sun Nov 13 15:01:32 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 15:01:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: Fix until some tests start passing. Introduce more iterators Message-ID: <20111113140132.1A649820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49367:a0719edc5e5e Date: 2011-11-12 11:02 +0100 http://bitbucket.org/pypy/pypy/changeset/a0719edc5e5e/ Log: Fix until some tests start passing. Introduce more iterators diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,7 +14,9 @@ 'dtype']) any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) -slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'source']) +slice_driver = jit.JitDriver(greens=['signature'], reds=['self', 'source', + 'source_iter', + 'res_iter']) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -68,7 +70,14 @@ dtype.setitem_w(space, arr.storage, i, w_elem) return arr -class ArrayIterator(object): +class BaseIterator(object): + def next(self): + raise NotImplementedError + + def done(self): + raise NotImplementedError + +class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 self.size = size @@ -79,17 +88,17 @@ def done(self): return self.offset >= self.size -class ViewIterator(object): +class ViewIterator(BaseIterator): def __init__(self, arr): self.indices = [0] * len(arr.shape) self.offset = arr.start self.arr = arr - self.done = False + self._done = False @jit.unroll_safe def next(self): for i in range(len(self.indices)): - if self.indices[i] < self.arr.shape[i]: + if self.indices[i] < self.arr.shape[i] - 1: self.indices[i] += 1 self.offset += self.arr.shards[i] break @@ -97,12 +106,12 @@ self.indices[i] = 0 self.offset -= self.arr.backshards[i] else: - self.done = True + self._done = True def done(self): - return self.done + return self._done -class Call2Iterator(object): +class Call2Iterator(BaseIterator): def __init__(self, left, right): self.left = left self.right = right @@ -112,9 +121,9 @@ self.right.next() def done(self): - return self.left.done() + return self.left.done() or self.right.done() -class Call1Iterator(object): +class Call1Iterator(BaseIterator): def __init__(self, child): self.child = child @@ -124,6 +133,13 @@ def done(self): return self.child.done() +class ConstantIterator(BaseIterator): + def next(self): + pass + + def done(self): + return False + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", "start"] @@ -338,8 +354,7 @@ return self.start + idx * self.shards[0] index = [space.int_w(w_item) for w_item in space.fixedview(w_idx)] - item = 0 - xxx + item = self.start for i in range(len(index)): v = index[i] if v < 0: @@ -347,9 +362,7 @@ if v < 0 or v >= self.shape[i]: raise OperationError(space.w_IndexError, space.wrap("index (%d) out of range (0<=index<%d" % (index[i], self.shape[i]))) - if i != 0: - item *= self.shape[i] - item += v + item += v * self.shards[i] return item def get_root_shape(self): @@ -388,7 +401,7 @@ if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) - return concrete.eval(item).wrap(space) + return concrete.getitem(item).wrap(space) return space.wrap(self._create_slice(space, w_idx)) def descr_setitem(self, space, w_idx, w_value): @@ -425,12 +438,14 @@ shape = [lgt] + self.shape[1:] shards = [self.shards[0] * step] + self.shards[1:] backshards = [lgt * self.shards[0] * step] + self.backshards[1:] + start *= self.shards[0] + start += self.start else: shape = [] shards = [] backshards = [] start = -1 - i = 0 + i = -1 for i, w_item in enumerate(space.fixedview(w_idx)): start_, stop, step, lgt = space.decode_index4(w_item, self.shape[i]) @@ -440,11 +455,13 @@ shape.append(lgt) shards.append(self.shards[i] * step) backshards.append(self.shards[i] * lgt * step) + if start == -1: + start = self.start # add a reminder shape += self.shape[i + 1:] shards += self.shards[i + 1:] backshards += self.backshards[i + 1:] - return NDimSlice(self, new_sig, start, end, shards, backshards, shape) + return NDimSlice(self, new_sig, start, shards, backshards, shape) def descr_mean(self, space): return space.wrap(space.float_w(self.descr_sum(space))/self.find_size()) @@ -458,6 +475,12 @@ pass return space.wrap(space.is_true(self.get_concrete().eval(self.start).wrap(space))) + def getitem(self, item): + raise NotImplementedError + + def start_iter(self): + raise NotImplementedError + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -497,9 +520,15 @@ def find_dtype(self): return self.dtype - def eval(self, offset): + def getitem(self, item): return self.value + def eval(self, iter): + return self.value + + def start_iter(self): + return ConstantIterator() + class VirtualArray(BaseArray): """ Class for representing virtual arrays, such as binary ops or ufuncs @@ -520,12 +549,14 @@ result_size = self.find_size() result = NDimArray(result_size, self.shape, self.find_dtype()) i = self.start_iter() + ri = result.start_iter() while not i.done(): numpy_driver.jit_merge_point(signature=signature, result_size=result_size, i=i, self=self, result=result) - result.dtype.setitem(result.storage, i.offset, self.eval(i.offset)) - i = self.next_index(i) + result.dtype.setitem(result.storage, ri.offset, self.eval(i)) + i.next() + ri.next() return result def force_if_needed(self): @@ -535,12 +566,12 @@ def get_concrete(self): self.force_if_needed() - return self.forced_result + return self.forced_result - def eval(self, offset): + def eval(self, iter): if self.forced_result is not None: - return self.forced_result.eval(offset) - return self._eval(offset) + return self.forced_result.eval(iter) + return self._eval(iter) def setitem(self, item, value): return self.get_concrete().setitem(item, value) @@ -569,7 +600,9 @@ def _find_dtype(self): return self.res_dtype - def _eval(self, i): + def _eval(self, iter): + assert isinstance(iter, Call1Iterator) + xxx val = self.values.eval(i).convert_to(self.res_dtype) sig = jit.promote(self.signature) @@ -599,10 +632,13 @@ pass return self.right.find_size() - def _eval(self, i): - lhs = self.left.eval(i).convert_to(self.calc_dtype) - rhs = self.right.eval(i).convert_to(self.calc_dtype) + def start_iter(self): + return Call2Iterator(self.left.start_iter(), self.right.start_iter()) + def _eval(self, iter): + assert isinstance(iter, Call2Iterator) + lhs = self.left.eval(iter.left).convert_to(self.calc_dtype) + rhs = self.right.eval(iter.right).convert_to(self.calc_dtype) sig = jit.promote(self.signature) assert isinstance(sig, signature.Signature) call_sig = sig.components[0] @@ -627,8 +663,12 @@ self.parent.get_concrete() return self - def eval(self, offset): - return self.parent.eval(offset) + def getitem(self, item): + return self.parent.getitem(item) + + def eval(self, iter): + assert isinstance(iter, ViewIterator) + return self.parent.getitem(iter.offset) @unwrap_spec(item=int) def setitem_w(self, space, item, w_value): @@ -648,13 +688,14 @@ _immutable_fields_ = ['shape[*]', 'shards[*]', 'backshards[*]', 'start'] - def __init__(self, parent, signature, start, end, shards, backshards, + def __init__(self, parent, signature, start, shards, backshards, shape): if isinstance(parent, NDimSlice): parent = parent.parent + else: + assert isinstance(parent, NDimArray) ViewArray.__init__(self, parent, signature, shards, backshards, shape) self.start = start - self.end = end def get_root_storage(self): return self.parent.get_concrete().get_root_storage() @@ -673,17 +714,23 @@ self._sliceloop(w_value) def _sliceloop(self, source): - xxx - i = 0 - while i < self.size: - slice_driver.jit_merge_point(signature=source.signature, i=i, - self=self, source=source) - self.setitem(i, source.eval(i).convert_to(self.find_dtype())) - i += 1 + source_iter = source.start_iter() + res_iter = self.start_iter() + while not res_iter.done(): + slice_driver.jit_merge_point(signature=source.signature, + self=self, source=source, + res_iter=res_iter, + source_iter=source_iter) + self.setitem(res_iter.offset, source.eval(source_iter).convert_to( + self.find_dtype())) + source_iter.next() + res_iter.next() + + def start_iter(self): + return ViewIterator(self) def setitem(self, item, value): - xxx - self.parent.setitem(self.calc_index(item), value) + self.parent.setitem(item, value) def get_root_shape(self): return self.parent.get_root_shape() @@ -800,8 +847,12 @@ def find_dtype(self): return self.dtype - def eval(self, offset): - return self.dtype.getitem(self.storage, offset) + def getitem(self, item): + return self.dtype.getitem(self.storage, item) + + def eval(self, iter): + assert isinstance(iter, ArrayIterator) + return self.dtype.getitem(self.storage, iter.offset) def descr_len(self, space): if len(self.shape): @@ -817,6 +868,9 @@ self.invalidated() self.dtype.setitem(self.storage, item, value) + def start_iter(self): + return ArrayIterator(self.size) + def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -11,6 +11,9 @@ class TestNumArrayDirect(object): def newslice(self, *args): return self.space.newslice(*[self.space.wrap(arg) for arg in args]) + + def newtuple(self, *args): + return self.space.newtuple([self.space.wrap(arg) for arg in args]) def test_shards(self): a = NDimArray(100, [10, 5, 3], MockDtype()) @@ -51,6 +54,19 @@ assert s2.shards == [3, 50] assert s2.backshards == [6, 100] + def test_negative_step(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + s = a._create_slice(space, self.newslice(None, None, -2)) + assert s.start == 9 + assert s.shards == [-2, 10, 50] + assert s.backshards == [-10, 40, 100] + + def test_index_of_single_item(self): + a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) + assert r == 1 + 2*10 + 2*10*5 + class AppTestNumArray(BaseNumpyAppTest): def test_type(self): from numpy import array @@ -184,6 +200,7 @@ a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. + def test_scalar(self): from numpy import array a = array(3) From noreply at buildbot.pypy.org Sun Nov 13 15:01:33 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 15:01:33 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: progress Message-ID: <20111113140133.49FA8820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49368:35103929b924 Date: 2011-11-13 14:38 +0100 http://bitbucket.org/pypy/pypy/changeset/35103929b924/ Log: progress diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -10,10 +10,8 @@ numpy_driver = jit.JitDriver(greens = ['signature'], reds = ['result_size', 'i', 'self', 'result']) -all_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', - 'dtype']) -any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', - 'dtype']) +all_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) +any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['self', 'source', 'source_iter', 'res_iter']) @@ -249,29 +247,26 @@ return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) def _all(self): - xxx - size = self.find_size() dtype = self.find_dtype() - i = 0 - while i < size: - all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, size=size, i=i) + i = self.start_iter() + while not i.done(): + all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, i=i) if not dtype.bool(self.eval(i)): return False - i += 1 + i.next() return True def descr_all(self, space): return space.wrap(self._all()) def _any(self): - xxx - size = self.find_size() dtype = self.find_dtype() - i = 0 - while i < size: - any_driver.jit_merge_point(signature=self.signature, self=self, size=size, dtype=dtype, i=i) + i = self.start_iter() + while not i.done(): + any_driver.jit_merge_point(signature=self.signature, self=self, + dtype=dtype, i=i) if dtype.bool(self.eval(i)): return True - i += 1 + i.next() return False def descr_any(self, space): return space.wrap(self._any()) @@ -602,15 +597,16 @@ def _eval(self, iter): assert isinstance(iter, Call1Iterator) - xxx - val = self.values.eval(i).convert_to(self.res_dtype) - + val = self.values.eval(iter.child).convert_to(self.res_dtype) sig = jit.promote(self.signature) assert isinstance(sig, signature.Signature) call_sig = sig.components[0] assert isinstance(call_sig, signature.Call1) return call_sig.func(self.res_dtype, val) + def start_iter(self): + return Call1Iterator(self.values.start_iter()) + class Call2(VirtualArray): """ Intermediate class for performing binary operations. @@ -683,6 +679,9 @@ return space.wrap(self.shape[0]) return space.wrap(1) +class VirtualView(VirtualArray): + pass + class NDimSlice(ViewArray): signature = signature.BaseSignature() @@ -735,39 +734,6 @@ def get_root_shape(self): return self.parent.get_root_shape() - @jit.unroll_safe - def calc_index(self, item): - index = [] - _item = item - for i in range(len(self.shape) -1, 0, -1): - s = self.shape[i] - index.append(_item % s) - _item //= s - index.append(_item) - index.reverse() - i = 0 - item = 0 - k = 0 - shape = self.parent.shape - for chunk in self.chunks: - if k != 0: - item *= shape[k] - k += 1 - start, stop, step, lgt = chunk - if step == 0: - # we don't consume an index - item += start - else: - item += start + step * index[i] - i += 1 - while k < len(shape): - if k != 0: - item *= shape[k] - k += 1 - item += index[i] - i += 1 - return item - def to_str(self, comma, indent=' '): xxx ret = StringBuilder() From noreply at buildbot.pypy.org Sun Nov 13 15:01:34 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 15:01:34 +0100 (CET) Subject: [pypy-commit] pypy default: Try to fix the issue with -lrt being at the beginning and not at the end. Message-ID: <20111113140134.769B4820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49369:ad8b93cf993c Date: 2011-11-13 15:01 +0100 http://bitbucket.org/pypy/pypy/changeset/ad8b93cf993c/ Log: Try to fix the issue with -lrt being at the beginning and not at the end. diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,7 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + self.extra_libs) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ['-lrt'] cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () From noreply at buildbot.pypy.org Sun Nov 13 17:00:39 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 17:00:39 +0100 (CET) Subject: [pypy-commit] pypy default: for concatenation to work, this has to be an empty list (thanks ned) Message-ID: <20111113160039.D00E7820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49370:38f173ee998a Date: 2011-11-13 17:00 +0100 http://bitbucket.org/pypy/pypy/changeset/38f173ee998a/ Log: for concatenation to work, this has to be an empty list (thanks ned) diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,7 +42,7 @@ so_prefixes = ('',) - extra_libs = () + extra_libs = [] def __init__(self, cc): if self.__class__ is Platform: From noreply at buildbot.pypy.org Sun Nov 13 17:11:49 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 17:11:49 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: pass few more tests. everything has shards and backshards Message-ID: <20111113161149.0437C820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49371:214d0b29208f Date: 2011-11-13 15:32 +0100 http://bitbucket.org/pypy/pypy/changeset/214d0b29208f/ Log: pass few more tests. everything has shards and backshards diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -142,13 +142,22 @@ _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", "start"] - _immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]"] + _immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] - def __init__(self, shards, backshards, shape): + shards = None + start = 0 + + def __init__(self, shape): self.invalidates = [] self.shape = shape - self.shards = shards - self.backshards = backshards + if self.shards is None: + self.shards = [] + self.backshards = [] + s = 1 + for sh in shape: + self.shards.append(s) + self.backshards.append(s * (sh - 1)) + s *= sh def invalidated(self): if self.invalidates: @@ -502,7 +511,7 @@ _attrs_ = ["dtype", "value", "shape"] def __init__(self, dtype, value): - BaseArray.__init__(self, None, None, []) + BaseArray.__init__(self, []) self.dtype = dtype self.value = value @@ -529,7 +538,7 @@ Class for representing virtual arrays, such as binary ops or ufuncs """ def __init__(self, signature, shape, res_dtype): - BaseArray.__init__(self, None, None, shape) + BaseArray.__init__(self, shape) self.forced_result = None self.signature = signature self.res_dtype = res_dtype @@ -545,7 +554,7 @@ result = NDimArray(result_size, self.shape, self.find_dtype()) i = self.start_iter() ri = result.start_iter() - while not i.done(): + while not ri.done(): numpy_driver.jit_merge_point(signature=signature, result_size=result_size, i=i, self=self, result=result) @@ -568,6 +577,9 @@ return self.forced_result.eval(iter) return self._eval(iter) + def getitem(self, item): + return self.get_concrete().getitem(item) + def setitem(self, item, value): return self.get_concrete().setitem(item, value) @@ -647,7 +659,9 @@ arrays. Example: slices """ def __init__(self, parent, signature, shards, backshards, shape): - BaseArray.__init__(self, shards, backshards, shape) + self.shards = shards + self.backshards = backshards + BaseArray.__init__(self, shape) self.signature = signature self.parent = parent self.invalidates = parent.invalidates @@ -691,10 +705,11 @@ shape): if isinstance(parent, NDimSlice): parent = parent.parent - else: - assert isinstance(parent, NDimArray) ViewArray.__init__(self, parent, signature, shards, backshards, shape) self.start = start + self.size = 1 + for sh in shape: + self.size *= sh def get_root_storage(self): return self.parent.get_concrete().get_root_storage() @@ -785,17 +800,8 @@ """ A class representing contiguous array. We know that each iteration by say ufunc will increase the data index by one """ - start = 0 - def __init__(self, size, shape, dtype): - shards = [] - backshards = [] - s = 1 - for sh in shape: - shards.append(s) - backshards.append(s * (sh - 1)) - s *= sh - BaseArray.__init__(self, shards, backshards, shape) + BaseArray.__init__(self, shape) self.size = size self.dtype = dtype self.storage = dtype.malloc(size) From noreply at buildbot.pypy.org Sun Nov 13 17:11:50 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 17:11:50 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: fix multidim tests Message-ID: <20111113161150.3FE16820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49372:466d59c9ff2a Date: 2011-11-13 17:11 +0100 http://bitbucket.org/pypy/pypy/changeset/466d59c9ff2a/ Log: fix multidim tests diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -154,10 +154,14 @@ self.shards = [] self.backshards = [] s = 1 - for sh in shape: + shape_rev = shape[:] + shape_rev.reverse() + for sh in shape_rev: self.shards.append(s) self.backshards.append(s * (sh - 1)) s *= sh + self.shards.reverse() + self.backshards.reverse() def invalidated(self): if self.invalidates: @@ -448,19 +452,16 @@ shape = [] shards = [] backshards = [] - start = -1 + start = self.start i = -1 for i, w_item in enumerate(space.fixedview(w_idx)): start_, stop, step, lgt = space.decode_index4(w_item, self.shape[i]) if step != 0: - if start == -1: - start = start_ * self.shards[i] + self.start shape.append(lgt) shards.append(self.shards[i] * step) backshards.append(self.shards[i] * lgt * step) - if start == -1: - start = self.start + start += self.shards[i] * start_ # add a reminder shape += self.shape[i + 1:] shards += self.shards[i + 1:] diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -13,59 +13,78 @@ return self.space.newslice(*[self.space.wrap(arg) for arg in args]) def newtuple(self, *args): - return self.space.newtuple([self.space.wrap(arg) for arg in args]) + args_w = [] + for arg in args: + if isinstance(arg, int): + args_w.append(self.space.wrap(arg)) + else: + args_w.append(arg) + return self.space.newtuple(args_w) def test_shards(self): a = NDimArray(100, [10, 5, 3], MockDtype()) - assert a.shards == [1, 10, 50] - assert a.backshards == [9, 40, 100] + assert a.shards == [15, 3, 1] + assert a.backshards == [135, 12, 2] def test_create_slice(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) s = a._create_slice(space, space.wrap(3)) - assert s.start == 3 - assert s.shards == [10, 50] - assert s.backshards == [40, 100] + assert s.start == 45 + assert s.shards == [3, 1] + assert s.backshards == [12, 2] s = a._create_slice(space, self.newslice(1, 9, 2)) - assert s.start == 1 - assert s.shards == [2, 10, 50] - assert s.backshards == [8, 40, 100] + assert s.start == 15 + assert s.shards == [30, 3, 1] + assert s.backshards == [120, 12, 2] s = a._create_slice(space, space.newtuple([ self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) - assert s.start == 1 + assert s.start == 19 assert s.shape == [2, 1] - assert s.shards == [3, 10] - assert s.backshards == [6, 10] + assert s.shards == [45, 3] + assert s.backshards == [90, 3] + s = a._create_slice(space, self.newtuple( + self.newslice(None, None, None), space.wrap(2))) + assert s.start == 6 + assert s.shape == [10, 3] def test_slice_of_slice(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) s = a._create_slice(space, space.wrap(5)) + assert s.start == 15*5 s2 = s._create_slice(space, space.wrap(3)) assert s2.shape == [3] - assert s2.shards == [50] + assert s2.shards == [1] assert s2.parent is a - assert s2.backshards == [100] + assert s2.backshards == [2] + assert s2.start == 5*15 + 3*3 s = a._create_slice(space, self.newslice(1, 5, 3)) s2 = s._create_slice(space, space.newtuple([ self.newslice(None, None, None), space.wrap(2)])) assert s2.shape == [2, 3] - assert s2.shards == [3, 50] - assert s2.backshards == [6, 100] + assert s2.shards == [45, 1] + assert s2.backshards == [90, 2] + assert s2.start == 1*15 + 2*3 def test_negative_step(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) s = a._create_slice(space, self.newslice(None, None, -2)) - assert s.start == 9 - assert s.shards == [-2, 10, 50] - assert s.backshards == [-10, 40, 100] + assert s.start == 135 + assert s.shards == [-30, 3, 1] + assert s.backshards == [-150, 12, 2] def test_index_of_single_item(self): a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) - assert r == 1 + 2*10 + 2*10*5 + assert r == 1 * 3 * 5 + 2 * 3 + 2 + s = a._create_slice(self.space, self.newtuple( + self.newslice(None, None, None), 2)) + r = s._index_of_single_item(self.space, self.newtuple(1, 0)) + assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) + r = s._index_of_single_item(self.space, self.newtuple(1, 1)) + assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 1)) class AppTestNumArray(BaseNumpyAppTest): def test_type(self): @@ -698,7 +717,8 @@ raises(ValueError, numpy.array, [[[1, 2], [3, 4], 5]]) raises(ValueError, numpy.array, [[[1, 2], [3, 4], [5]]]) a = numpy.array([[1, 2], [4, 5]]) - assert a[0, 1] == a[0][1] == 2 + assert a[0, 1] == 2 + assert a[0][1] == 2 a = numpy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() From noreply at buildbot.pypy.org Sun Nov 13 18:13:14 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 18:13:14 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: start working on the JIT optimization Message-ID: <20111113171314.5632F820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49373:48644296897a Date: 2011-11-13 02:02 -0500 http://bitbucket.org/pypy/pypy/changeset/48644296897a/ Log: start working on the JIT optimization diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -1403,6 +1403,7 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1480,7 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -329,6 +329,18 @@ token = history.getkind(getattr(S, fieldname)) return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return self.getdescr(offset, typeinfo, arg_types='dynamic', name='', extrainfo=width) + def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] for ARG in ARGS: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -130,6 +130,16 @@ else: return BoxInt(cpu.bh_getinteriorfield_gc_i(array, index, descr)) +def do_getinteriorfield_raw(cpu, _, arraybox, indexbox, descr): + array = arraybox.getref_base() + index = indexbox.getint() + if descr.is_pointer_field(): + return BoxPtr(cpu.bh_getinteriorfield_raw_r(array, index, descr)) + elif descr.is_float_field(): + return BoxFloat(cpu.bh_getinteriorfield_raw_f(array, index, descr)) + else: + return BoxInt(cpu.bh_getionteriorfield_raw_i(array, index, descr)) + def do_setinteriorfield_gc(cpu, _, arraybox, indexbox, valuebox, descr): array = arraybox.getref_base() index = indexbox.getint() @@ -143,6 +153,7 @@ cpu.bh_setinteriorfield_gc_i(array, index, descr, valuebox.getint()) + def do_getfield_gc(cpu, _, structbox, fielddescr): struct = structbox.getref_base() if fielddescr.is_pointer_field(): diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,8 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + ops = self.do_getarrayitem(op) # for op in ops: self.emit_operation(op) @@ -190,6 +194,48 @@ ops.append(newop) return ops + def do_getarrayitem(self, op): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = offsetval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + return [ + ResOperation( + rop.GETINTERIORFIELD_RAW, arglist, op.result, descr=descr + ) + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -100,57 +100,55 @@ def test_array_fields(self): myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "signed_size", "points", "result_point"], + reds = ["n", "i", "points", "result_point"], ) POINT = lltype.Struct("POINT", ("x", lltype.Signed), ("y", lltype.Signed), ) - def f(n): - points = lltype.malloc(rffi.CArray(POINT), n, flavor="raw") - for i in xrange(n): - points[i].x = i * 2 - points[i].y = i * 2 + 1 - points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) - result_point = lltype.malloc(rffi.CArray(POINT), 1, flavor="raw") - result_point[0].x = 0 - result_point[0].y = 0 - result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + def f(points, result_point, n): i = 0 - signed_size = rffi.sizeof(lltype.Signed) while i < n: myjitdriver.jit_merge_point(i=i, points=points, n=n, - signed_size=signed_size, result_point=result_point) x = array_getitem( - types.slong, signed_size * 2, points, i, 0 + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 ) y = array_getitem( - types.slong, signed_size * 2, points, i, signed_size + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) ) cur_x = array_getitem( - types.slong, signed_size * 2, result_point, 0, 0 + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 ) cur_y = array_getitem( - types.slong, signed_size * 2, result_point, 0, signed_size + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) ) array_setitem( - types.slong, signed_size * 2, result_point, 0, 0, cur_x + x + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x ) array_setitem( - types.slong, signed_size * 2, result_point, 0, signed_size, cur_y + y + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y ) i += 1 - result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) - result = result_point[0].x * result_point[0].y - lltype.free(result_point, flavor="raw") - lltype.free(points, flavor="raw") - return result - assert self.meta_interp(f, [10]) == f(10) == 9000 + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 }) diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -411,7 +411,7 @@ def getaddressindll(self, name): return dlsym(self.lib, name) - at jit.oopspec("libffi_array_getitem") + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") def array_getitem(ffitype, width, addr, index, offset): for TYPE, ffitype2 in clibffi.ffitype_map: if ffitype is ffitype2: @@ -420,7 +420,7 @@ return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] assert False - at jit.oopspec("libffi_array_setitem") + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") def array_setitem(ffitype, width, addr, index, offset, value): for TYPE, ffitype2 in clibffi.ffitype_map: if ffitype is ffitype2: From noreply at buildbot.pypy.org Sun Nov 13 18:13:15 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 18:13:15 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: progress, gets are now compiled and executed properly Message-ID: <20111113171315.8762782A87@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49374:cc246e01cada Date: 2011-11-13 12:13 -0500 http://bitbucket.org/pypy/pypy/changeset/cc246e01cada/ Log: progress, gets are now compiled and executed properly diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -325,12 +325,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +825,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.extrainfo, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.extrainfo, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.extrainfo, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -1403,6 +1413,15 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + from pypy.rlib import libffi + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + from pypy.rlib import libffi + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -179,7 +179,7 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -204,7 +204,7 @@ ffitypeaddr = ffitypeval.box.getaddr() ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) offset = offsetval.box.getint() - width = offsetval.box.getint() + width = widthval.box.getint() descr = self._get_interior_descr(ffitype, width, offset) arglist = [ From noreply at buildbot.pypy.org Sun Nov 13 18:20:16 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 18:20:16 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: make more tests pass Message-ID: <20111113172016.11907820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49375:a90cb67cd9f9 Date: 2011-11-13 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/a90cb67cd9f9/ Log: make more tests pass diff --git a/py/_code/code.py b/py/_code/code.py --- a/py/_code/code.py +++ b/py/_code/code.py @@ -307,7 +307,7 @@ self._striptext = 'AssertionError: ' self._excinfo = tup self.type, self.value, tb = self._excinfo - self.typename = self.type.__name__ + self.typename = getattr(self.type, "__name__", "???") self.traceback = py.code.Traceback(tb) def __repr__(self): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -75,6 +75,9 @@ def done(self): raise NotImplementedError + def get_offset(self): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -86,6 +89,9 @@ def done(self): return self.offset >= self.size + def get_offset(self): + return self.offset + class ViewIterator(BaseIterator): def __init__(self, arr): self.indices = [0] * len(arr.shape) @@ -109,6 +115,9 @@ def done(self): return self._done + def get_offset(self): + return self.offset + class Call2Iterator(BaseIterator): def __init__(self, left, right): self.left = left @@ -121,6 +130,11 @@ def done(self): return self.left.done() or self.right.done() + def get_offset(self): + if isinstance(self.left, ConstantIterator): + return self.right.get_offset() + return self.left.get_offset() + class Call1Iterator(BaseIterator): def __init__(self, child): self.child = child @@ -131,6 +145,9 @@ def done(self): return self.child.done() + def get_offset(self): + return self.child.get_offset() + class ConstantIterator(BaseIterator): def next(self): pass @@ -138,6 +155,9 @@ def done(self): return False + def get_offset(self): + return 0 + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", "start"] @@ -231,24 +251,23 @@ def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver(greens=['signature'], - reds = ['i', 'size', 'result', 'self', 'cur_best', 'dtype']) - def loop(self, size): - xxx - result = 0 - cur_best = self.eval(self.start) - i = 1 + reds = ['i', 'result', 'self', 'cur_best', 'dtype']) + def loop(self): + i = self.start_iter() + result = i.get_offset() + cur_best = self.eval(i) + i.next() dtype = self.find_dtype() - while i < size: + while not i.done(): reduce_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, - size=size, i=i, result=result, + i=i, result=result, cur_best=cur_best) - xxx new_best = getattr(dtype, op_name)(cur_best, self.eval(i)) if dtype.ne(new_best, cur_best): - result = i + result = i.get_offset() cur_best = new_best - i += 1 + i.next() return result def impl(self, space): size = self.find_size() @@ -256,7 +275,7 @@ raise OperationError(space.w_ValueError, space.wrap("Can't call %s on zero-size arrays" \ % op_name)) - return space.wrap(loop(self, size)) + return self.compute_index(space, loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) def _all(self): @@ -486,6 +505,17 @@ def start_iter(self): raise NotImplementedError + def compute_index(self, space, offset): + offset -= self.start + if len(self.shape) == 1: + return space.wrap(offset // self.shards[0]) + indices_w = [] + for shard in self.shards: + r = offset // shard + indices_w.append(space.wrap(r)) + offset -= shard * r + return space.newtuple(indices_w) + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -10,7 +10,7 @@ reduce_driver = jit.JitDriver( greens = ["signature"], - reds = ["i", "size", "self", "dtype", "value", "obj"] + reds = ["i", "self", "dtype", "value", "obj"] ) class W_Ufunc(Wrappable): @@ -56,28 +56,27 @@ space, obj.find_dtype(), promote_to_largest=True ) - start = 0 + start = obj.start_iter() if self.identity is None: if size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) - value = obj.eval(0).convert_to(dtype) - start += 1 + value = obj.eval(start).convert_to(dtype) + start.next() else: value = self.identity.convert_to(dtype) new_sig = signature.Signature.find_sig([ self.reduce_signature, obj.signature ]) - return self.reduce(new_sig, start, value, obj, dtype, size).wrap(space) + return self.reduce(new_sig, start, value, obj, dtype).wrap(space) - def reduce(self, signature, start, value, obj, dtype, size): - i = start - while i < size: + def reduce(self, signature, i, value, obj, dtype): + while not i.done(): reduce_driver.jit_merge_point(signature=signature, self=self, value=value, obj=obj, i=i, - dtype=dtype, size=size) + dtype=dtype) value = self.func(dtype, value, obj.eval(i).convert_to(dtype)) - i += 1 + i.next() return value class W_Ufunc1(W_Ufunc): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -560,14 +560,27 @@ raises(ValueError, "b.min()") def test_argmax(self): + import sys from numpy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) - assert a.argmax() == 2 + r = a.argmax() + assert r == 2 b = array([]) - raises(ValueError, "b.argmax()") + try: + b.argmax() + except: + pass + else: + raise Exception("Did not raise") a = array(range(-5, 5)) - assert a.argmax() == 9 + r = a.argmax() + assert r == 9 + b = a[::2] + r = b.argmax() + assert r == 4 + r = (a + a).argmax() + assert r == 9 def test_argmin(self): from numpy import array From noreply at buildbot.pypy.org Sun Nov 13 18:21:09 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 13 Nov 2011 18:21:09 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: one more test Message-ID: <20111113172109.2212C820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49376:054e78d993dc Date: 2011-11-13 18:20 +0100 http://bitbucket.org/pypy/pypy/changeset/054e78d993dc/ Log: one more test diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -497,7 +497,8 @@ "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()")) except ValueError: pass - return space.wrap(space.is_true(self.get_concrete().eval(self.start).wrap(space))) + return space.wrap(space.is_true(self.get_concrete().eval( + self.start_iter()).wrap(space))) def getitem(self, item): raise NotImplementedError From noreply at buildbot.pypy.org Sun Nov 13 18:29:52 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 18:29:52 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: implement the optimization for set Message-ID: <20111113172952.0EA65820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49377:af9eea9cddb5 Date: 2011-11-13 12:29 -0500 http://bitbucket.org/pypy/pypy/changeset/af9eea9cddb5/ Log: implement the optimization for set diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -848,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.extrainfo, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.extrainfo, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.extrainfo, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1414,12 +1425,10 @@ return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) def _getinteriorfield_raw(ffitype, array, index, width, ofs): - from pypy.rlib import libffi addr = rffi.cast(rffi.VOIDP, array) return libffi.array_getitem(ffitype, width, addr, index, ofs) def do_getinteriorfield_raw_int(array, index, width, ofs): - from pypy.rlib import libffi res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) return res @@ -1501,6 +1510,13 @@ do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) + def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -130,16 +130,6 @@ else: return BoxInt(cpu.bh_getinteriorfield_gc_i(array, index, descr)) -def do_getinteriorfield_raw(cpu, _, arraybox, indexbox, descr): - array = arraybox.getref_base() - index = indexbox.getint() - if descr.is_pointer_field(): - return BoxPtr(cpu.bh_getinteriorfield_raw_r(array, index, descr)) - elif descr.is_float_field(): - return BoxFloat(cpu.bh_getinteriorfield_raw_f(array, index, descr)) - else: - return BoxInt(cpu.bh_getionteriorfield_raw_i(array, index, descr)) - def do_setinteriorfield_gc(cpu, _, arraybox, indexbox, valuebox, descr): array = arraybox.getref_base() index = indexbox.getint() @@ -153,7 +143,6 @@ cpu.bh_setinteriorfield_gc_i(array, index, descr, valuebox.getint()) - def do_getfield_gc(cpu, _, structbox, fielddescr): struct = structbox.getref_base() if fielddescr.is_pointer_field(): @@ -351,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -118,8 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) - elif oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: - ops = self.do_getarrayitem(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -194,7 +195,7 @@ ops.append(newop) return ops - def do_getarrayitem(self, op): + def do_getsetarrayitem(self, op, oopspec): ffitypeval = self.getvalue(op.getarg(1)) widthval = self.getvalue(op.getarg(2)) offsetval = self.getvalue(op.getarg(5)) @@ -211,10 +212,13 @@ self.getvalue(op.getarg(3)).force_box(self.optimizer), self.getvalue(op.getarg(4)).force_box(self.optimizer), ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) return [ - ResOperation( - rop.GETINTERIORFIELD_RAW, arglist, op.result, descr=descr - ) + ResOperation(opnum, arglist, op.result, descr=descr), ] def _get_interior_descr(self, ffitype, width, offset): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -480,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -149,6 +149,6 @@ return result_point[0].x * result_point[0].y assert self.meta_interp(main, [10]) == main(10) == 9000 - self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 }) From noreply at buildbot.pypy.org Sun Nov 13 21:23:17 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 21:23:17 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: support in teh x86 backend Message-ID: <20111113202317.0AE13820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49378:da2860819b0f Date: 2011-11-13 15:23 -0500 http://bitbucket.org/pypy/pypy/changeset/da2860819b0f/ Log: support in teh x86 backend diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1619,6 +1619,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1636,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1158,6 +1160,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -12,9 +12,7 @@ from pypy.tool.sourcetools import func_with_new_name -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} - +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -93,10 +91,6 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True - -class TestFfiCallSupportAll(TestFfiCall): - supports_all = True # supports_{floats,longlong,singlefloats} - def test_array_fields(self): myjitdriver = JitDriver( greens = [], @@ -152,3 +146,11 @@ self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 }) + + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): + supports_all = True # supports_{floats,longlong,singlefloats} From noreply at buildbot.pypy.org Sun Nov 13 21:34:23 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 21:34:23 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: optimize {get, set}interiorfield_{raw, gc} for itemsizes that can be addressed using special x86 addressing Message-ID: <20111113203423.8D540820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49379:5545a3c93d51 Date: 2011-11-13 15:34 -0500 http://bitbucket.org/pypy/pypy/changeset/5545a3c93d51/ Log: optimize {get,set}interiorfield_{raw,gc} for itemsizes that can be addressed using special x86 addressing diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1434,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: From noreply at buildbot.pypy.org Sun Nov 13 21:44:05 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 21:44:05 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: forgotten test file Message-ID: <20111113204405.22F13820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49380:34a22813b598 Date: 2011-11-13 15:43 -0500 http://bitbucket.org/pypy/pypy/changeset/34a22813b598/ Log: forgotten test file diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiCall(Jit386Mixin, test_fficall.FfiCallTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + pass From noreply at buildbot.pypy.org Sun Nov 13 21:44:58 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 13 Nov 2011 21:44:58 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: missing attr Message-ID: <20111113204458.2CF8E820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49381:27cd35fd7358 Date: 2011-11-13 15:44 -0500 http://bitbucket.org/pypy/pypy/changeset/27cd35fd7358/ Log: missing attr diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py --- a/pypy/jit/backend/x86/test/test_fficall.py +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -5,4 +5,4 @@ class TestFfiCall(Jit386Mixin, test_fficall.FfiCallTests): # for the individual tests see # ====> ../../../metainterp/test/test_fficall.py - pass + supports_all = True From noreply at buildbot.pypy.org Sun Nov 13 21:58:50 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 21:58:50 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111113205850.12653820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49382:01ed1de7dd3e Date: 2011-11-13 13:49 +0100 http://bitbucket.org/pypy/pypy/changeset/01ed1de7dd3e/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -577,7 +577,7 @@ self.meta_interp(g, [10], backendopt=True) self.check_aborted_count(1) self.check_resops(call=0, call_assembler=2) - self.check_tree_loop_count(3) + self.check_jitcell_token_count(2) def test_directly_call_assembler(self): driver = JitDriver(greens = ['codeno'], reds = ['i'], @@ -1211,11 +1211,11 @@ portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) - self.check_tree_loop_count(1) - self.check_loop_count(0) + self.check_jitcell_token_count(1) + self.check_trace_count(1) self.meta_interp(main, [3, 10, True, False], inline=True) - self.check_tree_loop_count(0) - self.check_loop_count(0) + self.check_jitcell_token_count(0) + self.check_trace_count(0) def test_trace_from_start_does_not_prevent_inlining(self): driver = JitDriver(greens = ['c', 'bc'], reds = ['i']) diff --git a/pypy/jit/metainterp/test/test_tl.py b/pypy/jit/metainterp/test/test_tl.py --- a/pypy/jit/metainterp/test/test_tl.py +++ b/pypy/jit/metainterp/test/test_tl.py @@ -72,7 +72,7 @@ res = self.meta_interp(main, [0, 6], listops=True, backendopt=True) assert res == 5040 - self.check_resops({'jump': 2, 'int_le': 2, 'guard_value': 1, + self.check_resops({'jump': 1, 'int_le': 2, 'guard_value': 1, 'int_mul': 2, 'guard_false': 2, 'int_sub': 2}) def test_tl_2(self): @@ -80,7 +80,7 @@ res = self.meta_interp(main, [1, 10], listops=True, backendopt=True) assert res == main(1, 10) - self.check_resops({'int_le': 2, 'int_sub': 2, 'jump': 2, + self.check_resops({'int_le': 2, 'int_sub': 2, 'jump': 1, 'guard_false': 2, 'guard_value': 1}) def test_tl_call(self, listops=True, policy=None): diff --git a/pypy/jit/metainterp/test/test_virtualizable.py b/pypy/jit/metainterp/test/test_virtualizable.py --- a/pypy/jit/metainterp/test/test_virtualizable.py +++ b/pypy/jit/metainterp/test/test_virtualizable.py @@ -582,7 +582,7 @@ res = self.meta_interp(f, [123], policy=StopAtXPolicy(g)) assert res == f(123) self.check_aborted_count(2) - self.check_tree_loop_count(0) + self.check_jitcell_token_count(0) def test_external_read_with_exception(self): jitdriver = JitDriver(greens = [], reds = ['frame'], @@ -621,7 +621,7 @@ res = self.meta_interp(f, [123], policy=StopAtXPolicy(g)) assert res == f(123) self.check_aborted_count(2) - self.check_tree_loop_count(0) + self.check_jitcell_token_count(0) def test_external_write(self): jitdriver = JitDriver(greens = [], reds = ['frame'], @@ -653,7 +653,7 @@ res = self.meta_interp(f, [240], policy=StopAtXPolicy(g)) assert res == f(240) self.check_aborted_count(3) - self.check_tree_loop_count(0) + self.check_jitcell_token_count(0) def test_external_read_sometimes(self): jitdriver = JitDriver(greens = [], reds = ['frame'], diff --git a/pypy/jit/metainterp/test/test_virtualref.py b/pypy/jit/metainterp/test/test_virtualref.py --- a/pypy/jit/metainterp/test/test_virtualref.py +++ b/pypy/jit/metainterp/test/test_virtualref.py @@ -321,7 +321,7 @@ assert res == 13 self.check_resops(new_with_vtable=2, # the vref, but not XY() new_array=0) # and neither next1/2/3 - self.check_loop_count(1) + self.check_trace_count(1) self.check_aborted_count(0) def test_blackhole_forces(self): @@ -363,7 +363,7 @@ assert res == 13 self.check_resops(new_with_vtable=0, # all virtualized in the n!=13 loop new_array=0) - self.check_loop_count(1) + self.check_trace_count(1) self.check_aborted_count(0) def test_bridge_forces(self): @@ -410,7 +410,7 @@ # res = self.meta_interp(f, [72]) assert res == 6 - self.check_loop_count(2) # the loop and the bridge + self.check_trace_count(2) # the loop and the bridge self.check_resops(new_with_vtable=2, # loop: nothing; bridge: vref, xy new_array=2) # bridge: next4, next5 self.check_aborted_count(0) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -203,7 +203,7 @@ m -= 1 self.meta_interp(f2, [i2]) try: - self.check_tree_loop_count(1) + self.check_jitcell_token_count(1) break except AssertionError: print "f2: no loop generated for i2==%d" % i2 @@ -218,7 +218,7 @@ m -= 1 self.meta_interp(f1, [i1]) try: - self.check_tree_loop_count(1) + self.check_jitcell_token_count(1) break except AssertionError: print "f1: no loop generated for i1==%d" % i1 @@ -238,8 +238,8 @@ self.meta_interp(f1, [8]) # it should generate one "loop" only, which ends in a FINISH # corresponding to the return from f2. - self.check_tree_loop_count(1) - self.check_loop_count(0) + self.check_trace_count(1) + self.check_resops(jump=0) def test_simple_loop(self): mydriver = JitDriver(greens=[], reds=['m']) @@ -248,8 +248,8 @@ mydriver.jit_merge_point(m=m) m = m - 1 self.meta_interp(f1, [8]) - self.check_loop_count(1) - self.check_resops({'jump': 2, 'guard_true': 2, 'int_gt': 2, + self.check_trace_count(1) + self.check_resops({'jump': 1, 'guard_true': 2, 'int_gt': 2, 'int_sub': 2}) def test_void_red_variable(self): diff --git a/pypy/jit/metainterp/test/test_warmstate.py b/pypy/jit/metainterp/test/test_warmstate.py --- a/pypy/jit/metainterp/test/test_warmstate.py +++ b/pypy/jit/metainterp/test/test_warmstate.py @@ -192,12 +192,12 @@ class FakeLoopToken(object): pass looptoken = FakeLoopToken() - state.attach_unoptimized_bridge_from_interp([ConstInt(5), - constfloat(2.25)], - looptoken) + state.attach_procedure_to_interp([ConstInt(5), + constfloat(2.25)], + looptoken) cell1 = get_jitcell(True, 5, 2.25) assert cell1.counter < 0 - assert cell1.get_entry_loop_token() is looptoken + assert cell1.get_procedure_token() is looptoken def test_make_jitdriver_callbacks_1(): class FakeWarmRunnerDesc: From noreply at buildbot.pypy.org Sun Nov 13 21:58:51 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 13 Nov 2011 21:58:51 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: translation fix Message-ID: <20111113205851.5A46A820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49383:e3289bc8ecbc Date: 2011-11-13 18:07 +0100 http://bitbucket.org/pypy/pypy/changeset/e3289bc8ecbc/ Log: translation fix diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -169,6 +169,7 @@ loop.original_jitcell_token = jitcell_token for label in all_target_tokens: + assert isinstance(label, TargetToken) label.original_jitcell_token = jitcell_token jitcell_token.target_tokens = all_target_tokens send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") @@ -227,7 +228,9 @@ target_token = loop.operations[-1].getdescr() resumekey.compile_and_attach(metainterp, loop) - label.getdescr().original_jitcell_token = loop.original_jitcell_token + target_token = label.getdescr() + assert isinstance(target_token, TargetToken) + target_token.original_jitcell_token = loop.original_jitcell_token record_loop_or_bridge(metainterp_sd, loop) return target_token diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -40,7 +40,9 @@ assert isinstance(descr, JitCellToken) if not descr.target_tokens: assert self.last_label_descr is not None - assert self.last_label_descr.targeting_jitcell_token is descr + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr op.setdescr(self.last_label_descr) else: assert len(descr.target_tokens) == 1 diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -24,7 +24,6 @@ self.importable_values = {} self.emitting_dissabled = False self.emitted_guards = 0 - self.inline_short_preamble = True def ensure_imported(self, value): if not self.emitting_dissabled and value in self.importable_values: @@ -51,6 +50,9 @@ become the preamble or entry bridge (don't think there is a distinction anymore)""" + inline_short_preamble = True + did_import = False + def __init__(self, metainterp_sd, loop, optimizations): self.optimizer = UnrollableOptimizer(metainterp_sd, loop, optimizations) @@ -110,7 +112,11 @@ self.export_state(stop_label) loop.operations.append(stop_label) else: - assert stop_label.getdescr().targeting_jitcell_token is start_label.getdescr().targeting_jitcell_token + stop_target = stop_label.getdescr() + start_target = start_label.getdescr() + assert isinstance(stop_target, TargetToken) + assert isinstance(start_target, TargetToken) + assert stop_target.targeting_jitcell_token is start_target.targeting_jitcell_token jumpop = ResOperation(rop.JUMP, stop_label.getarglist(), None, descr=start_label.getdescr()) self.close_loop(jumpop) @@ -324,7 +330,9 @@ maxguards = self.optimizer.metainterp_sd.warmrunnerdesc.memory_manager.max_retrace_guards if self.optimizer.emitted_guards > maxguards: - jumpop.getdescr().targeting_jitcell_token.retraced_count = sys.maxint + target_token = jumpop.getdescr() + assert isinstance(target_token, TargetToken) + target_token.targeting_jitcell_token.retraced_count = sys.maxint def finilize_short_preamble(self, start_label): short = self.short diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2037,6 +2037,7 @@ live_arg_boxes[num_green_args:], start_resumedescr) if target_token is not None: + assert isinstance(target_token, TargetToken) self.jitdriver_sd.warmstate.attach_procedure_to_interp(greenkey, target_token.targeting_jitcell_token) self.staticdata.stats.add_jitcell_token(target_token.targeting_jitcell_token) @@ -2044,6 +2045,7 @@ if target_token is not None: # raise if it *worked* correctly self.history.inputargs = None self.history.operations = None + assert isinstance(target_token, TargetToken) raise GenerateMergePoint(live_arg_boxes, target_token.targeting_jitcell_token) def compile_trace(self, live_arg_boxes, start_resumedescr): @@ -2064,6 +2066,7 @@ if target_token is not None: # raise if it *worked* correctly self.history.inputargs = None self.history.operations = None + assert isinstance(target_token, TargetToken) raise GenerateMergePoint(live_arg_boxes, target_token.targeting_jitcell_token) def compile_bridge_and_loop(self, original_boxes, live_arg_boxes, start, From noreply at buildbot.pypy.org Mon Nov 14 00:08:14 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 14 Nov 2011 00:08:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Define getarrayitem_raw, getfield_raw, etc. Sign extend getfield result. Message-ID: <20111113230814.CDCF8820BE@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49384:3bebd83ccde0 Date: 2011-11-13 18:08 -0500 http://bitbucket.org/pypy/pypy/changeset/3bebd83ccde0/ Log: Define getarrayitem_raw, getfield_raw, etc. Sign extend getfield result. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -312,6 +312,7 @@ self.mc.stbx(value_loc.value, base_loc.value, ofs.value) else: assert 0, "size not supported" + emit_setfield_raw = emit_setfield_gc def emit_getfield_gc(self, op, arglocs, regalloc): @@ -338,6 +339,14 @@ self.mc.lbzx(res.value, base_loc.value, ofs.value) else: assert 0, "size not supported" + + #XXX Hack, Hack, Hack + if not we_are_translated(): + descr = op.getdescr() + size = descr.get_field_size(False) + signed = descr.is_field_signed() + self._ensure_result_bit_extension(res, size, signed) + emit_getfield_raw = emit_getfield_gc emit_getfield_raw_pure = emit_getfield_gc emit_getfield_gc_pure = emit_getfield_gc @@ -376,6 +385,8 @@ else: assert 0, "scale %s not supported" % (scale.value) + emit_setarrayitem_raw = emit_setarrayitem_gc + def emit_getarrayitem_gc(self, op, arglocs, regalloc): res, base_loc, ofs_loc, scale, ofs = arglocs if scale.value > 0: @@ -409,6 +420,9 @@ signed = descr.is_item_signed() self._ensure_result_bit_extension(res, size, signed) + emit_getarrayitem_raw = emit_getarrayitem_gc + emit_getarrayitem_gc_pure = emit_getarrayitem_gc + def emit_strlen(self, op, arglocs, regalloc): l0, l1, res = arglocs if l1.is_imm(): diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -449,6 +449,8 @@ self.possibly_free_vars(boxes) return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] + prepare_setarrayitem_raw = prepare_setarrayitem_gc + def prepare_getarrayitem_gc(self, op): a0, a1 = boxes = list(op.getarglist()) _, scale, ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) @@ -462,6 +464,9 @@ self.possibly_free_var(op.result) return [res, base_loc, ofs_loc, imm(scale), imm(ofs)] + prepare_getarrayitem_raw = prepare_getarrayitem_gc + prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc + def prepare_strlen(self, op): l0, box = self._ensure_value_is_boxed(op.getarg(0)) boxes = [box] From noreply at buildbot.pypy.org Mon Nov 14 00:22:54 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 14 Nov 2011 00:22:54 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: show the annotator this case isn't posssible Message-ID: <20111113232254.2CD0B820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49385:99d01fa4f6a8 Date: 2011-11-13 18:22 -0500 http://bitbucket.org/pypy/pypy/changeset/99d01fa4f6a8/ Log: show the annotator this case isn't posssible diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -217,6 +217,8 @@ elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: opnum = rop.SETINTERIORFIELD_RAW arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False return [ ResOperation(opnum, arglist, op.result, descr=descr), ] From noreply at buildbot.pypy.org Mon Nov 14 10:18:32 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 10:18:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: one more test and start working on repr Message-ID: <20111114091832.9B9BD820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49386:e49090902cb5 Date: 2011-11-14 10:18 +0100 http://bitbucket.org/pypy/pypy/changeset/e49090902cb5/ Log: one more test and start working on repr diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -331,23 +331,20 @@ return self.get_concrete().descr_len(space) def descr_repr(self, space): - xxx # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, # use recursive calls to to_str() to do the work. concrete = self.get_concrete() res = StringBuilder() res.append("array(") - myview = NDimSlice(concrete, self.signature, [], self.shape) - res0 = myview.to_str(True, indent=' ') #This is for numpy compliance: an empty slice reports its shape - if res0 == "[]" and isinstance(self, NDimSlice): + if not concrete.find_size(): res.append("[], shape=(") self_shape = str(self.shape) res.append_slice(str(self_shape), 1, len(self_shape)-1) res.append(')') else: - res.append(res0) + concrete.to_str(True, res, indent=' ') dtype = concrete.find_dtype() if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ @@ -429,7 +426,7 @@ concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item).wrap(space) - return space.wrap(self._create_slice(space, w_idx)) + return space.wrap(self.create_slice(space, w_idx)) def descr_setitem(self, space, w_idx, w_value): self.invalidated() @@ -447,10 +444,10 @@ assert isinstance(w_value, BaseArray) else: w_value = convert_to_array(space, w_value) - view = self._create_slice(space, w_idx) + view = self.create_slice(space, w_idx) view.setslice(space, w_value) - def _create_slice(self, space, w_idx): + def create_slice(self, space, w_idx): new_sig = signature.Signature.find_sig([ NDimSlice.signature, self.signature ]) @@ -782,7 +779,6 @@ return self.parent.get_root_shape() def to_str(self, comma, indent=' '): - xxx ret = StringBuilder() dtype = self.find_dtype() ndims = len(self.shape) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -760,6 +760,12 @@ a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 + def test_ufunc_negative(self): + from numpy import array, negative + a = array([[1, 2], [3, 4]]) + b = negative(a + a) + assert (b == [[-1, -2], [-3, -4]]).all() + def test_broadcast(self): skip("not working") import numpy From noreply at buildbot.pypy.org Mon Nov 14 10:45:02 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 10:45:02 +0100 (CET) Subject: [pypy-commit] pypy default: Backout ad8b93cf993c and 38f173ee998a, which fail on tannit. Message-ID: <20111114094502.5C99A820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49387:f718100d7782 Date: 2011-11-14 10:44 +0100 http://bitbucket.org/pypy/pypy/changeset/f718100d7782/ Log: Backout ad8b93cf993c and 38f173ee998a, which fail on tannit. diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,8 +42,6 @@ so_prefixes = ('',) - extra_libs = [] - def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -183,7 +181,7 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries + self.extra_libs) + link_files + list(eci.link_extra) + libraries) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,8 +6,7 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread',) - extra_libs = ['-lrt'] + link_flags = ('-pthread', '-lrt') cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () From noreply at buildbot.pypy.org Mon Nov 14 10:45:41 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 10:45:41 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20111114094541.8C817820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49388:02cab8c5480f Date: 2011-11-14 10:40 +0100 http://bitbucket.org/pypy/pypy/changeset/02cab8c5480f/ Log: merge default diff too long, truncating to 10000 out of 12780 lines diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() @@ -302,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -348,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -442,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -650,10 +650,13 @@ assert size > 0, 'size should be > 0' type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1<' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -443,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -798,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -815,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -829,6 +844,10 @@ if self._is_gc(op.args[0]): return op + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] + def rewrite_op_force_cast(self, op): v_arg = op.args[0] v_result = op.result @@ -848,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -880,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1097,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -37,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -229,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,10 +5,10 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -742,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -848,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -856,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -900,9 +898,69 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -913,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -576,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -598,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -1103,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -499,9 +499,12 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b @arguments("r", returns="i") def bhimpl_cast_ptr_to_int(a): i = lltype.cast_ptr_to_int(a) @@ -512,6 +515,10 @@ ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") return lltype.cast_int_to_ptr(llmemory.GCREF, i) + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass + @arguments("i", returns="i") def bhimpl_int_copy(a): return a @@ -630,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -929,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -943,6 +946,15 @@ self.aborted_keys = [] self.invalidated_token_numbers = set() + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) @@ -225,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,36 +6,18 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -126,14 +109,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,72 +151,84 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) + + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): @@ -169,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,12 +1,12 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -209,13 +220,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -230,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -244,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -283,11 +302,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -311,20 +330,20 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -346,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -392,6 +412,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -477,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) @@ -524,7 +546,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -337,7 +332,7 @@ def optimize_INT_IS_ZERO(self, op): self._optimize_nullness(op, op.getarg(0), False) - def _optimize_oois_ooisnot(self, op, expect_isnot): + def _optimize_oois_ooisnot(self, op, expect_isnot, instance): value0 = self.getvalue(op.getarg(0)) value1 = self.getvalue(op.getarg(1)) if value0.is_virtual(): @@ -355,21 +350,28 @@ elif value0 is value1: self.make_constant_int(op.result, not expect_isnot) else: - cls0 = value0.get_constant_class(self.optimizer.cpu) - if cls0 is not None: - cls1 = value1.get_constant_class(self.optimizer.cpu) - if cls1 is not None and not cls0.same_constant(cls1): - # cannot be the same object, as we know that their - # class is different - self.make_constant_int(op.result, expect_isnot) - return + if instance: + cls0 = value0.get_constant_class(self.optimizer.cpu) + if cls0 is not None: + cls1 = value1.get_constant_class(self.optimizer.cpu) + if cls1 is not None and not cls0.same_constant(cls1): + # cannot be the same object, as we know that their + # class is different + self.make_constant_int(op.result, expect_isnot) + return self.emit_operation(op) + def optimize_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, False) + def optimize_PTR_NE(self, op): - self._optimize_oois_ooisnot(op, True) + self._optimize_oois_ooisnot(op, True, False) - def optimize_PTR_EQ(self, op): - self._optimize_oois_ooisnot(op, False) + def optimize_INSTANCE_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, True) + + def optimize_INSTANCE_PTR_NE(self, op): + self._optimize_oois_ooisnot(op, True, True) ## def optimize_INSTANCEOF(self, op): ## value = self.getvalue(op.args[0]) @@ -437,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) @@ -458,10 +469,9 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -508,13 +509,13 @@ ops = """ [p0] guard_class(p0, ConstClass(node_vtable)) [] - i0 = ptr_ne(p0, NULL) + i0 = instance_ptr_ne(p0, NULL) guard_true(i0) [] - i1 = ptr_eq(p0, NULL) + i1 = instance_ptr_eq(p0, NULL) guard_false(i1) [] - i2 = ptr_ne(NULL, p0) + i2 = instance_ptr_ne(NULL, p0) guard_true(i0) [] - i3 = ptr_eq(NULL, p0) + i3 = instance_ptr_eq(NULL, p0) guard_false(i1) [] jump(p0) """ @@ -680,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -935,7 +971,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +986,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -2026,7 +2110,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -4074,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4176,15 +4292,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -4664,11 +4803,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4676,21 +4815,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4699,6 +4858,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4781,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4792,10 +4982,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) @@ -4812,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -2168,13 +2183,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -2683,7 +2698,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -3331,7 +3346,7 @@ jump(p1, i1, i2, i6) ''' self.optimize_loop(ops, expected, preamble) - + # ---------- @@ -4783,6 +4798,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -5800,10 +5861,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -6233,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6248,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ @@ -7280,7 +7347,7 @@ ops = """ [p1, p2] setarrayitem_gc(p1, 2, 10, descr=arraydescr) - setarrayitem_gc(p2, 3, 13, descr=arraydescr) + setarrayitem_gc(p2, 3, 13, descr=arraydescr) call(0, p1, p2, 0, 0, 10, descr=arraycopydescr) jump(p1, p2) """ @@ -7307,6 +7374,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,8 +183,21 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -200,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) @@ -240,7 +255,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -271,6 +271,74 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." @@ -283,8 +351,11 @@ return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -386,8 +457,7 @@ def optimize_NEW_ARRAY(self, op): sizebox = self.get_constant_box(op.getarg(0)) - # For now we can't make arrays of structs virtual. - if sizebox is not None and not op.getdescr().is_array_of_structs(): + if sizebox is not None: # if the original 'op' did not have a ConstInt as argument, # build a new one with the ConstInt argument if not isinstance(op.getarg(0), ConstInt): @@ -432,6 +502,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -479,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -501,12 +574,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +601,16 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +647,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +665,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,8 +1,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,7 +107,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +120,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,53 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) - - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -180,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -226,18 +255,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +301,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) @@ -312,6 +317,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -322,6 +328,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -408,6 +415,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -441,11 +449,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -467,6 +484,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -508,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -522,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -538,13 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -165,7 +165,7 @@ if not we_are_translated(): for b in registers[count:]: assert not oldbox.same_box(b) - + def make_result_of_lastop(self, resultbox): got_type = resultbox.type @@ -199,7 +199,7 @@ 'float_add', 'float_sub', 'float_mul', 'float_truediv', 'float_lt', 'float_le', 'float_eq', 'float_ne', 'float_gt', 'float_ge', - 'ptr_eq', 'ptr_ne', + 'ptr_eq', 'ptr_ne', 'instance_ptr_eq', 'instance_ptr_ne', ]: exec py.code.Source(''' @arguments("box", "box") @@ -240,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): @@ -604,7 +604,7 @@ opimpl_setinteriorfield_gc_i = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_f = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_r = _opimpl_setinteriorfield_gc_any - + @arguments("box", "descr") def _opimpl_getfield_raw_any(self, box, fielddescr): @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version @@ -404,8 +407,8 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', - 'CAST_FLOAT_TO_INT/1', - 'CAST_INT_TO_FLOAT/1', + 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would + 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', 'CAST_SINGLEFLOAT_TO_FLOAT/1', # @@ -437,7 +440,8 @@ # 'PTR_EQ/2b', 'PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', + 'INSTANCE_PTR_EQ/2b', + 'INSTANCE_PTR_NE/2b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -469,6 +473,7 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -139,7 +140,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +274,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +406,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -436,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -455,7 +461,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +543,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -546,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -599,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): @@ -884,6 +917,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1208,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3436,7 +3437,7 @@ res = self.meta_interp(f, [16]) assert res == f(16) - def test_ptr_eq_str_constants(self): + def test_ptr_eq(self): myjitdriver = JitDriver(greens = [], reds = ["n", "x"]) class A(object): def __init__(self, v): @@ -3452,22 +3453,142 @@ res = self.meta_interp(f, [10, 1]) assert res == 0 + def test_instance_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "i", "a1", "a2"]) + class A(object): + pass + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + i += a is a1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + if a is a2: + i += 1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def test_virtual_array_of_structs(self): myjitdriver = JitDriver(greens = [], reds=["n", "d"]) def f(n): d = None while n > 0: myjitdriver.jit_merge_point(n=n, d=d) - d = {} + d = {"q": 1} if n % 2: d["k"] = n else: d["z"] = n - n -= len(d) + n -= len(d) - d["q"] return n res = self.meta_interp(f, [10]) assert res == 0 + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = {"key": n} + n = g(x) + del x["key"] + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3522,11 +3643,12 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): - from pypy.rlib.rerased import erase_int, unerase_int, new_erasing_pair - eraseX, uneraseX = new_erasing_pair("X") + eraseX, uneraseX = rerased.new_erasing_pair("X") # class X: def __init__(self, a, b): @@ -3539,19 +3661,33 @@ e = eraseX(X(i, j)) else: try: - e = erase_int(i) + e = rerased.erase_int(i) except OverflowError: return -42 if j & 1: x = uneraseX(e) return x.a - x.b else: - return unerase_int(e) + return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_float.py b/pypy/jit/metainterp/test/test_float.py --- a/pypy/jit/metainterp/test/test_float.py +++ b/pypy/jit/metainterp/test/test_float.py @@ -1,5 +1,6 @@ -import math +import math, sys from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.rlib.rarithmetic import intmask, r_uint class FloatTests: @@ -45,6 +46,34 @@ res = self.interp_operations(f, [-2.0]) assert res == -8.5 + def test_cast_float_to_int(self): + def g(f): + return int(f) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_float_to_uint(self): + def g(f): + return intmask(r_uint(f)) + res = self.interp_operations(g, [sys.maxint*2.0]) + assert res == intmask(long(sys.maxint*2.0)) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_int_to_float(self): + def g(i): + return float(i) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == -12345.0 + + def test_cast_uint_to_float(self): + def g(i): + return float(r_uint(i)) + res = self.interp_operations(g, [intmask(sys.maxint*2)]) + assert type(res) is float and res == float(sys.maxint*2) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == float(long(r_uint(-12345))) + class TestOOtype(FloatTests, OOJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -371,3 +371,17 @@ assert h.is_unescaped(box1) h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) assert not h.is_unescaped(box1) + + h = HeapCache() + h.new_array(box1, lengthbox1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, lengthbox2, box2]) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), [box1] + ) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck +from pypy.rlib.rstring import StringBuilder import py @@ -590,4 +591,14 @@ assert res == 4 self.check_operations_history(int_add_ovf=0) res = self.interp_operations(fn, [sys.maxint]) - assert res == 12 \ No newline at end of file + assert res == 12 + + def test_copy_str_content(self): + def fn(n): + a = StringBuilder() + x = [1] + a.append("hello world") + return x[0] + res = self.interp_operations(fn, [0]) + assert res == 1 + self.check_operations_history(getarrayitem_gc=0, getarrayitem_gc_pure=0 ) \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, @@ -62,7 +62,7 @@ clear_tcache() return jittify_and_run(interp, graph, args, backendopt=backendopt, **kwds) -def jittify_and_run(interp, graph, args, repeat=1, +def jittify_and_run(interp, graph, args, repeat=1, graph_and_interp_only=False, backendopt=False, trace_limit=sys.maxint, inline=False, loop_longevity=0, retrace_limit=5, function_threshold=4, @@ -93,6 +93,8 @@ jd.warmstate.set_param_max_retrace_guards(max_retrace_guards) jd.warmstate.set_param_enable_opts(enable_opts) warmrunnerdesc.finish() + if graph_and_interp_only: + return interp, graph res = interp.eval_graph(graph, args) if not kwds.get('translate_support_code', False): warmrunnerdesc.metainterp_sd.profiler.finish() @@ -157,6 +159,9 @@ def get_stats(): return pyjitpl._warmrunnerdesc.stats +def reset_stats(): + pyjitpl._warmrunnerdesc.stats.clear() + def get_translator(): return pyjitpl._warmrunnerdesc.translator diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -19,7 +19,7 @@ class W_RSocket(Wrappable, RSocket): def __del__(self): self.clear_all_weakrefs() - self.close() + RSocket.__del__(self) def accept_w(self, space): """accept() -> (socket object, address info) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -211,7 +211,9 @@ return result def __del__(self): - self.clear_all_weakrefs() + # note that we don't call clear_all_weakrefs here because + # an array with freed buffer is ok to see - it's just empty with 0 + # length self.setlen(0) def setlen(self, size): diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -824,6 +824,22 @@ r = weakref.ref(a) assert r() is a + def test_subclass_del(self): + import array, gc, weakref + l = [] + + class A(array.array): + pass + + a = A('d') + a.append(3.0) + r = weakref.ref(a, lambda a: l.append(a())) + del a + gc.collect(); gc.collect() # XXX needs two of them right now... + assert l + assert l[0] is None or len(l[0]) == 0 + + class TestCPythonsOwnArray(BaseArrayTests): def setup_class(cls): @@ -844,11 +860,7 @@ cls.w_tempfile = cls.space.wrap( str(py.test.ensuretemp('array').join('tmpfile'))) cls.w_maxint = cls.space.wrap(sys.maxint) - - - - - + def test_buffer_info(self): a = self.array('c', 'Hi!') bi = a.buffer_info() diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith('': + if isinstance(w_rhs, Scalar): + index = int(interp.space.float_w( + w_rhs.value.wrap(interp.space))) + dtype = interp.space.fromcache(W_Float64Dtype) + return Scalar(dtype, w_lhs.get_concrete().eval(index)) + else: + raise NotImplementedError else: - print "Unknown opcode: %s" % b - raise BogusBytecode() - if len(stack) != 1: - print "Bogus bytecode, uneven stack length" - raise BogusBytecode() - return stack[0] + raise NotImplementedError + if not isinstance(w_res, BaseArray): + dtype = interp.space.fromcache(W_Float64Dtype) + w_res = scalar_w(interp.space, dtype, w_res) + return w_res + + def __repr__(self): + return '(%r %s %r)' % (self.lhs, self.name, self.rhs) + +class FloatConstant(Node): + def __init__(self, v): + self.v = float(v) + + def __repr__(self): + return "Const(%s)" % self.v + + def wrap(self, space): + return space.wrap(self.v) + + def execute(self, interp): + dtype = interp.space.fromcache(W_Float64Dtype) + assert isinstance(dtype, W_Float64Dtype) + return Scalar(dtype, dtype.box(self.v)) + +class RangeConstant(Node): + def __init__(self, v): + self.v = int(v) + + def execute(self, interp): + w_list = interp.space.newlist( + [interp.space.wrap(float(i)) for i in range(self.v)]) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return 'Range(%s)' % self.v + +class Code(Node): + def __init__(self, statements): + self.statements = statements + + def __repr__(self): + return "\n".join([repr(i) for i in self.statements]) + +class ArrayConstant(Node): + def __init__(self, items): + self.items = items + + def wrap(self, space): + return space.newlist([item.wrap(space) for item in self.items]) + + def execute(self, interp): + w_list = self.wrap(interp.space) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return "[" + ", ".join([repr(item) for item in self.items]) + "]" + +class SliceConstant(Node): + def __init__(self): + pass + + def __repr__(self): + return 'slice()' + +class Execute(Node): + def __init__(self, expr): + self.expr = expr + + def __repr__(self): + return repr(self.expr) + + def execute(self, interp): + interp.results.append(self.expr.execute(interp)) + +class FunctionCall(Node): + def __init__(self, name, args): + self.name = name + self.args = args + + def __repr__(self): + return "%s(%s)" % (self.name, ", ".join([repr(arg) + for arg in self.args])) + + def execute(self, interp): + if self.name in SINGLE_ARG_FUNCTIONS: + if len(self.args) != 1: + raise ArgumentMismatch + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray + if self.name == "sum": + w_res = arr.descr_sum(interp.space) + elif self.name == "prod": + w_res = arr.descr_prod(interp.space) + elif self.name == "max": + w_res = arr.descr_max(interp.space) + elif self.name == "min": + w_res = arr.descr_min(interp.space) + elif self.name == "any": + w_res = arr.descr_any(interp.space) + elif self.name == "all": + w_res = arr.descr_all(interp.space) + elif self.name == "unegative": + neg = interp_ufuncs.get(interp.space).negative + w_res = neg.call(interp.space, [arr]) + else: + assert False # unreachable code + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = interp.space.fromcache(W_Float64Dtype) + elif isinstance(w_res, BoolObject): + dtype = interp.space.fromcache(W_BoolDtype) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) + else: + raise WrongFunctionName + +class Parser(object): + def parse_identifier(self, id): + id = id.strip(" ") + #assert id.isalpha() + return Variable(id) + + def parse_expression(self, expr): + tokens = [i for i in expr.split(" ") if i] + if len(tokens) == 1: + return self.parse_constant_or_identifier(tokens[0]) + stack = [] + tokens.reverse() + while tokens: + token = tokens.pop() + if token == ')': + raise NotImplementedError + elif self.is_identifier_or_const(token): + if stack: + name = stack.pop().name + lhs = stack.pop() + rhs = self.parse_constant_or_identifier(token) + stack.append(Operator(lhs, name, rhs)) + else: + stack.append(self.parse_constant_or_identifier(token)) + else: + stack.append(Variable(token)) + assert len(stack) == 1 + return stack[-1] + + def parse_constant(self, v): + lgt = len(v)-1 + assert lgt >= 0 + if ':' in v: + # a slice + assert v == ':' + return SliceConstant() + if v[0] == '[': + return ArrayConstant([self.parse_constant(elem) + for elem in v[1:lgt].split(",")]) + if v[0] == '|': + return RangeConstant(v[1:lgt]) + return FloatConstant(v) + + def is_identifier_or_const(self, v): + c = v[0] + if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or + (c >= '0' and c <= '9') or c in '-.[|:'): + if v == '-' or v == "->": + return False + return True + return False + + def parse_function_call(self, v): + l = v.split('(') + assert len(l) == 2 + name = l[0] + cut = len(l[1]) - 1 + assert cut >= 0 + args = [self.parse_constant_or_identifier(id) + for id in l[1][:cut].split(",")] + return FunctionCall(name, args) + + def parse_constant_or_identifier(self, v): + c = v[0] + if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): + if '(' in v: + return self.parse_function_call(v) + return self.parse_identifier(v) + return self.parse_constant(v) + + def parse_array_subscript(self, v): + v = v.strip(" ") + l = v.split("[") + lgt = len(l[1]) - 1 + assert lgt >= 0 + rhs = self.parse_constant_or_identifier(l[1][:lgt]) + return l[0], rhs + + def parse_statement(self, line): + if '=' in line: + lhs, rhs = line.split("=") + lhs = lhs.strip(" ") + if '[' in lhs: + name, index = self.parse_array_subscript(lhs) + return ArrayAssignment(name, index, self.parse_expression(rhs)) + else: + return Assignment(lhs, self.parse_expression(rhs)) + else: + return Execute(self.parse_expression(line)) + + def parse(self, code): + statements = [] + for line in code.split("\n"): + if '#' in line: + line = line.split('#', 1)[0] + line = line.strip(" ") + if line: + statements.append(self.parse_statement(line)) + return Code(statements) + +def numpy_compile(code): + parser = Parser() + return InterpreterState(parser.parse(code)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -108,6 +108,12 @@ def setitem_w(self, space, storage, i, w_item): self.setitem(storage, i, self.unwrap(space, w_item)) + def fill(self, storage, item, start, stop): + storage = self.unerase(storage) + item = self.unbox(item) + for i in xrange(start, stop): + storage[i] = item + @specialize.argtype(1) def adapt_val(self, val): return self.box(rffi.cast(TP.TO.OF, val)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,6 +14,27 @@ any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'j', 'step', 'stop', 'source', 'dest']) +def descr_new_array(space, w_subtype, w_size_or_iterable, w_dtype=None): + l = space.listview(w_size_or_iterable) + if space.is_w(w_dtype, space.w_None): + w_dtype = None + for w_item in l: + w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) + if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + break + if w_dtype is None: + w_dtype = space.w_None + + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) + ) + arr = SingleDimArray(len(l), dtype=dtype) + i = 0 + for w_elem in l: + dtype.setitem_w(space, arr.storage, i, w_elem) + i += 1 + return arr + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature"] @@ -32,27 +53,6 @@ def add_invalidates(self, other): self.invalidates.append(other) - def descr__new__(space, w_subtype, w_size_or_iterable, w_dtype=None): - l = space.listview(w_size_or_iterable) - if space.is_w(w_dtype, space.w_None): - w_dtype = None - for w_item in l: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): - break - if w_dtype is None: - w_dtype = space.w_None - - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = SingleDimArray(len(l), dtype=dtype) - i = 0 - for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) - i += 1 - return arr - def _unaryop_impl(ufunc_name): def impl(self, space): return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) @@ -201,6 +201,9 @@ def descr_get_shape(self, space): return space.newtuple([self.descr_len(space)]) + def descr_get_size(self, space): + return space.wrap(self.find_size()) + def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) @@ -565,13 +568,12 @@ arr = SingleDimArray(size, dtype=dtype) one = dtype.adapt_val(1) - for i in xrange(size): - arr.dtype.setitem(arr.storage, i, one) + arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) BaseArray.typedef = TypeDef( 'numarray', - __new__ = interp2app(BaseArray.descr__new__.im_func), + __new__ = interp2app(descr_new_array), __len__ = interp2app(BaseArray.descr_len), @@ -608,6 +610,7 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape), + size = GetSetProperty(BaseArray.descr_get_size), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -32,11 +32,17 @@ return self.identity.wrap(space) def descr_call(self, space, __args__): - try: - args_w = __args__.fixedunpack(self.argcount) - except ValueError, e: - raise OperationError(space.w_TypeError, space.wrap(str(e))) - return self.call(space, args_w) + if __args__.keywords or len(__args__.arguments_w) < self.argcount: + raise OperationError(space.w_ValueError, + space.wrap("invalid number of arguments") + ) + elif len(__args__.arguments_w) > self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar @@ -236,22 +242,20 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - w_type = space.type(w_obj) - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) long_dtype = space.fromcache(interp_dtype.W_LongDtype) int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - if space.is_w(w_type, space.w_bool): + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype return current_guess - elif space.is_w(w_type, space.w_int): + elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype): return long_dtype return current_guess - elif space.is_w(w_type, space.w_long): + elif space.isinstance_w(w_obj, space.w_long): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_compile.py @@ -0,0 +1,170 @@ + +import py +from pypy.module.micronumpy.compile import * + +class TestCompiler(object): + def compile(self, code): + return numpy_compile(code) + + def test_vars(self): + code = """ + a = 2 + b = 3 + """ + interp = self.compile(code) + assert isinstance(interp.code.statements[0], Assignment) + assert interp.code.statements[0].name == 'a' + assert interp.code.statements[0].expr.v == 2 + assert interp.code.statements[1].name == 'b' + assert interp.code.statements[1].expr.v == 3 + + def test_array_literal(self): + code = "a = [1,2,3]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [FloatConstant(1), FloatConstant(2), + FloatConstant(3)] + + def test_array_literal2(self): + code = "a = [[1],[2],[3]]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [ArrayConstant([FloatConstant(1)]), + ArrayConstant([FloatConstant(2)]), + ArrayConstant([FloatConstant(3)])] + + def test_expr_1(self): + code = "b = a + 1" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Variable("a"), "+", FloatConstant(1))) + + def test_expr_2(self): + code = "b = a + b - 3" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Operator(Variable("a"), "+", Variable("b")), "-", + FloatConstant(3))) + + def test_expr_3(self): + # an equivalent of range + code = "a = |20|" + interp = self.compile(code) + assert interp.code.statements[0].expr == RangeConstant(20) + + def test_expr_only(self): + code = "3 + a" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(FloatConstant(3), "+", Variable("a"))) + + def test_array_access(self): + code = "a -> 3" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(Variable("a"), "->", FloatConstant(3))) + + def test_function_call(self): + code = "sum(a)" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + FunctionCall("sum", [Variable("a")])) + + def test_comment(self): + code = """ + # some comment + a = b + 3 # another comment + """ + interp = self.compile(code) + assert interp.code.statements[0] == Assignment( + 'a', Operator(Variable('b'), "+", FloatConstant(3))) + +class TestRunner(object): + def run(self, code): + interp = numpy_compile(code) + space = FakeSpace() + interp.run(space) + return interp + + def test_one(self): + code = """ + a = 3 + b = 4 + a + b + """ + interp = self.run(code) + assert sorted(interp.variables.keys()) == ['a', 'b'] + assert interp.results[0] + + def test_array_add(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b + """ + interp = self.run(code) + assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + + def test_array_getitem(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 3 + 6 + + def test_range_getitem(self): + code = """ + r = |20| + 3 + r -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 6 + + def test_sum(self): + code = """ + a = [1,2,3,4,5] + r = sum(a) + r + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_array_write(self): + code = """ + a = [1,2,3,4,5] + a[3] = 15 + a -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_min(self): + interp = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert interp.results[0].value.val == -24 + + def test_max(self): + interp = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert interp.results[0].value.val == 256 + + def test_slice(self): + py.test.skip("in progress") + interp = self.run(""" + a = [1,2,3,4] + b = a -> : + b -> 3 + """) + assert interp.results[0].value.val == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -36,37 +36,40 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + import numpy - a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + a = numpy.array([0, 1, 2, 2.5], dtype='?') + assert a[0] is numpy.False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is numpy.True_ def test_copy_array_with_dtype(self): - from numpy import array - a = array([0, 1, 2, 3], dtype=long) + import numpy + + a = numpy.array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + a = numpy.array([0, 1, 2, 3], dtype=bool) + assert a[0] is numpy.False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is numpy.False_ def test_zeros_bool(self): - from numpy import zeros - a = zeros(10, dtype=bool) + import numpy + + a = numpy.zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is numpy.False_ def test_ones_bool(self): - from numpy import ones - a = ones(10, dtype=bool) + import numpy + + a = numpy.ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is numpy.True_ def test_zeros_long(self): from numpy import zeros @@ -77,7 +80,7 @@ def test_ones_long(self): from numpy import ones - a = ones(10, dtype=bool) + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 @@ -96,8 +99,9 @@ def test_bool_binop_types(self): from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -17,6 +17,14 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + a = array([1, 2, 3]) + assert a.size == 3 + assert (a + a).size == 3 + def test_empty(self): """ Test that empty() works. @@ -214,7 +222,7 @@ def test_add_other(self): from numpy import array a = array(range(5)) - b = array(reversed(range(5))) + b = array(range(4, -1, -1)) c = a + b for i in range(5): assert c[i] == 4 @@ -264,18 +272,19 @@ assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpy + + a = numpy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpy.dtype(bool) + assert b[0] is numpy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpy.True_ def test_mul_constant(self): from numpy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -24,10 +24,10 @@ def test_wrong_arguments(self): from numpy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): from numpy import negative, sign, minimum @@ -82,6 +82,8 @@ b = negative(a) a[0] = 5.0 assert b[0] == 5.0 + a = array(range(30)) + assert negative(a + a)[3] == -6 def test_abs(self): from numpy import array, absolute @@ -355,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,253 +1,195 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature -from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject) -from pypy.module.micronumpy.interp_dtype import W_Int32Dtype, W_Float64Dtype, W_Int64Dtype, W_UInt64Dtype -from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, - SingleDimSlice, scalar_w) +from pypy.module.micronumpy.compile import (FakeSpace, + FloatObject, IntObject, numpy_compile, BoolObject) +from pypy.module.micronumpy.interp_numarray import (SingleDimArray, + SingleDimSlice) from pypy.rlib.nonconst import NonConstant -from pypy.rpython.annlowlevel import llstr -from pypy.rpython.test.test_llinterp import interpret +from pypy.rpython.annlowlevel import llstr, hlstr +from pypy.jit.metainterp.warmspot import reset_stats +from pypy.jit.metainterp import pyjitpl import py class TestNumpyJIt(LLJitMixin): - def setup_class(cls): - cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - cls.int64_dtype = cls.space.fromcache(W_Int64Dtype) - cls.uint64_dtype = cls.space.fromcache(W_UInt64Dtype) - cls.int32_dtype = cls.space.fromcache(W_Int32Dtype) + graph = None + interp = None + + def run(self, code): + space = FakeSpace() + + def f(code): + interp = numpy_compile(hlstr(code)) + interp.run(space) + res = interp.results[-1] + w_res = res.eval(0).wrap(interp.space) + if isinstance(w_res, BoolObject): + return float(w_res.boolval) + elif isinstance(w_res, FloatObject): + return w_res.floatval + elif isinstance(w_res, IntObject): + return w_res.intval + else: + return -42. + + if self.graph is None: + interp, graph = self.meta_interp(f, [llstr(code)], + listops=True, + backendopt=True, + graph_and_interp_only=True) + self.__class__.interp = interp + self.__class__.graph = graph + + reset_stats() + pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() + return self.interp.eval_graph(self.graph, [llstr(code)]) def test_add(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ar, ar]) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + b -> 3 + """) self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) - assert result == f(5) + assert result == 3 + 3 def test_floatadd(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ - ar, - scalar_w(self.space, self.float64_dtype, self.space.wrap(4.5)) - ], - ) - assert isinstance(v, BaseArray) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + 3 + a -> 3 + """) + assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_sum(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + sum(b) + """) + assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_prod(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_prod(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + prod(b) + """) + expected = 1 + for i in range(30): + expected *= i * 2 + assert result == expected self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_max(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_max(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert result == 256 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_gt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, - "guard_false": 1, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_min(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_min(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert result == -24 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_argmin(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - return ar.descr_add(space, ar).descr_argmin(space).intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_all(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(1.0)) - j += 1 - return ar.descr_add(space, ar).descr_all(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, - "int_lt": 1, "guard_true": 2, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_any(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - return ar.descr_add(space, ar).descr_any(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = [0,0,0,0,0,0,0,0,0,0,0] + a[8] = -12 + b = a + a + any(b) + """) + assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, "guard_false": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) + "float_ne": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1, + "guard_false": 1}) def test_already_forced(self): - space = self.space - - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - assert isinstance(v1, BaseArray) - v2 = interp_ufuncs.get(self.space).multiply.call(space, [v1, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - v1.force_if_needed() - assert isinstance(v2, BaseArray) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + 4.5 + b -> 5 # forces + c = b * 8 + c -> 5 + """) + assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - assert result == f(5) def test_ufunc(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + """) + assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - assert result == f(5) - def test_appropriate_specialization(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - for i in xrange(5): - v1 = interp_ufuncs.get(self.space).multiply.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - self.meta_interp(f, [5], listops=True, backendopt=True) + def test_specialization(self): + self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + """) # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) + +class TestNumpyOld(LLJitMixin): + def setup_class(cls): + from pypy.module.micronumpy.compile import FakeSpace + from pypy.module.micronumpy.interp_dtype import W_Float64Dtype + + cls.space = FakeSpace() + cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) + def test_slice(self): def f(i): step = 3 @@ -332,17 +274,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) -class TestTranslation(object): - def test_compile(self): - x = numpy_compile('aa+f*f/a-', 10) - x = x.compute() - assert isinstance(x, SingleDimArray) - assert x.size == 10 - assert x.eval(0).val == 0 - assert x.eval(1).val == ((1 + 1) * 1.2) / 1.2 - 1 - - def test_translation(self): - # we import main to check if the target compiles - from pypy.translator.goal.targetnumpystandalone import main - - interpret(main, [llstr('af+'), 100]) diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -16,7 +16,8 @@ if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', - '__pypy__', 'cStringIO', '_collections', 'struct']: + '__pypy__', 'cStringIO', '_collections', 'struct', + 'mmap']: return True return False diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -465,3 +465,25 @@ setfield_gc(p4, p22, descr=) jump(p0, p1, p2, p3, p4, p7, p22, p7, descr=) """) + + def test_kwargs_virtual(self): + def main(n): + def g(**kwargs): + return kwargs["x"] + 1 + + i = 0 + while i < n: + i = g(x=i) + return i + + log = self.run(main, [500]) + assert log.result == 500 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i2 = int_lt(i0, i1) + guard_true(i2, descr=...) + i3 = force_token() + i4 = int_add(i0, 1) + --TICK-- + jump(..., descr=...) + """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -44,7 +44,7 @@ # gc_id call is hoisted out of the loop, the id of a value obviously # can't change ;) assert loop.match_by_id("getitem", """ - i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_objectPtr_Signed), p18, p6, i25, descr=...) + i26 = call(ConstClass(ll_dict_lookup), p18, p6, i25, descr=...) ... p33 = getinteriorfield_gc(p31, i26, descr=>) ... @@ -69,4 +69,51 @@ i9 = int_add(i5, 1) --TICK-- jump(..., descr=...) + """) + + def test_non_virtual_dict(self): + def main(n): + i = 0 + while i < n: + d = {str(i): i} + i += d[str(i)] - i + 1 + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i8 = int_lt(i5, i7) + guard_true(i8, descr=...) + guard_not_invalidated(descr=...) + p10 = call(ConstClass(ll_int_str), i5, descr=) + guard_no_exception(descr=...) + i12 = call(ConstClass(ll_strhash), p10, descr=) + p13 = new(descr=...) + p15 = new_array(8, descr=) + setfield_gc(p13, p15, descr=) + i17 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + setfield_gc(p13, 16, descr=) + guard_no_exception(descr=...) + p20 = new_with_vtable(ConstClass(W_IntObject)) + call(ConstClass(_ll_dict_setitem_lookup_done_trampoline), p13, p10, p20, i12, i17, descr=) + setfield_gc(p20, i5, descr=) + guard_no_exception(descr=...) + i23 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + guard_no_exception(descr=...) + i26 = int_and(i23, .*) + i27 = int_is_true(i26) + guard_false(i27, descr=...) + p28 = getfield_gc(p13, descr=) + p29 = getinteriorfield_gc(p28, i23, descr=>) + guard_nonnull_class(p29, ConstClass(W_IntObject), descr=...) + i31 = getfield_gc_pure(p29, descr=) + i32 = int_sub_ovf(i31, i5) + guard_no_overflow(descr=...) + i34 = int_add_ovf(i32, 1) + guard_no_overflow(descr=...) + i35 = int_add_ovf(i5, i34) + guard_no_overflow(descr=...) + --TICK-- + jump(p0, p1, p2, p3, p4, i35, p13, i7, descr=) """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -245,6 +245,9 @@ if sys.platform != 'win32': @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) pytime.sleep(secs) else: from pypy.rlib import rwin32 @@ -265,6 +268,9 @@ OSError(EINTR, "sleep() interrupted")) @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) # as decreed by Guido, only the main thread can be # interrupted. main_thread = space.fromcache(State).main_thread diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -20,8 +20,9 @@ import sys import os raises(TypeError, rctime.sleep, "foo") - rctime.sleep(1.2345) - + rctime.sleep(0.12345) + raises(IOError, rctime.sleep, -1.0) + def test_clock(self): import time as rctime rctime.clock() diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,23 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) - length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 - return start, stop, length - def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) + bytearray = w_bytearray.data + length = len(bytearray) + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -546,6 +546,12 @@ # Try to return int. return space.newtuple([space.int(w_num), space.int(w_den)]) +def float_is_integer__Float(space, w_float): + v = w_float.floatval + if not rfloat.isfinite(v): + return space.w_False + return space.wrap(math.floor(v) == v) + from pypy.objspace.std import floattype register_all(vars(), floattype) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -12,6 +12,7 @@ float_as_integer_ratio = SMM("as_integer_ratio", 1) +float_is_integer = SMM("is_integer", 1) float_hex = SMM("hex", 1) def descr_conjugate(space, w_float): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + pass + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] @@ -245,7 +248,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + pass + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + pass + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): @@ -54,7 +57,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: @@ -414,8 +422,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + pass + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -69,19 +69,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -89,7 +81,6 @@ from pypy.objspace.std import iterobject from pypy.objspace.std import unicodeobject from pypy.objspace.std import dictproxyobject - from pypy.objspace.std import rangeobject from pypy.objspace.std import proxyobject from pypy.objspace.std import fake import pypy.objspace.std.default # register a few catch-all multimethods @@ -141,7 +132,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -167,6 +163,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -189,6 +186,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -220,7 +218,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -230,6 +230,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -237,6 +238,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -244,6 +246,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -251,11 +254,13 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withrangelist: + from pypy.objspace.std import rangeobject self.typeorder[rangeobject.W_RangeListObject] += [ (listobject.W_ListObject, rangeobject.delegate_range2list), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,11 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: - # W_Root, AnyXxx and actual object - self.gettypefor(type).interplevel_cls = classes[0][0] - + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -413,7 +409,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -421,7 +417,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): @@ -579,7 +576,7 @@ raise OperationError(self.w_TypeError, self.wrap("need type object")) if is_annotation_constant(w_type): - cls = w_type.interplevel_cls + cls = self._get_interplevel_cls(w_type) if cls is not None: assert w_inst is not None if isinstance(w_inst, cls): @@ -589,3 +586,66 @@ @specialize.arg_or_var(2) def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + + @specialize.memo() + def _get_interplevel_cls(self, w_type): + if not hasattr(self, "_interplevel_classes"): + return None # before running initialize + return self._interplevel_classes.get(w_type, None) diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,14 +6,15 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint from pypy.rlib.rarithmetic import r_uint from pypy.tool.sourcetools import func_with_new_name +from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef @@ -48,14 +49,36 @@ def delegate_SmallInt2Complex(space, w_small): return space.newcomplex(float(w_small.intval), 0.0) +def add__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval + w_b.intval) # cannot overflow + +def sub__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval - w_b.intval) # cannot overflow + +def floordiv__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval // w_b.intval) # cannot overflow + +div__SmallInt_SmallInt = floordiv__SmallInt_SmallInt + +def mod__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval % w_b.intval) # cannot overflow + +def divmod__SmallInt_SmallInt(space, w_a, w_b): + w = wrapint(space, w_a.intval // w_b.intval) # cannot overflow + z = wrapint(space, w_a.intval % w_b.intval) + return space.newtuple([w, z]) + def copy_multimethods(ns): """Copy integer multimethods for small int.""" for name, func in intobject.__dict__.iteritems(): if "__Int" in name: new_name = name.replace("Int", "SmallInt") - # Copy the function, so the annotator specializes it for - # W_SmallIntObject. - ns[new_name] = func_with_new_name(func, new_name) + if new_name not in ns: + # Copy the function, so the annotator specializes it for + # W_SmallIntObject. + ns[new_name] = func = func_with_new_name(func, new_name, globals=ns) + else: + ns[name] = func ns["get_integer"] = ns["pos__SmallInt"] = ns["int__SmallInt"] ns["get_negint"] = ns["neg__SmallInt"] diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + pass + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] @@ -47,6 +50,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +60,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +423,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +438,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +478,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +488,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +631,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +653,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): @@ -60,8 +61,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_floatobject.py b/pypy/objspace/std/test/test_floatobject.py --- a/pypy/objspace/std/test/test_floatobject.py +++ b/pypy/objspace/std/test/test_floatobject.py @@ -63,6 +63,12 @@ def setup_class(cls): cls.w_py26 = cls.space.wrap(sys.version_info >= (2, 6)) + def test_isinteger(self): + assert (1.).is_integer() + assert not (1.1).is_integer() + assert not float("inf").is_integer() + assert not float("nan").is_integer() + def test_conjugate(self): assert (1.).conjugate() == 1. assert (-1.).conjugate() == -1. @@ -782,4 +788,4 @@ # divide by 0 raises(ZeroDivisionError, lambda: inf % 0) raises(ZeroDivisionError, lambda: inf // 0) - raises(ZeroDivisionError, divmod, inf, 0) \ No newline at end of file + raises(ZeroDivisionError, divmod, inf, 0) diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' @@ -801,6 +815,20 @@ l.__delslice__(0, 2) assert l == [3, 4] + def test_list_from_set(self): + l = ['a'] + l.__init__(set('b')) + assert l == ['b'] + + def test_list_from_generator(self): + l = ['a'] + g = (i*i for i in range(5)) + l.__init__(g) + assert l == [0, 1, 4, 9, 16] + l.__init__(g) + assert l == [] + assert list(g) == [] + class AppTestListFastSubscr: diff --git a/pypy/objspace/std/test/test_obj.py b/pypy/objspace/std/test/test_obj.py --- a/pypy/objspace/std/test/test_obj.py +++ b/pypy/objspace/std/test/test_obj.py @@ -102,3 +102,11 @@ def __repr__(self): return 123456 assert A().__str__() == 123456 + +def test_isinstance_shortcut(): + from pypy.objspace.std import objspace + space = objspace.StdObjSpace() + w_a = space.wrap("a") + space.type = None + space.isinstance_w(w_a, space.w_str) # does not crash + diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -42,6 +42,23 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -14,11 +15,11 @@ def test_int_w_non_int(self): raises(OperationError,self.space.int_w,self.space.wrap(None)) - raises(OperationError,self.space.int_w,self.space.wrap("")) + raises(OperationError,self.space.int_w,self.space.wrap("")) def test_uint_w_non_int(self): raises(OperationError,self.space.uint_w,self.space.wrap(None)) - raises(OperationError,self.space.uint_w,self.space.wrap("")) + raises(OperationError,self.space.uint_w,self.space.wrap("")) def test_multimethods_defined_on(self): from pypy.objspace.std.stdtypedef import multimethods_defined_on @@ -49,14 +50,27 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject - + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject + space = self.space - assert space.w_str.interplevel_cls is W_StringObject - assert space.w_int.interplevel_cls is W_IntObject + assert space._get_interplevel_cls(space.w_str) is W_StringObject + assert space._get_interplevel_cls(space.w_int) is W_IntObject class X(W_StringObject): def __init__(self): pass - + typedef = None assert space.isinstance_w(X(), space.w_str) + + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_AbstractStringObject + + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + pass + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] @@ -108,15 +111,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -127,7 +125,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -172,17 +170,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -102,7 +102,6 @@ 'instancetypedef', 'terminator', '_version_tag?', - 'interplevel_cls', ] # for config.objspace.std.getattributeshortcut @@ -117,9 +116,6 @@ # of the __new__ is an instance of the type w_bltin_new = None - interplevel_cls = None # not None for prebuilt instances of - # interpreter-level types - @dont_look_inside def __init__(w_self, space, name, bases_w, dict_w, overridetypedef=None): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + pass + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] @@ -475,42 +478,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +509,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +615,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) diff --git a/pypy/pytest.ini b/pypy/pytest.ini --- a/pypy/pytest.ini +++ b/pypy/pytest.ini @@ -1,2 +1,2 @@ [pytest] -addopts = --assertmode=old \ No newline at end of file +addopts = --assertmode=old -rf diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. From noreply at buildbot.pypy.org Mon Nov 14 10:45:42 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 10:45:42 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: make sure arm backend tests are only executed when running on ARM Message-ID: <20111114094542.C4749820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49389:f6952466347e Date: 2011-11-14 10:44 +0100 http://bitbucket.org/pypy/pypy/changeset/f6952466347e/ Log: make sure arm backend tests are only executed when running on ARM diff --git a/pypy/jit/backend/arm/test/test_arch.py b/pypy/jit/backend/arm/test/test_arch.py --- a/pypy/jit/backend/arm/test/test_arch.py +++ b/pypy/jit/backend/arm/test/test_arch.py @@ -1,4 +1,6 @@ from pypy.jit.backend.arm import arch +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() def test_mod(): assert arch.arm_int_mod(10, 2) == 0 diff --git a/pypy/jit/backend/arm/test/test_calling_convention.py b/pypy/jit/backend/arm/test/test_calling_convention.py --- a/pypy/jit/backend/arm/test/test_calling_convention.py +++ b/pypy/jit/backend/arm/test/test_calling_convention.py @@ -3,6 +3,8 @@ from pypy.jit.backend.test.calling_convention_test import TestCallingConv, parse from pypy.rpython.lltypesystem import lltype from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() # ../../test/calling_convention_test.py class TestARMCallingConvention(TestCallingConv): diff --git a/pypy/jit/backend/arm/test/test_gc_integration.py b/pypy/jit/backend/arm/test/test_gc_integration.py --- a/pypy/jit/backend/arm/test/test_gc_integration.py +++ b/pypy/jit/backend/arm/test/test_gc_integration.py @@ -23,6 +23,8 @@ from pypy.jit.backend.arm.regalloc import ARMv7RegisterMananger, ARMFrameManager,\ VFPRegisterManager from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() CPU = getcpuclass() diff --git a/pypy/jit/backend/arm/test/test_generated.py b/pypy/jit/backend/arm/test/test_generated.py --- a/pypy/jit/backend/arm/test/test_generated.py +++ b/pypy/jit/backend/arm/test/test_generated.py @@ -10,6 +10,8 @@ from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.rpython.test.test_llinterp import interpret from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() CPU = getcpuclass() class TestStuff(object): diff --git a/pypy/jit/backend/arm/test/test_helper.py b/pypy/jit/backend/arm/test/test_helper.py --- a/pypy/jit/backend/arm/test/test_helper.py +++ b/pypy/jit/backend/arm/test/test_helper.py @@ -3,6 +3,8 @@ decode64, encode64 from pypy.jit.metainterp.history import (BoxInt, BoxPtr, BoxFloat, INT, REF, FLOAT) +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() def test_count_reg_args(): assert count_reg_args([BoxPtr()]) == 1 diff --git a/pypy/jit/backend/arm/test/test_instr_codebuilder.py b/pypy/jit/backend/arm/test/test_instr_codebuilder.py --- a/pypy/jit/backend/arm/test/test_instr_codebuilder.py +++ b/pypy/jit/backend/arm/test/test_instr_codebuilder.py @@ -5,6 +5,8 @@ from pypy.jit.backend.arm.test.support import (requires_arm_as, define_test, gen_test_function) from gen import assemble import py +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() requires_arm_as() diff --git a/pypy/jit/backend/arm/test/test_jump.py b/pypy/jit/backend/arm/test/test_jump.py --- a/pypy/jit/backend/arm/test/test_jump.py +++ b/pypy/jit/backend/arm/test/test_jump.py @@ -6,6 +6,8 @@ from pypy.jit.backend.arm.regalloc import ARMFrameManager from pypy.jit.backend.arm.jump import remap_frame_layout, remap_frame_layout_mixed from pypy.jit.metainterp.history import INT +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() frame_pos = ARMFrameManager.frame_pos diff --git a/pypy/jit/backend/arm/test/test_list.py b/pypy/jit/backend/arm/test/test_list.py --- a/pypy/jit/backend/arm/test/test_list.py +++ b/pypy/jit/backend/arm/test/test_list.py @@ -1,6 +1,8 @@ from pypy.jit.metainterp.test.test_list import ListTests from pypy.jit.backend.arm.test.support import JitARMMixin +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class TestList(JitARMMixin, ListTests): # for individual tests see diff --git a/pypy/jit/backend/arm/test/test_loop_unroll.py b/pypy/jit/backend/arm/test/test_loop_unroll.py --- a/pypy/jit/backend/arm/test/test_loop_unroll.py +++ b/pypy/jit/backend/arm/test/test_loop_unroll.py @@ -1,6 +1,8 @@ import py from pypy.jit.backend.x86.test.test_basic import Jit386Mixin from pypy.jit.metainterp.test import test_loop_unroll +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class TestLoopSpec(Jit386Mixin, test_loop_unroll.LoopUnrollTest): # for the individual tests see diff --git a/pypy/jit/backend/arm/test/test_recompilation.py b/pypy/jit/backend/arm/test/test_recompilation.py --- a/pypy/jit/backend/arm/test/test_recompilation.py +++ b/pypy/jit/backend/arm/test/test_recompilation.py @@ -1,4 +1,6 @@ from pypy.jit.backend.arm.test.test_regalloc import BaseTestRegalloc +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class TestRecompilation(BaseTestRegalloc): def test_compile_bridge_not_deeper(self): diff --git a/pypy/jit/backend/arm/test/test_recursive.py b/pypy/jit/backend/arm/test/test_recursive.py --- a/pypy/jit/backend/arm/test/test_recursive.py +++ b/pypy/jit/backend/arm/test/test_recursive.py @@ -1,6 +1,8 @@ from pypy.jit.metainterp.test.test_recursive import RecursiveTests from pypy.jit.backend.arm.test.support import JitARMMixin +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class TestRecursive(JitARMMixin, RecursiveTests): # for the individual tests see diff --git a/pypy/jit/backend/arm/test/test_regalloc.py b/pypy/jit/backend/arm/test/test_regalloc.py --- a/pypy/jit/backend/arm/test/test_regalloc.py +++ b/pypy/jit/backend/arm/test/test_regalloc.py @@ -14,6 +14,8 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rpython.lltypesystem import rclass, rstr from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() CPU = getcpuclass() diff --git a/pypy/jit/backend/arm/test/test_regalloc2.py b/pypy/jit/backend/arm/test/test_regalloc2.py --- a/pypy/jit/backend/arm/test/test_regalloc2.py +++ b/pypy/jit/backend/arm/test/test_regalloc2.py @@ -4,6 +4,8 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.arm.arch import WORD +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() CPU = getcpuclass() def test_bug_rshift(): diff --git a/pypy/jit/backend/arm/test/test_regalloc_mov.py b/pypy/jit/backend/arm/test/test_regalloc_mov.py --- a/pypy/jit/backend/arm/test/test_regalloc_mov.py +++ b/pypy/jit/backend/arm/test/test_regalloc_mov.py @@ -7,6 +7,9 @@ from pypy.jit.backend.arm.conditions import AL from pypy.jit.metainterp.history import INT, FLOAT, REF import py +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() + class MockInstr(object): def __init__(self, name, *args, **kwargs): self.name = name diff --git a/pypy/jit/backend/arm/test/test_string.py b/pypy/jit/backend/arm/test/test_string.py --- a/pypy/jit/backend/arm/test/test_string.py +++ b/pypy/jit/backend/arm/test/test_string.py @@ -1,6 +1,8 @@ import py from pypy.jit.metainterp.test import test_string from pypy.jit.backend.arm.test.support import JitARMMixin +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class TestString(JitARMMixin, test_string.TestLLtype): # for the individual tests see diff --git a/pypy/jit/backend/arm/test/test_trace_operations.py b/pypy/jit/backend/arm/test/test_trace_operations.py --- a/pypy/jit/backend/arm/test/test_trace_operations.py +++ b/pypy/jit/backend/arm/test/test_trace_operations.py @@ -1,3 +1,6 @@ +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() + from pypy.jit.backend.x86.test.test_regalloc import BaseTestRegalloc from pypy.jit.backend.detect_cpu import getcpuclass from pypy.rpython.lltypesystem import lltype, llmemory diff --git a/pypy/jit/backend/arm/test/test_zll_random.py b/pypy/jit/backend/arm/test/test_zll_random.py --- a/pypy/jit/backend/arm/test/test_zll_random.py +++ b/pypy/jit/backend/arm/test/test_zll_random.py @@ -4,6 +4,8 @@ from pypy.jit.backend.test.test_ll_random import LLtypeOperationBuilder from pypy.jit.backend.test.test_random import check_random_function, Random from pypy.jit.metainterp.resoperation import rop +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() CPU = getcpuclass() diff --git a/pypy/jit/backend/arm/test/test_zrpy_gc.py b/pypy/jit/backend/arm/test/test_zrpy_gc.py --- a/pypy/jit/backend/arm/test/test_zrpy_gc.py +++ b/pypy/jit/backend/arm/test/test_zrpy_gc.py @@ -14,6 +14,8 @@ from pypy.jit.backend.llsupport.gc import GcLLDescr_framework from pypy.tool.udir import udir from pypy.config.translationoption import DEFL_GC +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class X(object): def __init__(self, x=0): diff --git a/pypy/jit/backend/arm/test/test_ztranslate_backend.py b/pypy/jit/backend/arm/test/test_ztranslate_backend.py --- a/pypy/jit/backend/arm/test/test_ztranslate_backend.py +++ b/pypy/jit/backend/arm/test/test_ztranslate_backend.py @@ -13,6 +13,8 @@ from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.arm.runner import ArmCPU from pypy.tool.udir import udir +from pypy.jit.backend.arm.test.support import skip_unless_arm +skip_unless_arm() class FakeStats(object): pass From noreply at buildbot.pypy.org Mon Nov 14 10:45:45 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 10:45:45 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge upstream Message-ID: <20111114094545.A9A78820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49390:3e3ed29ed104 Date: 2011-11-14 10:45 +0100 http://bitbucket.org/pypy/pypy/changeset/3e3ed29ed104/ Log: merge upstream diff too long, truncating to 10000 out of 13473 lines diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py --- a/lib-python/modified-2.7/urllib2.py +++ b/lib-python/modified-2.7/urllib2.py @@ -395,11 +395,7 @@ meth_name = protocol+"_response" for processor in self.process_response.get(protocol, []): meth = getattr(processor, meth_name) - try: - response = meth(req, response) - except: - response.close() - raise + response = meth(req, response) return response diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,6 +17,12 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + Numpy improvements ------------------ diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % self.fielddescr.repr_of_descr() @@ -302,12 +305,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -348,6 +355,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -442,7 +453,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -650,10 +650,13 @@ assert size > 0, 'size should be > 0' type_id = llop.extract_ushort(llgroup.HALFWORD, tid) has_finalizer = bool(tid & (1< (y)), + (rop.UINT_LT, lambda x, y: (x) < (y)), + (rop.UINT_GE, lambda x, y: (x) >= (y)), + ]: + for opguard, guard_case in [ + (rop.GUARD_FALSE, False), + (rop.GUARD_TRUE, True), + ]: + for combinaison in ["bb", "bc", "cb"]: + # + if combinaison[0] == 'b': + ibox1 = BoxInt() + else: + ibox1 = ConstInt(42) + if combinaison[1] == 'b': + ibox2 = BoxInt() + else: + ibox2 = ConstInt(42) + b1 = BoxInt() + faildescr1 = BasicFailDescr(1) + faildescr2 = BasicFailDescr(2) + inputargs = [ib for ib in [ibox1, ibox2] + if isinstance(ib, BoxInt)] + operations = [ + ResOperation(opname, [ibox1, ibox2], b1), + ResOperation(opguard, [b1], None, descr=faildescr1), + ResOperation(rop.FINISH, [], None, descr=faildescr2), + ] + operations[-2].setfailargs([]) + looptoken = LoopToken() + self.cpu.compile_loop(inputargs, operations, looptoken) + # + cpu = self.cpu + for test1 in [65, 42, 11, 0, 1]: + if test1 == 42 or combinaison[0] == 'b': + for test2 in [65, 42, 11, 0, 1]: + if test2 == 42 or combinaison[1] == 'b': + n = 0 + if combinaison[0] == 'b': + cpu.set_future_value_int(n, test1) + n += 1 + if combinaison[1] == 'b': + cpu.set_future_value_int(n, test2) + n += 1 + fail = cpu.execute_token(looptoken) + # + expected = compare(test1, test2) + expected ^= guard_case + assert fail.identifier == 2 - expected + def test_floats_and_guards(self): if not self.cpu.supports_floats: py.test.skip("requires floats") diff --git a/pypy/jit/backend/test/test_ll_random.py b/pypy/jit/backend/test/test_ll_random.py --- a/pypy/jit/backend/test/test_ll_random.py +++ b/pypy/jit/backend/test/test_ll_random.py @@ -28,16 +28,27 @@ fork.structure_types_and_vtables = self.structure_types_and_vtables return fork - def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct): + def _choose_ptr_vars(self, from_, type, array_of_structs): + ptrvars = [] + for i in range(len(from_)): + v, S = from_[i][:2] + if not isinstance(S, type): + continue + if ((isinstance(S, lltype.Array) and + isinstance(S.OF, lltype.Struct)) == array_of_structs): + ptrvars.append((v, S)) + return ptrvars + + def get_structptr_var(self, r, must_have_vtable=False, type=lltype.Struct, + array_of_structs=False): while True: - ptrvars = [(v, S) for (v, S) in self.ptrvars - if isinstance(S, type)] + ptrvars = self._choose_ptr_vars(self.ptrvars, type, + array_of_structs) if ptrvars and r.random() < 0.8: v, S = r.choice(ptrvars) else: - prebuilt_ptr_consts = [(v, S) - for (v, S, _) in self.prebuilt_ptr_consts - if isinstance(S, type)] + prebuilt_ptr_consts = self._choose_ptr_vars( + self.prebuilt_ptr_consts, type, array_of_structs) if prebuilt_ptr_consts and r.random() < 0.7: v, S = r.choice(prebuilt_ptr_consts) else: @@ -48,7 +59,8 @@ has_vtable=must_have_vtable) else: # create a new constant array - p = self.get_random_array(r) + p = self.get_random_array(r, + must_be_array_of_structs=array_of_structs) S = lltype.typeOf(p).TO v = ConstPtr(lltype.cast_opaque_ptr(llmemory.GCREF, p)) self.prebuilt_ptr_consts.append((v, S, @@ -74,7 +86,8 @@ TYPE = lltype.Signed return TYPE - def get_random_structure_type(self, r, with_vtable=None, cache=True): + def get_random_structure_type(self, r, with_vtable=None, cache=True, + type=lltype.GcStruct): if cache and self.structure_types and r.random() < 0.5: return r.choice(self.structure_types) fields = [] @@ -85,7 +98,7 @@ for i in range(r.randrange(1, 5)): TYPE = self.get_random_primitive_type(r) fields.append(('f%d' % i, TYPE)) - S = lltype.GcStruct('S%d' % self.counter, *fields, **kwds) + S = type('S%d' % self.counter, *fields, **kwds) self.counter += 1 if cache: self.structure_types.append(S) @@ -125,17 +138,29 @@ setattr(p, fieldname, rffi.cast(TYPE, r.random_integer())) return p - def get_random_array_type(self, r): - TYPE = self.get_random_primitive_type(r) + def get_random_array_type(self, r, can_be_array_of_struct=False, + must_be_array_of_structs=False): + if ((can_be_array_of_struct and r.random() < 0.1) or + must_be_array_of_structs): + TYPE = self.get_random_structure_type(r, cache=False, + type=lltype.Struct) + else: + TYPE = self.get_random_primitive_type(r) return lltype.GcArray(TYPE) - def get_random_array(self, r): - A = self.get_random_array_type(r) + def get_random_array(self, r, must_be_array_of_structs=False): + A = self.get_random_array_type(r, + must_be_array_of_structs=must_be_array_of_structs) length = (r.random_integer() // 15) % 300 # length: between 0 and 299 # likely to be small p = lltype.malloc(A, length) - for i in range(length): - p[i] = rffi.cast(A.OF, r.random_integer()) + if isinstance(A.OF, lltype.Primitive): + for i in range(length): + p[i] = rffi.cast(A.OF, r.random_integer()) + else: + for i in range(length): + for fname, TP in A.OF._flds.iteritems(): + setattr(p[i], fname, rffi.cast(TP, r.random_integer())) return p def get_index(self, length, r): @@ -155,8 +180,16 @@ dic[fieldname] = getattr(p, fieldname) else: assert isinstance(S, lltype.Array) - for i in range(len(p)): - dic[i] = p[i] + if isinstance(S.OF, lltype.Struct): + for i in range(len(p)): + item = p[i] + s1 = {} + for fieldname in S.OF._names: + s1[fieldname] = getattr(item, fieldname) + dic[i] = s1 + else: + for i in range(len(p)): + dic[i] = p[i] return dic def print_loop_prebuilt(self, names, writevar, s): @@ -220,7 +253,7 @@ class GetFieldOperation(test_random.AbstractOperation): def field_descr(self, builder, r): - v, S = builder.get_structptr_var(r) + v, S = builder.get_structptr_var(r, ) names = S._names if names[0] == 'parent': names = names[1:] @@ -239,6 +272,28 @@ continue break +class GetInteriorFieldOperation(test_random.AbstractOperation): + def field_descr(self, builder, r): + v, A = builder.get_structptr_var(r, type=lltype.Array, + array_of_structs=True) + array = v.getref(lltype.Ptr(A)) + v_index = builder.get_index(len(array), r) + name = r.choice(A.OF._names) + descr = builder.cpu.interiorfielddescrof(A, name) + descr._random_info = 'cpu.interiorfielddescrof(%s, %r)' % (A.OF._name, + name) + TYPE = getattr(A.OF, name) + return v, v_index, descr, TYPE + + def produce_into(self, builder, r): + while True: + try: + v, v_index, descr, _ = self.field_descr(builder, r) + self.put(builder, [v, v_index], descr) + except lltype.UninitializedMemoryAccess: + continue + break + class SetFieldOperation(GetFieldOperation): def produce_into(self, builder, r): v, descr, TYPE = self.field_descr(builder, r) @@ -251,6 +306,18 @@ break builder.do(self.opnum, [v, w], descr) +class SetInteriorFieldOperation(GetInteriorFieldOperation): + def produce_into(self, builder, r): + v, v_index, descr, TYPE = self.field_descr(builder, r) + while True: + if r.random() < 0.3: + w = ConstInt(r.random_integer()) + else: + w = r.choice(builder.intvars) + if rffi.cast(lltype.Signed, rffi.cast(TYPE, w.value)) == w.value: + break + builder.do(self.opnum, [v, v_index, w], descr) + class NewOperation(test_random.AbstractOperation): def size_descr(self, builder, S): descr = builder.cpu.sizeof(S) @@ -306,7 +373,7 @@ class NewArrayOperation(ArrayOperation): def produce_into(self, builder, r): - A = builder.get_random_array_type(r) + A = builder.get_random_array_type(r, can_be_array_of_struct=True) v_size = builder.get_index(300, r) v_ptr = builder.do(self.opnum, [v_size], self.array_descr(builder, A)) builder.ptrvars.append((v_ptr, A)) @@ -586,7 +653,9 @@ for i in range(4): # make more common OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) OPERATIONS.append(GetFieldOperation(rop.GETFIELD_GC)) + OPERATIONS.append(GetInteriorFieldOperation(rop.GETINTERIORFIELD_GC)) OPERATIONS.append(SetFieldOperation(rop.SETFIELD_GC)) + OPERATIONS.append(SetInteriorFieldOperation(rop.SETINTERIORFIELD_GC)) OPERATIONS.append(NewOperation(rop.NEW)) OPERATIONS.append(NewOperation(rop.NEW_WITH_VTABLE)) diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -595,6 +595,10 @@ for name, value in fields.items(): if isinstance(name, str): setattr(container, name, value) + elif isinstance(value, dict): + item = container.getitem(name) + for key1, value1 in value.items(): + setattr(item, key1, value1) else: container.setitem(name, value) diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1276,8 +1276,8 @@ genop_int_ne = _cmpop("NE", "NE") genop_int_gt = _cmpop("G", "L") genop_int_ge = _cmpop("GE", "LE") - genop_ptr_eq = genop_int_eq - genop_ptr_ne = genop_int_ne + genop_ptr_eq = genop_instance_ptr_eq = genop_int_eq + genop_ptr_ne = genop_instance_ptr_ne = genop_int_ne genop_float_lt = _cmpop_float('B', 'A') genop_float_le = _cmpop_float('BE', 'AE') @@ -1297,8 +1297,8 @@ genop_guard_int_ne = _cmpop_guard("NE", "NE", "E", "E") genop_guard_int_gt = _cmpop_guard("G", "L", "LE", "GE") genop_guard_int_ge = _cmpop_guard("GE", "LE", "L", "G") - genop_guard_ptr_eq = genop_guard_int_eq - genop_guard_ptr_ne = genop_guard_int_ne + genop_guard_ptr_eq = genop_guard_instance_ptr_eq = genop_guard_int_eq + genop_guard_ptr_ne = genop_guard_instance_ptr_ne = genop_guard_int_ne genop_guard_uint_gt = _cmpop_guard("A", "B", "BE", "AE") genop_guard_uint_lt = _cmpop_guard("B", "A", "AE", "BE") @@ -1596,11 +1596,27 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, + base_loc, ofs_loc): + assert isinstance(itemsize_loc, ImmedLoc) + if isinstance(index_loc, ImmedLoc): + temp_loc = imm(index_loc.value * itemsize_loc.value) + else: + # XXX should not use IMUL in most cases + assert isinstance(temp_loc, RegLoc) + assert isinstance(index_loc, RegLoc) + assert not temp_loc.is_xmm + self.mc.IMUL_rri(temp_loc.value, index_loc.value, + itemsize_loc.value) + assert isinstance(ofs_loc, ImmedLoc) + return AddressLoc(base_loc, temp_loc, 0, ofs_loc.value) + def genop_getinteriorfield_gc(self, op, arglocs, resloc): - base_loc, ofs_loc, itemsize_loc, fieldsize_loc, index_loc, sign_loc = arglocs - # XXX should not use IMUL in most cases - self.mc.IMUL(index_loc, itemsize_loc) - src_addr = AddressLoc(base_loc, index_loc, 0, ofs_loc.value) + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) @@ -1611,10 +1627,11 @@ self.save_into_mem(dest_addr, value_loc, size_loc) def genop_discard_setinteriorfield_gc(self, op, arglocs): - base_loc, ofs_loc, itemsize_loc, fieldsize_loc, index_loc, value_loc = arglocs - # XXX should not use IMUL in most cases - self.mc.IMUL(index_loc, itemsize_loc) - dest_addr = AddressLoc(base_loc, index_loc, 0, ofs_loc.value) + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + dest_addr = self._get_interiorfield_addr(temp_loc, index_loc, + itemsize_loc, base_loc, + ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) def genop_discard_setarrayitem_gc(self, op, arglocs): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -601,8 +601,8 @@ consider_uint_lt = _consider_compop consider_uint_le = _consider_compop consider_uint_ge = _consider_compop - consider_ptr_eq = _consider_compop - consider_ptr_ne = _consider_compop + consider_ptr_eq = consider_instance_ptr_eq = _consider_compop + consider_ptr_ne = consider_instance_ptr_ne = _consider_compop def _consider_float_op(self, op): loc1 = self.xrm.loc(op.getarg(1)) @@ -992,16 +992,30 @@ t = self._unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, _ = t args = op.getarglist() - tmpvar = TempBox() - base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) - index_loc = self.rm.force_result_in_reg(tmpvar, op.getarg(1), - args) - # we're free to modify index now - value_loc = self.make_sure_var_in_reg(op.getarg(2), args) - self.possibly_free_vars(args) - self.rm.possibly_free_var(tmpvar) + if fieldsize.value == 1: + need_lower_byte = True + else: + need_lower_byte = False + box_base, box_index, box_value = args + base_loc = self.rm.make_sure_var_in_reg(box_base, args) + index_loc = self.rm.make_sure_var_in_reg(box_index, args) + value_loc = self.make_sure_var_in_reg(box_value, args, + need_lower_byte=need_lower_byte) + # If 'index_loc' is not an immediate, then we need a 'temp_loc' that + # is a register whose value will be destroyed. It's fine to destroy + # the same register as 'index_loc', but not the other ones. + self.rm.possibly_free_var(box_index) + if not isinstance(index_loc, ImmedLoc): + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [box_base, + box_value]) + self.rm.possibly_free_var(tempvar) + else: + temp_loc = None + self.rm.possibly_free_var(box_base) + self.possibly_free_var(box_value) self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, - index_loc, value_loc]) + index_loc, temp_loc, value_loc]) def consider_strsetitem(self, op): args = op.getarglist() @@ -1072,15 +1086,27 @@ else: sign_loc = imm0 args = op.getarglist() - tmpvar = TempBox() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) - index_loc = self.rm.force_result_in_reg(tmpvar, op.getarg(1), - args) - self.rm.possibly_free_vars_for_op(op) - self.rm.possibly_free_var(tmpvar) - result_loc = self.force_allocate_reg(op.result) + index_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) + # 'base' and 'index' are put in two registers (or one if 'index' + # is an immediate). 'result' can be in the same register as + # 'index' but must be in a different register than 'base'. + self.rm.possibly_free_var(op.getarg(1)) + result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + assert isinstance(result_loc, RegLoc) + # two cases: 1) if result_loc is a normal register, use it as temp_loc + if not result_loc.is_xmm: + temp_loc = result_loc + else: + # 2) if result_loc is an xmm register, we (likely) need another + # temp_loc that is a normal register. It can be in the same + # register as 'index' but not 'base'. + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [op.getarg(0)]) + self.rm.possibly_free_var(tempvar) + self.rm.possibly_free_var(op.getarg(0)) self.Perform(op, [base_loc, ofs, itemsize, fieldsize, - index_loc, sign_loc], result_loc) + index_loc, temp_loc, sign_loc], result_loc) def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -78,6 +78,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +119,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +130,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -443,6 +443,8 @@ rewrite_op_gc_identityhash = _do_builtin_call rewrite_op_gc_id = _do_builtin_call rewrite_op_uint_mod = _do_builtin_call + rewrite_op_cast_float_to_uint = _do_builtin_call + rewrite_op_cast_uint_to_float = _do_builtin_call # ---------- # getfield/setfield/mallocs etc. @@ -798,6 +800,9 @@ def _is_gc(self, v): return getattr(getattr(v.concretetype, "TO", None), "_gckind", "?") == 'gc' + def _is_rclass_instance(self, v): + return lltype._castdepth(v.concretetype.TO, rclass.OBJECT) >= 0 + def _rewrite_cmp_ptrs(self, op): if self._is_gc(op.args[0]): return op @@ -815,11 +820,21 @@ return self._rewrite_equality(op, 'int_is_true') def rewrite_op_ptr_eq(self, op): - op1 = self._rewrite_equality(op, 'ptr_iszero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_eq', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_iszero') return self._rewrite_cmp_ptrs(op1) def rewrite_op_ptr_ne(self, op): - op1 = self._rewrite_equality(op, 'ptr_nonzero') + prefix = '' + if self._is_rclass_instance(op.args[0]): + assert self._is_rclass_instance(op.args[1]) + op = SpaceOperation('instance_ptr_ne', op.args, op.result) + prefix = 'instance_' + op1 = self._rewrite_equality(op, prefix + 'ptr_nonzero') return self._rewrite_cmp_ptrs(op1) rewrite_op_ptr_iszero = _rewrite_cmp_ptrs @@ -829,6 +844,10 @@ if self._is_gc(op.args[0]): return op + def rewrite_op_cast_opaque_ptr(self, op): + # None causes the result of this op to get aliased to op.args[0] + return [SpaceOperation('mark_opaque_ptr', op.args, None), None] + def rewrite_op_force_cast(self, op): v_arg = op.args[0] v_result = op.result @@ -848,26 +867,44 @@ elif not float_arg and float_res: # some int -> some float ops = [] - v1 = varoftype(lltype.Signed) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v_arg], v1) - ) - if oplist: - ops.extend(oplist) + v2 = varoftype(lltype.Float) + sizesign = rffi.size_and_sign(v_arg.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast from a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v1 = varoftype(lltype.Signed) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v_arg], v1) + ) + if oplist: + ops.extend(oplist) + else: + v1 = v_arg + op = self.rewrite_operation( + SpaceOperation('cast_int_to_float', [v1], v2) + ) + ops.append(op) else: - v1 = v_arg - v2 = varoftype(lltype.Float) - op = self.rewrite_operation( - SpaceOperation('cast_int_to_float', [v1], v2) - ) - ops.append(op) + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_uint_to_float' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_longlong_to_float' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_ulonglong_to_float' + else: + raise AssertionError('cast_x_to_float: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v_arg], v2) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) op2 = self.rewrite_operation( SpaceOperation('force_cast', [v2], v_result) ) if op2: ops.append(op2) else: - op.result = v_result + ops[-1].result = v_result return ops elif float_arg and not float_res: # some float -> some int @@ -880,18 +917,36 @@ ops.append(op1) else: v1 = v_arg - v2 = varoftype(lltype.Signed) - op = self.rewrite_operation( - SpaceOperation('cast_float_to_int', [v1], v2) - ) - ops.append(op) - oplist = self.rewrite_operation( - SpaceOperation('force_cast', [v2], v_result) - ) - if oplist: - ops.extend(oplist) + sizesign = rffi.size_and_sign(v_result.concretetype) + if sizesign <= rffi.size_and_sign(lltype.Signed): + # cast to a type that fits in an int: either the size is + # smaller, or it is equal and it is not unsigned + v2 = varoftype(lltype.Signed) + op = self.rewrite_operation( + SpaceOperation('cast_float_to_int', [v1], v2) + ) + ops.append(op) + oplist = self.rewrite_operation( + SpaceOperation('force_cast', [v2], v_result) + ) + if oplist: + ops.extend(oplist) + else: + op.result = v_result else: - op.result = v_result + if sizesign == rffi.size_and_sign(lltype.Unsigned): + opname = 'cast_float_to_uint' + elif sizesign == rffi.size_and_sign(lltype.SignedLongLong): + opname = 'cast_float_to_longlong' + elif sizesign == rffi.size_and_sign(lltype.UnsignedLongLong): + opname = 'cast_float_to_ulonglong' + else: + raise AssertionError('cast_float_to_x: %r' % (sizesign,)) + ops1 = self.rewrite_operation( + SpaceOperation(opname, [v1], v_result) + ) + if not isinstance(ops1, list): ops1 = [ops1] + ops.extend(ops1) return ops else: assert False @@ -1097,8 +1152,6 @@ # The new operation is optionally further processed by rewrite_operation(). for _old, _new in [('bool_not', 'int_is_zero'), ('cast_bool_to_float', 'cast_int_to_float'), - ('cast_uint_to_float', 'cast_int_to_float'), - ('cast_float_to_uint', 'cast_float_to_int'), ('int_add_nonneg_ovf', 'int_add_ovf'), ('keepalive', '-live-'), diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -37,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) @@ -229,6 +231,17 @@ else: return x +def _ll_1_cast_uint_to_float(x): + # XXX on 32-bit platforms, this should be done using cast_longlong_to_float + # (which is a residual call right now in the x86 backend) + return llop.cast_uint_to_float(lltype.Float, x) + +def _ll_1_cast_float_to_uint(x): + # XXX on 32-bit platforms, this should be done using cast_float_to_longlong + # (which is a residual call right now in the x86 backend) + return llop.cast_float_to_uint(lltype.Unsigned, x) + + # math support # ------------ diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,10 +5,10 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype -from pypy.rlib.rarithmetic import ovfcheck, r_uint +from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong from pypy.rlib.jit import dont_look_inside, _we_are_jitted, JitDriver from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib import jit @@ -70,7 +70,8 @@ return 'residual' def getcalldescr(self, op, oopspecindex=None, extraeffect=None): try: - if 'cannot_raise' in op.args[0].value._obj.graph.name: + name = op.args[0].value._obj._name + if 'cannot_raise' in name or name.startswith('cast_'): return self._descr_cannot_raise except AttributeError: pass @@ -742,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -848,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -856,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -900,9 +898,69 @@ int_return %i4 """, transform=True) + def f(dbl): + return rffi.cast(rffi.UCHAR, dbl) + self.encoding_test(f, [12.456], """ + cast_float_to_int %f0 -> %i0 + int_and %i0, $255 -> %i1 + int_return %i1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.Unsigned, dbl) + self.encoding_test(f, [12.456], """ + residual_call_irf_i $<* fn cast_float_to_uint>, , I[], R[], F[%f0] -> %i0 + int_return %i0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, chr(i)) # "char -> float" + self.encoding_test(f, [12], """ + cast_int_to_float %i0 -> %f0 + float_return %f0 + """, transform=True) + + def f(i): + return rffi.cast(lltype.Float, r_uint(i)) # "uint -> float" + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn cast_uint_to_float>, , I[%i0], R[], F[] -> %f0 + float_return %f0 + """, transform=True) + + if not longlong.is_64_bit: + def f(dbl): + return rffi.cast(lltype.SignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn llong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(dbl): + return rffi.cast(lltype.UnsignedLongLong, dbl) + self.encoding_test(f, [12.3], """ + residual_call_irf_f $<* fn ullong_from_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_longlong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn llong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn llong_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) + + def f(x): + ll = r_ulonglong(x) + return rffi.cast(lltype.Float, ll) + self.encoding_test(f, [12], """ + residual_call_irf_f $<* fn ullong_from_int>, , I[%i0], R[], F[] -> %f0 + residual_call_irf_f $<* fn ullong_u_to_float>, , I[], R[], F[%f0] -> %f1 + float_return %f1 + """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -913,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -576,10 +576,10 @@ assert op1.args == [v2] def test_ptr_eq(): - v1 = varoftype(rclass.OBJECTPTR) - v2 = varoftype(rclass.OBJECTPTR) + v1 = varoftype(lltype.Ptr(rstr.STR)) + v2 = varoftype(lltype.Ptr(rstr.STR)) v3 = varoftype(lltype.Bool) - c0 = const(lltype.nullptr(rclass.OBJECT)) + c0 = const(lltype.nullptr(rstr.STR)) # for opname, reducedname in [('ptr_eq', 'ptr_iszero'), ('ptr_ne', 'ptr_nonzero')]: @@ -598,6 +598,31 @@ assert op1.opname == reducedname assert op1.args == [v2] +def test_instance_ptr_eq(): + v1 = varoftype(rclass.OBJECTPTR) + v2 = varoftype(rclass.OBJECTPTR) + v3 = varoftype(lltype.Bool) + c0 = const(lltype.nullptr(rclass.OBJECT)) + + for opname, newopname, reducedname in [ + ('ptr_eq', 'instance_ptr_eq', 'instance_ptr_iszero'), + ('ptr_ne', 'instance_ptr_ne', 'instance_ptr_nonzero') + ]: + op = SpaceOperation(opname, [v1, v2], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == newopname + assert op1.args == [v1, v2] + + op = SpaceOperation(opname, [v1, c0], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + + op = SpaceOperation(opname, [c0, v1], v3) + op1 = Transformer().rewrite_operation(op) + assert op1.opname == reducedname + assert op1.args == [v1] + def test_nongc_ptr_eq(): v1 = varoftype(rclass.NONGCOBJECTPTR) v2 = varoftype(rclass.NONGCOBJECTPTR) @@ -1103,3 +1128,16 @@ varoftype(lltype.Signed)) tr = Transformer(None, None) raises(NotImplementedError, tr.rewrite_operation, op) + +def test_cast_opaque_ptr(): + S = lltype.GcStruct("S", ("x", lltype.Signed)) + v1 = varoftype(lltype.Ptr(S)) + v2 = varoftype(lltype.Ptr(rclass.OBJECT)) + + op = SpaceOperation('cast_opaque_ptr', [v1], v2) + tr = Transformer() + [op1, op2] = tr.rewrite_operation(op) + assert op1.opname == 'mark_opaque_ptr' + assert op1.args == [v1] + assert op1.result is None + assert op2 is None \ No newline at end of file diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -499,9 +499,12 @@ @arguments("r", returns="i") def bhimpl_ptr_nonzero(a): return bool(a) - @arguments("r", returns="r") - def bhimpl_cast_opaque_ptr(a): - return a + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_eq(a, b): + return a == b + @arguments("r", "r", returns="i") + def bhimpl_instance_ptr_ne(a, b): + return a != b @arguments("r", returns="i") def bhimpl_cast_ptr_to_int(a): i = lltype.cast_ptr_to_int(a) @@ -512,6 +515,10 @@ ll_assert((i & 1) == 1, "bhimpl_cast_int_to_ptr: not an odd int") return lltype.cast_int_to_ptr(llmemory.GCREF, i) + @arguments("r") + def bhimpl_mark_opaque_ptr(a): + pass + @arguments("i", returns="i") def bhimpl_int_copy(a): return a @@ -630,6 +637,9 @@ a = longlong.getrealfloat(a) # note: we need to call int() twice to care for the fact that # int(-2147483648.0) returns a long :-( + # we could also call intmask() instead of the outermost int(), but + # it's probably better to explicitly crash (by getting a long) if a + # non-translated version tries to cast a too large float to an int. return int(int(a)) @arguments("i", returns="f") diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -34,7 +34,6 @@ self.clear_caches(opnum, descr, argboxes) def mark_escaped(self, opnum, argboxes): - idx = 0 if opnum == rop.SETFIELD_GC: assert len(argboxes) == 2 box, valuebox = argboxes @@ -42,8 +41,20 @@ self.dependencies.setdefault(box, []).append(valuebox) else: self._escape(valuebox) - # GETFIELD_GC doesn't escape it's argument - elif opnum != rop.GETFIELD_GC: + elif opnum == rop.SETARRAYITEM_GC: + assert len(argboxes) == 3 + box, indexbox, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC, MARK_OPAQUE_PTR, PTR_EQ, and PTR_NE don't escape their + # arguments + elif (opnum != rop.GETFIELD_GC and + opnum != rop.MARK_OPAQUE_PTR and + opnum != rop.PTR_EQ and + opnum != rop.PTR_NE): + idx = 0 for box in argboxes: # setarrayitem_gc don't escape its first argument if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): @@ -60,13 +71,13 @@ self._escape(dep) def clear_caches(self, opnum, descr, argboxes): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if opnum == rop.SETFIELD_RAW: - return - if opnum == rop.SETARRAYITEM_RAW: + if (opnum == rop.SETFIELD_GC or + opnum == rop.SETARRAYITEM_GC or + opnum == rop.SETFIELD_RAW or + opnum == rop.SETARRAYITEM_RAW or + opnum == rop.SETINTERIORFIELD_GC or + opnum == rop.COPYSTRCONTENT or + opnum == rop.COPYUNICODECONTENT): return if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: return @@ -75,9 +86,9 @@ if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + if (ef == effectinfo.EF_LOOPINVARIANT or + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or + ef == effectinfo.EF_ELIDABLE_CAN_RAISE): return # A special case for ll_arraycopy, because it is so common, and its # effects are so well defined. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -929,6 +929,9 @@ def view(self, **kwds): pass + def clear(self): + pass + class Stats(object): """For tests.""" @@ -943,6 +946,15 @@ self.aborted_keys = [] self.invalidated_token_numbers = set() + def clear(self): + del self.loops[:] + del self.locations[:] + del self.aborted_keys[:] + self.invalidated_token_numbers.clear() + self.compiled_count = 0 + self.enter_count = 0 + self.aborted_count = 0 + def set_history(self, history): self.operations = history.operations diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) @@ -225,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,36 +6,18 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -126,14 +109,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,72 +151,84 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) + + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): @@ -169,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,12 +1,12 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -209,13 +220,19 @@ def setfield(self, ofs, value): raise NotImplementedError + def getlength(self): + raise NotImplementedError + def getitem(self, index): raise NotImplementedError - def getlength(self): + def setitem(self, index, value): raise NotImplementedError - def setitem(self, index, value): + def getinteriorfield(self, index, ofs, default): + raise NotImplementedError + + def setinteriorfield(self, index, ofs, value): raise NotImplementedError @@ -230,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -244,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -283,11 +302,11 @@ return self.optimizer.optpure.has_pure_result(opnum, args, descr) return False - def get_pure_result(self, key): + def get_pure_result(self, key): if self.optimizer.optpure: return self.optimizer.optpure.get_pure_result(key) return None - + def setup(self): pass @@ -311,20 +330,20 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -346,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -392,6 +412,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -477,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) @@ -524,7 +546,7 @@ def replace_op(self, old_op, new_op): # XXX: Do we want to cache indexes to prevent search? - i = len(self._newoperations) + i = len(self._newoperations) while i > 0: i -= 1 if self._newoperations[i] is old_op: diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -337,7 +332,7 @@ def optimize_INT_IS_ZERO(self, op): self._optimize_nullness(op, op.getarg(0), False) - def _optimize_oois_ooisnot(self, op, expect_isnot): + def _optimize_oois_ooisnot(self, op, expect_isnot, instance): value0 = self.getvalue(op.getarg(0)) value1 = self.getvalue(op.getarg(1)) if value0.is_virtual(): @@ -355,21 +350,28 @@ elif value0 is value1: self.make_constant_int(op.result, not expect_isnot) else: - cls0 = value0.get_constant_class(self.optimizer.cpu) - if cls0 is not None: - cls1 = value1.get_constant_class(self.optimizer.cpu) - if cls1 is not None and not cls0.same_constant(cls1): - # cannot be the same object, as we know that their - # class is different - self.make_constant_int(op.result, expect_isnot) - return + if instance: + cls0 = value0.get_constant_class(self.optimizer.cpu) + if cls0 is not None: + cls1 = value1.get_constant_class(self.optimizer.cpu) + if cls1 is not None and not cls0.same_constant(cls1): + # cannot be the same object, as we know that their + # class is different + self.make_constant_int(op.result, expect_isnot) + return self.emit_operation(op) + def optimize_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, False) + def optimize_PTR_NE(self, op): - self._optimize_oois_ooisnot(op, True) + self._optimize_oois_ooisnot(op, True, False) - def optimize_PTR_EQ(self, op): - self._optimize_oois_ooisnot(op, False) + def optimize_INSTANCE_PTR_EQ(self, op): + self._optimize_oois_ooisnot(op, False, True) + + def optimize_INSTANCE_PTR_NE(self, op): + self._optimize_oois_ooisnot(op, True, True) ## def optimize_INSTANCEOF(self, op): ## value = self.getvalue(op.args[0]) @@ -437,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) @@ -458,10 +469,9 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_CAST_OPAQUE_PTR(self, op): + def optimize_MARK_OPAQUE_PTR(self, op): value = self.getvalue(op.getarg(0)) self.optimizer.opaque_pointers[value] = True - self.make_equal_to(op.result, value) def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -25,7 +25,8 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - optimize_CAST_OPAQUE_PTR = optimize_VIRTUAL_REF + def optimize_MARK_OPAQUE_PTR(self, op): + pass dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -508,13 +509,13 @@ ops = """ [p0] guard_class(p0, ConstClass(node_vtable)) [] - i0 = ptr_ne(p0, NULL) + i0 = instance_ptr_ne(p0, NULL) guard_true(i0) [] - i1 = ptr_eq(p0, NULL) + i1 = instance_ptr_eq(p0, NULL) guard_false(i1) [] - i2 = ptr_ne(NULL, p0) + i2 = instance_ptr_ne(NULL, p0) guard_true(i0) [] - i3 = ptr_eq(NULL, p0) + i3 = instance_ptr_eq(NULL, p0) guard_false(i1) [] jump(p0) """ @@ -680,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -935,7 +971,6 @@ """ self.optimize_loop(ops, expected) - def test_virtual_constant_isnonnull(self): ops = """ [i0] @@ -951,6 +986,55 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct(self): + ops = """ + [f0, f1, f2, f3] + p0 = new_array(2, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + setinteriorfield_gc(p0, 1, f2, descr=complexrealdescr) + setinteriorfield_gc(p0, 1, f3, descr=compleximagdescr) + f4 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f5 = getinteriorfield_gc(p0, 1, descr=complexrealdescr) + f6 = float_mul(f4, f5) + f7 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f8 = getinteriorfield_gc(p0, 1, descr=compleximagdescr) + f9 = float_mul(f7, f8) + f10 = float_add(f6, f9) + finish(f10) + """ + expected = """ + [f0, f1, f2, f3] + f4 = float_mul(f0, f2) + f5 = float_mul(f1, f3) + f6 = float_add(f4, f5) + finish(f6) + """ + self.optimize_loop(ops, expected) + + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -2026,7 +2110,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -4074,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4176,15 +4292,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -4664,11 +4803,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4676,21 +4815,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4699,6 +4858,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4781,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4792,10 +4982,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) @@ -4812,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -2168,13 +2183,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -2683,7 +2698,7 @@ ops = """ [p1] guard_class(p1, ConstClass(node_vtable2)) [] - i = ptr_ne(ConstPtr(myptr), p1) + i = instance_ptr_ne(ConstPtr(myptr), p1) guard_true(i) [] jump(p1) """ @@ -3331,7 +3346,7 @@ jump(p1, i1, i2, i6) ''' self.optimize_loop(ops, expected, preamble) - + # ---------- @@ -4783,6 +4798,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -5800,10 +5861,12 @@ class FakeCallInfoCollection: def callinfo_for_oopspec(self, oopspecindex): calldescrtype = type(LLtypeMixin.strequaldescr) + effectinfotype = type(LLtypeMixin.strequaldescr.get_extra_info()) for value in LLtypeMixin.__dict__.values(): if isinstance(value, calldescrtype): extra = value.get_extra_info() - if extra and extra.oopspecindex == oopspecindex: + if (extra and isinstance(extra, effectinfotype) and + extra.oopspecindex == oopspecindex): # returns 0 for 'func' in this test return value, 0 raise AssertionError("not found: oopspecindex=%d" % @@ -6233,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6248,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ @@ -7280,7 +7347,7 @@ ops = """ [p1, p2] setarrayitem_gc(p1, 2, 10, descr=arraydescr) - setarrayitem_gc(p2, 3, 13, descr=arraydescr) + setarrayitem_gc(p2, 3, 13, descr=arraydescr) call(0, p1, p2, 0, 0, 10, descr=arraycopydescr) jump(p1, p2) """ @@ -7307,6 +7374,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,8 +183,21 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) + + # array of structs (complex data) + complexarray = lltype.GcArray( + lltype.Struct("complex", + ("real", lltype.Float), + ("imag", lltype.Float), + ) + ) + complexarraydescr = cpu.arraydescrof(complexarray) + complexrealdescr = cpu.interiorfielddescrof(complexarray, "real") + compleximagdescr = cpu.interiorfielddescrof(complexarray, "imag") + for _name, _os in [ ('strconcatdescr', 'OS_STR_CONCAT'), ('strslicedescr', 'OS_STR_SLICE'), @@ -200,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) @@ -240,7 +255,7 @@ ## def get_class_of_box(self, box): ## root = box.getref(ootype.ROOT) ## return ootype.classof(root) - + ## cpu = runner.OOtypeCPU(None) ## NODE = ootype.Instance('NODE', ootype.ROOT, {}) ## NODE._add_fields({'value': ootype.Signed, diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -271,6 +271,74 @@ def _make_virtual(self, modifier): return modifier.make_varray(self.arraydescr) +class VArrayStructValue(AbstractVirtualValue): + def __init__(self, arraydescr, size, keybox, source_op=None): + AbstractVirtualValue.__init__(self, keybox, source_op) + self.arraydescr = arraydescr + self._items = [{} for _ in xrange(size)] + + def getlength(self): + return len(self._items) + + def getinteriorfield(self, index, ofs, default): + return self._items[index].get(ofs, default) + + def setinteriorfield(self, index, ofs, itemvalue): + assert isinstance(itemvalue, optimizer.OptValue) + self._items[index][ofs] = itemvalue + + def _really_force(self, optforce): + assert self.source_op is not None + if not we_are_translated(): + self.source_op.name = 'FORCE ' + self.source_op.name + optforce.emit_operation(self.source_op) + self.box = box = self.source_op.result + for index in range(len(self._items)): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: + subbox = value.force_box(optforce) + op = ResOperation(rop.SETINTERIORFIELD_GC, + [box, ConstInt(index), subbox], None, descr=descr + ) + optforce.emit_operation(op) + + def _get_list_of_descrs(self): + descrs = [] + for item in self._items: + item_descrs = item.keys() + sort_descrs(item_descrs) + descrs.append(item_descrs) + return descrs + + def get_args_for_fail(self, modifier): + if self.box is None and not modifier.already_seen_virtual(self.keybox): + itemdescrs = self._get_list_of_descrs() + itemboxes = [] + for i in range(len(self._items)): + for descr in itemdescrs[i]: + itemboxes.append(self._items[i][descr].get_key_box()) + modifier.register_virtual_fields(self.keybox, itemboxes) + for i in range(len(self._items)): + for descr in itemdescrs[i]: + self._items[i][descr].get_args_for_fail(modifier) + + def force_at_end_of_preamble(self, already_forced, optforce): + if self in already_forced: + return self + already_forced[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) + return self + + def _make_virtual(self, modifier): + return modifier.make_varraystruct(self.arraydescr, self._get_list_of_descrs()) + + class OptVirtualize(optimizer.Optimization): "Virtualize objects until they escape." @@ -283,8 +351,11 @@ return vvalue def make_varray(self, arraydescr, size, box, source_op=None): - constvalue = self.new_const_item(arraydescr) - vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) + if arraydescr.is_array_of_structs(): + vvalue = VArrayStructValue(arraydescr, size, box, source_op) + else: + constvalue = self.new_const_item(arraydescr) + vvalue = VArrayValue(arraydescr, constvalue, size, box, source_op) self.make_equal_to(box, vvalue) return vvalue @@ -386,8 +457,7 @@ def optimize_NEW_ARRAY(self, op): sizebox = self.get_constant_box(op.getarg(0)) - # For now we can't make arrays of structs virtual. - if sizebox is not None and not op.getdescr().is_array_of_structs(): + if sizebox is not None: # if the original 'op' did not have a ConstInt as argument, # build a new one with the ConstInt argument if not isinstance(op.getarg(0), ConstInt): @@ -432,6 +502,34 @@ value.ensure_nonnull() self.emit_operation(op) + def optimize_GETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + descr = op.getdescr() + fieldvalue = value.getinteriorfield( + indexbox.getint(), descr, None + ) + if fieldvalue is None: + fieldvalue = self.new_const(descr) + self.make_equal_to(op.result, fieldvalue) + return + value.ensure_nonnull() + self.emit_operation(op) + + def optimize_SETINTERIORFIELD_GC(self, op): + value = self.getvalue(op.getarg(0)) + if value.is_virtual(): + indexbox = self.get_constant_box(op.getarg(1)) + if indexbox is not None: + value.setinteriorfield( + indexbox.getint(), op.getdescr(), self.getvalue(op.getarg(2)) + ) + return + value.ensure_nonnull() + self.emit_operation(op) + dispatch_opt = make_dispatcher_method(OptVirtualize, 'optimize_', default=OptVirtualize.emit_operation) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -16,7 +16,7 @@ class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 - + def generalization_of(self, other, renum, bad): raise NotImplementedError @@ -54,7 +54,7 @@ s.debug_print(indent + " ", seen, bad) else: debug_print(indent + " ...") - + def debug_header(self, indent): raise NotImplementedError @@ -77,13 +77,15 @@ bad[self] = True bad[other] = True return False + + assert isinstance(other, AbstractVirtualStructStateInfo) assert len(self.fielddescrs) == len(self.fieldstate) assert len(other.fielddescrs) == len(other.fieldstate) if len(self.fielddescrs) != len(other.fielddescrs): bad[self] = True bad[other] = True return False - + for i in range(len(self.fielddescrs)): if other.fielddescrs[i] is not self.fielddescrs[i]: bad[self] = True @@ -112,8 +114,8 @@ def _enum(self, virtual_state): for s in self.fieldstate: s.enum(virtual_state) - - + + class VirtualStateInfo(AbstractVirtualStructStateInfo): def __init__(self, known_class, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) @@ -128,13 +130,13 @@ def debug_header(self, indent): debug_print(indent + 'VirtualStateInfo(%d):' % self.position) - + class VStructStateInfo(AbstractVirtualStructStateInfo): def __init__(self, typedescr, fielddescrs): AbstractVirtualStructStateInfo.__init__(self, fielddescrs) self.typedescr = typedescr - def _generalization_of(self, other): + def _generalization_of(self, other): if not isinstance(other, VStructStateInfo): return False if self.typedescr is not other.typedescr: @@ -143,7 +145,7 @@ def debug_header(self, indent): debug_print(indent + 'VStructStateInfo(%d):' % self.position) - + class VArrayStateInfo(AbstractVirtualStateInfo): def __init__(self, arraydescr): self.arraydescr = arraydescr @@ -157,11 +159,7 @@ bad[other] = True return False renum[self.position] = other.position - if not isinstance(other, VArrayStateInfo): - bad[self] = True - bad[other] = True - return False - if self.arraydescr is not other.arraydescr: + if not self._generalization_of(other): bad[self] = True bad[other] = True return False @@ -177,6 +175,10 @@ return False return True + def _generalization_of(self, other): + return (isinstance(other, VArrayStateInfo) and + self.arraydescr is other.arraydescr) + def enum_forced_boxes(self, boxes, value, optimizer): assert isinstance(value, virtualize.VArrayValue) assert value.is_virtual() @@ -192,8 +194,75 @@ def debug_header(self, indent): debug_print(indent + 'VArrayStateInfo(%d):' % self.position) - - + +class VArrayStructStateInfo(AbstractVirtualStateInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def generalization_of(self, other, renum, bad): + assert self.position != -1 + if self.position in renum: + if renum[self.position] == other.position: + return True + bad[self] = True + bad[other] = True + return False + renum[self.position] = other.position + if not self._generalization_of(other): + bad[self] = True + bad[other] = True + return False + + assert isinstance(other, VArrayStructStateInfo) + if len(self.fielddescrs) != len(other.fielddescrs): + bad[self] = True + bad[other] = True + return False + + p = 0 + for i in range(len(self.fielddescrs)): + if len(self.fielddescrs[i]) != len(other.fielddescrs[i]): + bad[self] = True + bad[other] = True + return False + for j in range(len(self.fielddescrs[i])): + if self.fielddescrs[i][j] is not other.fielddescrs[i][j]: + bad[self] = True + bad[other] = True + return False + if not self.fieldstate[p].generalization_of(other.fieldstate[p], + renum, bad): + bad[self] = True + bad[other] = True + return False + p += 1 + return True + + def _generalization_of(self, other): + return (isinstance(other, VArrayStructStateInfo) and + self.arraydescr is other.arraydescr) + + def _enum(self, virtual_state): + for s in self.fieldstate: + s.enum(virtual_state) + + def enum_forced_boxes(self, boxes, value, optimizer): + assert isinstance(value, virtualize.VArrayStructValue) + assert value.is_virtual() + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + v = value._items[i][self.fielddescrs[i][j]] + s = self.fieldstate[p] + if s.position > self.position: + s.enum_forced_boxes(boxes, v, optimizer) + p += 1 + + def debug_header(self, indent): + debug_print(indent + 'VArrayStructStateInfo(%d):' % self.position) + + class NotVirtualStateInfo(AbstractVirtualStateInfo): def __init__(self, value): self.known_class = value.known_class @@ -277,7 +346,7 @@ op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) extra_guards.append(op) return - + if self.level == LEVEL_NONNULL and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxPtr) and \ @@ -285,7 +354,7 @@ op = ResOperation(rop.GUARD_NONNULL, [box], None) extra_guards.append(op) return - + if self.level == LEVEL_UNKNOWN and \ other.level == LEVEL_UNKNOWN and \ isinstance(box, BoxInt) and \ @@ -309,7 +378,7 @@ op = ResOperation(rop.GUARD_TRUE, [res], None) extra_guards.append(op) return - + # Remaining cases are probably not interesting raise InvalidLoop if self.level == LEVEL_CONSTANT: @@ -319,7 +388,7 @@ def enum_forced_boxes(self, boxes, value, optimizer): if self.level == LEVEL_CONSTANT: return - assert 0 <= self.position_in_notvirtuals + assert 0 <= self.position_in_notvirtuals boxes[self.position_in_notvirtuals] = value.force_box(optimizer) def _enum(self, virtual_state): @@ -348,7 +417,7 @@ lb = '' if self.lenbound: lb = ', ' + self.lenbound.bound.__repr__() - + debug_print(indent + mark + 'NotVirtualInfo(%d' % self.position + ', ' + l + ', ' + self.intbound.__repr__() + lb + ')') @@ -370,7 +439,7 @@ return False return True - def generate_guards(self, other, args, cpu, extra_guards): + def generate_guards(self, other, args, cpu, extra_guards): assert len(self.state) == len(other.state) == len(args) renum = {} for i in range(len(self.state)): @@ -393,7 +462,7 @@ inputargs.append(box) assert None not in inputargs - + return inputargs def debug_print(self, hdr='', bad=None): @@ -412,7 +481,7 @@ def register_virtual_fields(self, keybox, fieldboxes): self.fieldboxes[keybox] = fieldboxes - + def already_seen_virtual(self, keybox): return keybox in self.fieldboxes @@ -463,6 +532,9 @@ def make_varray(self, arraydescr): return VArrayStateInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructStateInfo(arraydescr, fielddescrs) + class BoxNotProducable(Exception): pass @@ -479,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -501,12 +574,12 @@ else: # Low priority lo -= 1 return alts - + def renamed(self, box): if box in self.rename: return self.rename[box] return box - + def add_to_short(self, box, op): if op: op = op.clone() @@ -528,12 +601,16 @@ self.optimizer.make_equal_to(newbox, value) else: self.short_boxes[box] = op - + def produce_short_preamble_box(self, box): if box in self.short_boxes: - return + return if isinstance(box, Const): - return + return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False @@ -570,7 +647,7 @@ else: debug_print(logops.repr_of_arg(box) + ': None') debug_stop('jit-short-boxes') - + def operations(self): if not we_are_translated(): # For tests ops = self.short_boxes.values() @@ -588,7 +665,7 @@ if not isinstance(oldbox, Const) and newbox not in self.short_boxes: self.short_boxes[newbox] = self.short_boxes[oldbox] self.aliases[newbox] = oldbox - + def original(self, box): while box in self.aliases: box = self.aliases[box] diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,8 +1,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,7 +107,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +120,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,53 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) - - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - for box in self._chars: - box.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -180,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -226,18 +255,6 @@ self.left.get_args_for_fail(modifier) self.right.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.left.enum_forced_boxes(boxes, already_seen) - self.right.enum_forced_boxes(boxes, already_seen) - self.lengthbox = None - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrconcat(self.mode is mode_unicode) @@ -284,18 +301,6 @@ self.vstart.get_args_for_fail(modifier) self.vlength.get_args_for_fail(modifier) - def FIXME_enum_forced_boxes(self, boxes, already_seen): - key = self.get_key_box() - if key in already_seen: - return - already_seen[key] = None - if self.box is None: - self.vstr.enum_forced_boxes(boxes, already_seen) - self.vstart.enum_forced_boxes(boxes, already_seen) - self.vlength.enum_forced_boxes(boxes, already_seen) - else: - boxes.append(self.box) - def _make_virtual(self, modifier): return modifier.make_vstrslice(self.mode is mode_unicode) @@ -312,6 +317,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -322,6 +328,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -408,6 +415,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -441,11 +449,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -467,6 +484,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -508,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -522,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -538,13 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -165,7 +165,7 @@ if not we_are_translated(): for b in registers[count:]: assert not oldbox.same_box(b) - + def make_result_of_lastop(self, resultbox): got_type = resultbox.type @@ -199,7 +199,7 @@ 'float_add', 'float_sub', 'float_mul', 'float_truediv', 'float_lt', 'float_le', 'float_eq', 'float_ne', 'float_gt', 'float_ge', - 'ptr_eq', 'ptr_ne', + 'ptr_eq', 'ptr_ne', 'instance_ptr_eq', 'instance_ptr_ne', ]: exec py.code.Source(''' @arguments("box", "box") @@ -240,8 +240,8 @@ return self.execute(rop.PTR_EQ, box, history.CONST_NULL) @arguments("box") - def opimpl_cast_opaque_ptr(self, box): - return self.execute(rop.CAST_OPAQUE_PTR, box) + def opimpl_mark_opaque_ptr(self, box): + return self.execute(rop.MARK_OPAQUE_PTR, box) @arguments("box") def _opimpl_any_return(self, box): @@ -604,7 +604,7 @@ opimpl_setinteriorfield_gc_i = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_f = _opimpl_setinteriorfield_gc_any opimpl_setinteriorfield_gc_r = _opimpl_setinteriorfield_gc_any - + @arguments("box", "descr") def _opimpl_getfield_raw_any(self, box, fielddescr): @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version @@ -404,8 +407,8 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', - 'CAST_FLOAT_TO_INT/1', - 'CAST_INT_TO_FLOAT/1', + 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would + 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', 'CAST_SINGLEFLOAT_TO_FLOAT/1', # @@ -437,7 +440,8 @@ # 'PTR_EQ/2b', 'PTR_NE/2b', - 'CAST_OPAQUE_PTR/1b', + 'INSTANCE_PTR_EQ/2b', + 'INSTANCE_PTR_NE/2b', # 'ARRAYLEN_GC/1d', 'STRLEN/1', @@ -469,6 +473,7 @@ 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend 'READ_TIMESTAMP/0', + 'MARK_OPAQUE_PTR/1b', '_NOSIDEEFFECT_LAST', # ----- end of no_side_effect operations ----- 'SETARRAYITEM_GC/3d', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -139,7 +140,7 @@ self.numberings = {} self.cached_boxes = {} self.cached_virtuals = {} - + self.nvirtuals = 0 self.nvholes = 0 self.nvreused = 0 @@ -273,6 +274,9 @@ def make_varray(self, arraydescr): return VArrayInfo(arraydescr) + def make_varraystruct(self, arraydescr, fielddescrs): + return VArrayStructInfo(arraydescr, fielddescrs) + def make_vstrplain(self, is_unicode=False): if is_unicode: return VUniPlainInfo() @@ -402,7 +406,7 @@ virtuals[num] = vinfo if self._invalidation_needed(len(liveboxes), nholes): - memo.clear_box_virtual_numbers() + memo.clear_box_virtual_numbers() def _invalidation_needed(self, nliveboxes, nholes): memo = self.memo @@ -436,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -455,7 +461,7 @@ def debug_prints(self): raise NotImplementedError - + class AbstractVirtualStructInfo(AbstractVirtualInfo): def __init__(self, fielddescrs): self.fielddescrs = fielddescrs @@ -537,6 +543,29 @@ for i in self.fieldnums: debug_print("\t\t", str(untag(i))) + +class VArrayStructInfo(AbstractVirtualInfo): + def __init__(self, arraydescr, fielddescrs): + self.arraydescr = arraydescr + self.fielddescrs = fielddescrs + + def debug_prints(self): + debug_print("\tvarraystructinfo", self.arraydescr) + for i in self.fieldnums: + debug_print("\t\t", str(untag(i))) + + @specialize.argtype(1) + def allocate(self, decoder, index): + array = decoder.allocate_array(self.arraydescr, len(self.fielddescrs)) + decoder.virtuals_cache[index] = array + p = 0 + for i in range(len(self.fielddescrs)): + for j in range(len(self.fielddescrs[i])): + decoder.setinteriorfield(i, self.fielddescrs[i][j], array, self.fieldnums[p]) + p += 1 + return array + + class VStrPlainInfo(AbstractVirtualInfo): """Stands for the string made out of the characters of all fieldnums.""" @@ -546,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -599,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): @@ -884,6 +917,17 @@ self.metainterp.execute_and_record(rop.SETFIELD_GC, descr, structbox, fieldbox) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + kind = REF + elif descr.is_float_field(): + kind = FLOAT + else: + kind = INT + fieldbox = self.decode_box(fieldnum, kind) + self.metainterp.execute_and_record(rop.SETINTERIORFIELD_GC, descr, + array, ConstInt(index), fieldbox) + def setarrayitem_int(self, arraydescr, arraybox, index, fieldnum): self._setarrayitem(arraydescr, arraybox, index, fieldnum, INT) @@ -1164,6 +1208,17 @@ newvalue = self.decode_int(fieldnum) self.cpu.bh_setfield_gc_i(struct, descr, newvalue) + def setinteriorfield(self, index, descr, array, fieldnum): + if descr.is_pointer_field(): + newvalue = self.decode_ref(fieldnum) + self.cpu.bh_setinteriorfield_gc_r(array, index, descr, newvalue) + elif descr.is_float_field(): + newvalue = self.decode_float(fieldnum) + self.cpu.bh_setinteriorfield_gc_f(array, index, descr, newvalue) + else: + newvalue = self.decode_int(fieldnum) + self.cpu.bh_setinteriorfield_gc_i(array, index, descr, newvalue) + def setarrayitem_int(self, arraydescr, array, index, fieldnum): newvalue = self.decode_int(fieldnum) self.cpu.bh_setarrayitem_gc_i(arraydescr, array, index, newvalue) diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value +from pypy.rlib import rerased from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, @@ -3436,7 +3437,7 @@ res = self.meta_interp(f, [16]) assert res == f(16) - def test_ptr_eq_str_constants(self): + def test_ptr_eq(self): myjitdriver = JitDriver(greens = [], reds = ["n", "x"]) class A(object): def __init__(self, v): @@ -3452,22 +3453,142 @@ res = self.meta_interp(f, [10, 1]) assert res == 0 + def test_instance_ptr_eq(self): + myjitdriver = JitDriver(greens = [], reds = ["n", "i", "a1", "a2"]) + class A(object): + pass + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + i += a is a1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def f(n): + a1 = A() + a2 = A() + i = 0 + while n > 0: + myjitdriver.jit_merge_point(n=n, i=i, a1=a1, a2=a2) + if n % 2: + a = a2 + else: + a = a1 + if a is a2: + i += 1 + n -= 1 + return i + res = self.meta_interp(f, [10]) + assert res == f(10) + def test_virtual_array_of_structs(self): myjitdriver = JitDriver(greens = [], reds=["n", "d"]) def f(n): d = None while n > 0: myjitdriver.jit_merge_point(n=n, d=d) - d = {} + d = {"q": 1} if n % 2: d["k"] = n else: d["z"] = n - n -= len(d) + n -= len(d) - d["q"] return n res = self.meta_interp(f, [10]) assert res == 0 + def test_virtual_dict_constant_keys(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + def g(d): + return d["key"] - 1 + + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = {"key": n} + n = g(x) + del x["key"] + return n + + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_ptr(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0] + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [] + y = erase(x) + z = unerase(y) + z.append(1) + n -= g(z) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_virtual_opaque_dict(self): + myjitdriver = JitDriver(greens = [], reds = ["n"]) + erase, unerase = rerased.new_erasing_pair("x") + @look_inside_iff(lambda x: isvirtual(x)) + def g(x): + return x[0]["key"] - 1 + def f(n): + while n > 0: + myjitdriver.jit_merge_point(n=n) + x = [{}] + x[0]["key"] = n + x[0]["other key"] = n + y = erase(x) + z = unerase(y) + n = g(x) + return n + res = self.meta_interp(f, [10]) + assert res == 0 + self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3522,11 +3643,12 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): - from pypy.rlib.rerased import erase_int, unerase_int, new_erasing_pair - eraseX, uneraseX = new_erasing_pair("X") + eraseX, uneraseX = rerased.new_erasing_pair("X") # class X: def __init__(self, a, b): @@ -3539,19 +3661,33 @@ e = eraseX(X(i, j)) else: try: - e = erase_int(i) + e = rerased.erase_int(i) except OverflowError: return -42 if j & 1: x = uneraseX(e) return x.a - x.b else: - return unerase_int(e) + return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_float.py b/pypy/jit/metainterp/test/test_float.py --- a/pypy/jit/metainterp/test/test_float.py +++ b/pypy/jit/metainterp/test/test_float.py @@ -1,5 +1,6 @@ -import math +import math, sys from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.rlib.rarithmetic import intmask, r_uint class FloatTests: @@ -45,6 +46,34 @@ res = self.interp_operations(f, [-2.0]) assert res == -8.5 + def test_cast_float_to_int(self): + def g(f): + return int(f) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_float_to_uint(self): + def g(f): + return intmask(r_uint(f)) + res = self.interp_operations(g, [sys.maxint*2.0]) + assert res == intmask(long(sys.maxint*2.0)) + res = self.interp_operations(g, [-12345.9]) + assert res == -12345 + + def test_cast_int_to_float(self): + def g(i): + return float(i) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == -12345.0 + + def test_cast_uint_to_float(self): + def g(i): + return float(r_uint(i)) + res = self.interp_operations(g, [intmask(sys.maxint*2)]) + assert type(res) is float and res == float(sys.maxint*2) + res = self.interp_operations(g, [-12345]) + assert type(res) is float and res == float(long(r_uint(-12345))) + class TestOOtype(FloatTests, OOJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -371,3 +371,17 @@ assert h.is_unescaped(box1) h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) assert not h.is_unescaped(box1) + + h = HeapCache() + h.new_array(box1, lengthbox1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, lengthbox2, box2]) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), [box1] + ) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck +from pypy.rlib.rstring import StringBuilder import py @@ -590,4 +591,14 @@ assert res == 4 self.check_operations_history(int_add_ovf=0) res = self.interp_operations(fn, [sys.maxint]) - assert res == 12 \ No newline at end of file + assert res == 12 + + def test_copy_str_content(self): + def fn(n): + a = StringBuilder() + x = [1] + a.append("hello world") + return x[0] + res = self.interp_operations(fn, [0]) + assert res == 1 + self.check_operations_history(getarrayitem_gc=0, getarrayitem_gc_pure=0 ) \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, @@ -62,7 +62,7 @@ clear_tcache() return jittify_and_run(interp, graph, args, backendopt=backendopt, **kwds) -def jittify_and_run(interp, graph, args, repeat=1, +def jittify_and_run(interp, graph, args, repeat=1, graph_and_interp_only=False, backendopt=False, trace_limit=sys.maxint, inline=False, loop_longevity=0, retrace_limit=5, function_threshold=4, @@ -93,6 +93,8 @@ jd.warmstate.set_param_max_retrace_guards(max_retrace_guards) jd.warmstate.set_param_enable_opts(enable_opts) warmrunnerdesc.finish() + if graph_and_interp_only: + return interp, graph res = interp.eval_graph(graph, args) if not kwds.get('translate_support_code', False): warmrunnerdesc.metainterp_sd.profiler.finish() @@ -157,6 +159,9 @@ def get_stats(): return pyjitpl._warmrunnerdesc.stats +def reset_stats(): + pyjitpl._warmrunnerdesc.stats.clear() + def get_translator(): return pyjitpl._warmrunnerdesc.translator diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -19,7 +19,7 @@ class W_RSocket(Wrappable, RSocket): def __del__(self): self.clear_all_weakrefs() - self.close() + RSocket.__del__(self) def accept_w(self, space): """accept() -> (socket object, address info) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -211,7 +211,9 @@ return result def __del__(self): - self.clear_all_weakrefs() + # note that we don't call clear_all_weakrefs here because + # an array with freed buffer is ok to see - it's just empty with 0 + # length self.setlen(0) def setlen(self, size): diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -824,6 +824,22 @@ r = weakref.ref(a) assert r() is a + def test_subclass_del(self): + import array, gc, weakref + l = [] + + class A(array.array): + pass + + a = A('d') + a.append(3.0) + r = weakref.ref(a, lambda a: l.append(a())) + del a + gc.collect(); gc.collect() # XXX needs two of them right now... + assert l + assert l[0] is None or len(l[0]) == 0 + + class TestCPythonsOwnArray(BaseArrayTests): def setup_class(cls): @@ -844,11 +860,7 @@ cls.w_tempfile = cls.space.wrap( str(py.test.ensuretemp('array').join('tmpfile'))) cls.w_maxint = cls.space.wrap(sys.maxint) - - - - - + def test_buffer_info(self): a = self.array('c', 'Hi!') bi = a.buffer_info() diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith('': + if isinstance(w_rhs, Scalar): + index = int(interp.space.float_w( + w_rhs.value.wrap(interp.space))) + dtype = interp.space.fromcache(W_Float64Dtype) + return Scalar(dtype, w_lhs.get_concrete().eval(index)) + else: + raise NotImplementedError else: - print "Unknown opcode: %s" % b - raise BogusBytecode() - if len(stack) != 1: - print "Bogus bytecode, uneven stack length" - raise BogusBytecode() - return stack[0] + raise NotImplementedError + if not isinstance(w_res, BaseArray): + dtype = interp.space.fromcache(W_Float64Dtype) + w_res = scalar_w(interp.space, dtype, w_res) + return w_res + + def __repr__(self): + return '(%r %s %r)' % (self.lhs, self.name, self.rhs) + +class FloatConstant(Node): + def __init__(self, v): + self.v = float(v) + + def __repr__(self): + return "Const(%s)" % self.v + + def wrap(self, space): + return space.wrap(self.v) + + def execute(self, interp): + dtype = interp.space.fromcache(W_Float64Dtype) + assert isinstance(dtype, W_Float64Dtype) + return Scalar(dtype, dtype.box(self.v)) + +class RangeConstant(Node): + def __init__(self, v): + self.v = int(v) + + def execute(self, interp): + w_list = interp.space.newlist( + [interp.space.wrap(float(i)) for i in range(self.v)]) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return 'Range(%s)' % self.v + +class Code(Node): + def __init__(self, statements): + self.statements = statements + + def __repr__(self): + return "\n".join([repr(i) for i in self.statements]) + +class ArrayConstant(Node): + def __init__(self, items): + self.items = items + + def wrap(self, space): + return space.newlist([item.wrap(space) for item in self.items]) + + def execute(self, interp): + w_list = self.wrap(interp.space) + dtype = interp.space.fromcache(W_Float64Dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + + def __repr__(self): + return "[" + ", ".join([repr(item) for item in self.items]) + "]" + +class SliceConstant(Node): + def __init__(self): + pass + + def __repr__(self): + return 'slice()' + +class Execute(Node): + def __init__(self, expr): + self.expr = expr + + def __repr__(self): + return repr(self.expr) + + def execute(self, interp): + interp.results.append(self.expr.execute(interp)) + +class FunctionCall(Node): + def __init__(self, name, args): + self.name = name + self.args = args + + def __repr__(self): + return "%s(%s)" % (self.name, ", ".join([repr(arg) + for arg in self.args])) + + def execute(self, interp): + if self.name in SINGLE_ARG_FUNCTIONS: + if len(self.args) != 1: + raise ArgumentMismatch + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray + if self.name == "sum": + w_res = arr.descr_sum(interp.space) + elif self.name == "prod": + w_res = arr.descr_prod(interp.space) + elif self.name == "max": + w_res = arr.descr_max(interp.space) + elif self.name == "min": + w_res = arr.descr_min(interp.space) + elif self.name == "any": + w_res = arr.descr_any(interp.space) + elif self.name == "all": + w_res = arr.descr_all(interp.space) + elif self.name == "unegative": + neg = interp_ufuncs.get(interp.space).negative + w_res = neg.call(interp.space, [arr]) + else: + assert False # unreachable code + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = interp.space.fromcache(W_Float64Dtype) + elif isinstance(w_res, BoolObject): + dtype = interp.space.fromcache(W_BoolDtype) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) + else: + raise WrongFunctionName + +class Parser(object): + def parse_identifier(self, id): + id = id.strip(" ") + #assert id.isalpha() + return Variable(id) + + def parse_expression(self, expr): + tokens = [i for i in expr.split(" ") if i] + if len(tokens) == 1: + return self.parse_constant_or_identifier(tokens[0]) + stack = [] + tokens.reverse() + while tokens: + token = tokens.pop() + if token == ')': + raise NotImplementedError + elif self.is_identifier_or_const(token): + if stack: + name = stack.pop().name + lhs = stack.pop() + rhs = self.parse_constant_or_identifier(token) + stack.append(Operator(lhs, name, rhs)) + else: + stack.append(self.parse_constant_or_identifier(token)) + else: + stack.append(Variable(token)) + assert len(stack) == 1 + return stack[-1] + + def parse_constant(self, v): + lgt = len(v)-1 + assert lgt >= 0 + if ':' in v: + # a slice + assert v == ':' + return SliceConstant() + if v[0] == '[': + return ArrayConstant([self.parse_constant(elem) + for elem in v[1:lgt].split(",")]) + if v[0] == '|': + return RangeConstant(v[1:lgt]) + return FloatConstant(v) + + def is_identifier_or_const(self, v): + c = v[0] + if ((c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z') or + (c >= '0' and c <= '9') or c in '-.[|:'): + if v == '-' or v == "->": + return False + return True + return False + + def parse_function_call(self, v): + l = v.split('(') + assert len(l) == 2 + name = l[0] + cut = len(l[1]) - 1 + assert cut >= 0 + args = [self.parse_constant_or_identifier(id) + for id in l[1][:cut].split(",")] + return FunctionCall(name, args) + + def parse_constant_or_identifier(self, v): + c = v[0] + if (c >= 'a' and c <= 'z') or (c >= 'A' and c <= 'Z'): + if '(' in v: + return self.parse_function_call(v) + return self.parse_identifier(v) + return self.parse_constant(v) + + def parse_array_subscript(self, v): + v = v.strip(" ") + l = v.split("[") + lgt = len(l[1]) - 1 + assert lgt >= 0 + rhs = self.parse_constant_or_identifier(l[1][:lgt]) + return l[0], rhs + + def parse_statement(self, line): + if '=' in line: + lhs, rhs = line.split("=") + lhs = lhs.strip(" ") + if '[' in lhs: + name, index = self.parse_array_subscript(lhs) + return ArrayAssignment(name, index, self.parse_expression(rhs)) + else: + return Assignment(lhs, self.parse_expression(rhs)) + else: + return Execute(self.parse_expression(line)) + + def parse(self, code): + statements = [] + for line in code.split("\n"): + if '#' in line: + line = line.split('#', 1)[0] + line = line.strip(" ") + if line: + statements.append(self.parse_statement(line)) + return Code(statements) + +def numpy_compile(code): + parser = Parser() + return InterpreterState(parser.parse(code)) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -108,6 +108,12 @@ def setitem_w(self, space, storage, i, w_item): self.setitem(storage, i, self.unwrap(space, w_item)) + def fill(self, storage, item, start, stop): + storage = self.unerase(storage) + item = self.unbox(item) + for i in xrange(start, stop): + storage[i] = item + @specialize.argtype(1) def adapt_val(self, val): return self.box(rffi.cast(TP.TO.OF, val)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -14,6 +14,27 @@ any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'size', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['i', 'j', 'step', 'stop', 'source', 'dest']) +def descr_new_array(space, w_subtype, w_size_or_iterable, w_dtype=None): + l = space.listview(w_size_or_iterable) + if space.is_w(w_dtype, space.w_None): + w_dtype = None + for w_item in l: + w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) + if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): + break + if w_dtype is None: + w_dtype = space.w_None + + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) + ) + arr = SingleDimArray(len(l), dtype=dtype) + i = 0 + for w_elem in l: + dtype.setitem_w(space, arr.storage, i, w_elem) + i += 1 + return arr + class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature"] @@ -32,27 +53,6 @@ def add_invalidates(self, other): self.invalidates.append(other) - def descr__new__(space, w_subtype, w_size_or_iterable, w_dtype=None): - l = space.listview(w_size_or_iterable) - if space.is_w(w_dtype, space.w_None): - w_dtype = None - for w_item in l: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item, w_dtype) - if w_dtype is space.fromcache(interp_dtype.W_Float64Dtype): - break - if w_dtype is None: - w_dtype = space.w_None - - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = SingleDimArray(len(l), dtype=dtype) - i = 0 - for w_elem in l: - dtype.setitem_w(space, arr.storage, i, w_elem) - i += 1 - return arr - def _unaryop_impl(ufunc_name): def impl(self, space): return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) @@ -201,6 +201,9 @@ def descr_get_shape(self, space): return space.newtuple([self.descr_len(space)]) + def descr_get_size(self, space): + return space.wrap(self.find_size()) + def descr_copy(self, space): return space.call_function(space.gettypefor(BaseArray), self, self.find_dtype()) @@ -565,13 +568,12 @@ arr = SingleDimArray(size, dtype=dtype) one = dtype.adapt_val(1) - for i in xrange(size): - arr.dtype.setitem(arr.storage, i, one) + arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) BaseArray.typedef = TypeDef( 'numarray', - __new__ = interp2app(BaseArray.descr__new__.im_func), + __new__ = interp2app(descr_new_array), __len__ = interp2app(BaseArray.descr_len), @@ -608,6 +610,7 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape), + size = GetSetProperty(BaseArray.descr_get_size), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -32,11 +32,17 @@ return self.identity.wrap(space) def descr_call(self, space, __args__): - try: - args_w = __args__.fixedunpack(self.argcount) - except ValueError, e: - raise OperationError(space.w_TypeError, space.wrap(str(e))) - return self.call(space, args_w) + if __args__.keywords or len(__args__.arguments_w) < self.argcount: + raise OperationError(space.w_ValueError, + space.wrap("invalid number of arguments") + ) + elif len(__args__.arguments_w) > self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar @@ -236,22 +242,20 @@ return dt def find_dtype_for_scalar(space, w_obj, current_guess=None): - w_type = space.type(w_obj) - bool_dtype = space.fromcache(interp_dtype.W_BoolDtype) long_dtype = space.fromcache(interp_dtype.W_LongDtype) int64_dtype = space.fromcache(interp_dtype.W_Int64Dtype) - if space.is_w(w_type, space.w_bool): + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype return current_guess - elif space.is_w(w_type, space.w_int): + elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype): return long_dtype return current_guess - elif space.is_w(w_type, space.w_long): + elif space.isinstance_w(w_obj, space.w_long): if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_compile.py @@ -0,0 +1,170 @@ + +import py +from pypy.module.micronumpy.compile import * + +class TestCompiler(object): + def compile(self, code): + return numpy_compile(code) + + def test_vars(self): + code = """ + a = 2 + b = 3 + """ + interp = self.compile(code) + assert isinstance(interp.code.statements[0], Assignment) + assert interp.code.statements[0].name == 'a' + assert interp.code.statements[0].expr.v == 2 + assert interp.code.statements[1].name == 'b' + assert interp.code.statements[1].expr.v == 3 + + def test_array_literal(self): + code = "a = [1,2,3]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [FloatConstant(1), FloatConstant(2), + FloatConstant(3)] + + def test_array_literal2(self): + code = "a = [[1],[2],[3]]" + interp = self.compile(code) + assert isinstance(interp.code.statements[0].expr, ArrayConstant) + st = interp.code.statements[0] + assert st.expr.items == [ArrayConstant([FloatConstant(1)]), + ArrayConstant([FloatConstant(2)]), + ArrayConstant([FloatConstant(3)])] + + def test_expr_1(self): + code = "b = a + 1" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Variable("a"), "+", FloatConstant(1))) + + def test_expr_2(self): + code = "b = a + b - 3" + interp = self.compile(code) + assert (interp.code.statements[0].expr == + Operator(Operator(Variable("a"), "+", Variable("b")), "-", + FloatConstant(3))) + + def test_expr_3(self): + # an equivalent of range + code = "a = |20|" + interp = self.compile(code) + assert interp.code.statements[0].expr == RangeConstant(20) + + def test_expr_only(self): + code = "3 + a" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(FloatConstant(3), "+", Variable("a"))) + + def test_array_access(self): + code = "a -> 3" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + Operator(Variable("a"), "->", FloatConstant(3))) + + def test_function_call(self): + code = "sum(a)" + interp = self.compile(code) + assert interp.code.statements[0] == Execute( + FunctionCall("sum", [Variable("a")])) + + def test_comment(self): + code = """ + # some comment + a = b + 3 # another comment + """ + interp = self.compile(code) + assert interp.code.statements[0] == Assignment( + 'a', Operator(Variable('b'), "+", FloatConstant(3))) + +class TestRunner(object): + def run(self, code): + interp = numpy_compile(code) + space = FakeSpace() + interp.run(space) + return interp + + def test_one(self): + code = """ + a = 3 + b = 4 + a + b + """ + interp = self.run(code) + assert sorted(interp.variables.keys()) == ['a', 'b'] + assert interp.results[0] + + def test_array_add(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b + """ + interp = self.run(code) + assert interp.results[0]._getnums(False) == ["5.0", "7.0", "9.0", "9.0"] + + def test_array_getitem(self): + code = """ + a = [1,2,3,4] + b = [4,5,6,5] + a + b -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 3 + 6 + + def test_range_getitem(self): + code = """ + r = |20| + 3 + r -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 6 + + def test_sum(self): + code = """ + a = [1,2,3,4,5] + r = sum(a) + r + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_array_write(self): + code = """ + a = [1,2,3,4,5] + a[3] = 15 + a -> 3 + """ + interp = self.run(code) + assert interp.results[0].value.val == 15 + + def test_min(self): + interp = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert interp.results[0].value.val == -24 + + def test_max(self): + interp = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert interp.results[0].value.val == 256 + + def test_slice(self): + py.test.skip("in progress") + interp = self.run(""" + a = [1,2,3,4] + b = a -> : + b -> 3 + """) + assert interp.results[0].value.val == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -36,37 +36,40 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + import numpy - a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + a = numpy.array([0, 1, 2, 2.5], dtype='?') + assert a[0] is numpy.False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is numpy.True_ def test_copy_array_with_dtype(self): - from numpy import array - a = array([0, 1, 2, 3], dtype=long) + import numpy + + a = numpy.array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + a = numpy.array([0, 1, 2, 3], dtype=bool) + assert a[0] is numpy.False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is numpy.False_ def test_zeros_bool(self): - from numpy import zeros - a = zeros(10, dtype=bool) + import numpy + + a = numpy.zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is numpy.False_ def test_ones_bool(self): - from numpy import ones - a = ones(10, dtype=bool) + import numpy + + a = numpy.ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is numpy.True_ def test_zeros_long(self): from numpy import zeros @@ -77,7 +80,7 @@ def test_ones_long(self): from numpy import ones - a = ones(10, dtype=bool) + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 @@ -96,8 +99,9 @@ def test_bool_binop_types(self): from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -17,6 +17,14 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + a = array([1, 2, 3]) + assert a.size == 3 + assert (a + a).size == 3 + def test_empty(self): """ Test that empty() works. @@ -214,7 +222,7 @@ def test_add_other(self): from numpy import array a = array(range(5)) - b = array(reversed(range(5))) + b = array(range(4, -1, -1)) c = a + b for i in range(5): assert c[i] == 4 @@ -264,18 +272,19 @@ assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpy + + a = numpy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpy.dtype(bool) + assert b[0] is numpy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpy.True_ def test_mul_constant(self): from numpy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -24,10 +24,10 @@ def test_wrong_arguments(self): from numpy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): from numpy import negative, sign, minimum @@ -82,6 +82,8 @@ b = negative(a) a[0] = 5.0 assert b[0] == 5.0 + a = array(range(30)) + assert negative(a + a)[3] == -6 def test_abs(self): from numpy import array, absolute @@ -355,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -1,253 +1,195 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.module.micronumpy import interp_ufuncs, signature -from pypy.module.micronumpy.compile import (numpy_compile, FakeSpace, - FloatObject, IntObject) -from pypy.module.micronumpy.interp_dtype import W_Int32Dtype, W_Float64Dtype, W_Int64Dtype, W_UInt64Dtype -from pypy.module.micronumpy.interp_numarray import (BaseArray, SingleDimArray, - SingleDimSlice, scalar_w) +from pypy.module.micronumpy.compile import (FakeSpace, + FloatObject, IntObject, numpy_compile, BoolObject) +from pypy.module.micronumpy.interp_numarray import (SingleDimArray, + SingleDimSlice) from pypy.rlib.nonconst import NonConstant -from pypy.rpython.annlowlevel import llstr -from pypy.rpython.test.test_llinterp import interpret +from pypy.rpython.annlowlevel import llstr, hlstr +from pypy.jit.metainterp.warmspot import reset_stats +from pypy.jit.metainterp import pyjitpl import py class TestNumpyJIt(LLJitMixin): - def setup_class(cls): - cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) - cls.int64_dtype = cls.space.fromcache(W_Int64Dtype) - cls.uint64_dtype = cls.space.fromcache(W_UInt64Dtype) - cls.int32_dtype = cls.space.fromcache(W_Int32Dtype) + graph = None + interp = None + + def run(self, code): + space = FakeSpace() + + def f(code): + interp = numpy_compile(hlstr(code)) + interp.run(space) + res = interp.results[-1] + w_res = res.eval(0).wrap(interp.space) + if isinstance(w_res, BoolObject): + return float(w_res.boolval) + elif isinstance(w_res, FloatObject): + return w_res.floatval + elif isinstance(w_res, IntObject): + return w_res.intval + else: + return -42. + + if self.graph is None: + interp, graph = self.meta_interp(f, [llstr(code)], + listops=True, + backendopt=True, + graph_and_interp_only=True) + self.__class__.interp = interp + self.__class__.graph = graph + + reset_stats() + pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear() + return self.interp.eval_graph(self.graph, [llstr(code)]) def test_add(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ar, ar]) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + b -> 3 + """) self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, 'setarrayitem_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) - assert result == f(5) + assert result == 3 + 3 def test_floatadd(self): - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v = interp_ufuncs.get(self.space).add.call(self.space, [ - ar, - scalar_w(self.space, self.float64_dtype, self.space.wrap(4.5)) - ], - ) - assert isinstance(v, BaseArray) - return v.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + 3 + a -> 3 + """) + assert result == 3 + 3 self.check_loops({"getarrayitem_raw": 1, "float_add": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_sum(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + sum(b) + """) + assert result == 2 * sum(range(30)) self.check_loops({"getarrayitem_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_prod(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - v = ar.descr_add(space, ar).descr_prod(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + prod(b) + """) + expected = 1 + for i in range(30): + expected *= i * 2 + assert result == expected self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) def test_max(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_max(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[13] = 128 + b = a + a + max(b) + """) + assert result == 256 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_gt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 1, - "guard_false": 1, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_min(self): - space = self.space - float64_dtype = self.float64_dtype - int64_dtype = self.int64_dtype - - def f(i): - if NonConstant(False): - dtype = int64_dtype - else: - dtype = float64_dtype - ar = SingleDimArray(i, dtype=dtype) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - v = ar.descr_add(space, ar).descr_min(space) - assert isinstance(v, FloatObject) - return v.floatval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + py.test.skip("broken, investigate") + result = self.run(""" + a = |30| + a[15] = -12 + b = a + a + min(b) + """) + assert result == -24 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_argmin(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(float(j))) - j += 1 - return ar.descr_add(space, ar).descr_argmin(space).intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "float_lt": 1, "int_add": 1, - "int_lt": 1, "guard_true": 2, - "jump": 1}) - assert result == f(5) - - def test_all(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - j = 0 - while j < i: - ar.get_concrete().setitem(j, float64_dtype.box(1.0)) - j += 1 - return ar.descr_add(space, ar).descr_all(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, - "int_lt": 1, "guard_true": 2, "jump": 1}) - assert result == f(5) + "float_mul": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1}) def test_any(self): - space = self.space - float64_dtype = self.float64_dtype - - def f(i): - ar = SingleDimArray(i, dtype=NonConstant(float64_dtype)) - return ar.descr_add(space, ar).descr_any(space).boolval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = [0,0,0,0,0,0,0,0,0,0,0] + a[8] = -12 + b = a + a + any(b) + """) + assert result == 1 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, - "int_add": 1, "float_ne": 1, "guard_false": 1, - "int_lt": 1, "guard_true": 1, "jump": 1}) - assert result == f(5) + "float_ne": 1, "int_add": 1, + "int_lt": 1, "guard_true": 1, "jump": 1, + "guard_false": 1}) def test_already_forced(self): - space = self.space - - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - assert isinstance(v1, BaseArray) - v2 = interp_ufuncs.get(self.space).multiply.call(space, [v1, scalar_w(space, self.float64_dtype, space.wrap(4.5))]) - v1.force_if_needed() - assert isinstance(v2, BaseArray) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + 4.5 + b -> 5 # forces + c = b * 8 + c -> 5 + """) + assert result == (5 + 4.5) * 8 # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, "setarrayitem_raw": 2, "int_add": 2, "int_lt": 2, "guard_true": 2, "jump": 2}) - assert result == f(5) def test_ufunc(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - return v2.get_concrete().eval(3).val - - result = self.meta_interp(f, [5], listops=True, backendopt=True) + result = self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + """) + assert result == -6 self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, "setarrayitem_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) - assert result == f(5) - def test_appropriate_specialization(self): - space = self.space - def f(i): - ar = SingleDimArray(i, dtype=self.float64_dtype) - - v1 = interp_ufuncs.get(self.space).add.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - for i in xrange(5): - v1 = interp_ufuncs.get(self.space).multiply.call(space, [ar, ar]) - v2 = interp_ufuncs.get(self.space).negative.call(space, [v1]) - v2.get_concrete() - - self.meta_interp(f, [5], listops=True, backendopt=True) + def test_specialization(self): + self.run(""" + a = |30| + b = a + a + c = unegative(b) + c -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + d = a * a + unegative(d) + d -> 3 + """) # This is 3, not 2 because there is a bridge for the exit. self.check_loop_count(3) + +class TestNumpyOld(LLJitMixin): + def setup_class(cls): + from pypy.module.micronumpy.compile import FakeSpace + from pypy.module.micronumpy.interp_dtype import W_Float64Dtype + + cls.space = FakeSpace() + cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) + def test_slice(self): def f(i): step = 3 @@ -332,17 +274,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) -class TestTranslation(object): - def test_compile(self): - x = numpy_compile('aa+f*f/a-', 10) - x = x.compute() - assert isinstance(x, SingleDimArray) - assert x.size == 10 - assert x.eval(0).val == 0 - assert x.eval(1).val == ((1 + 1) * 1.2) / 1.2 - 1 - - def test_translation(self): - # we import main to check if the target compiles - from pypy.translator.goal.targetnumpystandalone import main - - interpret(main, [llstr('af+'), 100]) diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -16,7 +16,8 @@ if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', - '__pypy__', 'cStringIO', '_collections', 'struct']: + '__pypy__', 'cStringIO', '_collections', 'struct', + 'mmap']: return True return False diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -465,3 +465,25 @@ setfield_gc(p4, p22, descr=) jump(p0, p1, p2, p3, p4, p7, p22, p7, descr=) """) + + def test_kwargs_virtual(self): + def main(n): + def g(**kwargs): + return kwargs["x"] + 1 + + i = 0 + while i < n: + i = g(x=i) + return i + + log = self.run(main, [500]) + assert log.result == 500 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i2 = int_lt(i0, i1) + guard_true(i2, descr=...) + i3 = force_token() + i4 = int_add(i0, 1) + --TICK-- + jump(..., descr=...) + """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -44,7 +44,7 @@ # gc_id call is hoisted out of the loop, the id of a value obviously # can't change ;) assert loop.match_by_id("getitem", """ - i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_objectPtr_Signed), p18, p6, i25, descr=...) + i26 = call(ConstClass(ll_dict_lookup), p18, p6, i25, descr=...) ... p33 = getinteriorfield_gc(p31, i26, descr=>) ... @@ -69,4 +69,51 @@ i9 = int_add(i5, 1) --TICK-- jump(..., descr=...) + """) + + def test_non_virtual_dict(self): + def main(n): + i = 0 + while i < n: + d = {str(i): i} + i += d[str(i)] - i + 1 + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i8 = int_lt(i5, i7) + guard_true(i8, descr=...) + guard_not_invalidated(descr=...) + p10 = call(ConstClass(ll_int_str), i5, descr=) + guard_no_exception(descr=...) + i12 = call(ConstClass(ll_strhash), p10, descr=) + p13 = new(descr=...) + p15 = new_array(8, descr=) + setfield_gc(p13, p15, descr=) + i17 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + setfield_gc(p13, 16, descr=) + guard_no_exception(descr=...) + p20 = new_with_vtable(ConstClass(W_IntObject)) + call(ConstClass(_ll_dict_setitem_lookup_done_trampoline), p13, p10, p20, i12, i17, descr=) + setfield_gc(p20, i5, descr=) + guard_no_exception(descr=...) + i23 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + guard_no_exception(descr=...) + i26 = int_and(i23, .*) + i27 = int_is_true(i26) + guard_false(i27, descr=...) + p28 = getfield_gc(p13, descr=) + p29 = getinteriorfield_gc(p28, i23, descr=>) + guard_nonnull_class(p29, ConstClass(W_IntObject), descr=...) + i31 = getfield_gc_pure(p29, descr=) + i32 = int_sub_ovf(i31, i5) + guard_no_overflow(descr=...) + i34 = int_add_ovf(i32, 1) + guard_no_overflow(descr=...) + i35 = int_add_ovf(i5, i34) + guard_no_overflow(descr=...) + --TICK-- + jump(p0, p1, p2, p3, p4, i35, p13, i7, descr=) """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -245,6 +245,9 @@ if sys.platform != 'win32': @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) pytime.sleep(secs) else: from pypy.rlib import rwin32 @@ -265,6 +268,9 @@ OSError(EINTR, "sleep() interrupted")) @unwrap_spec(secs=float) def sleep(space, secs): + if secs < 0: + raise OperationError(space.w_IOError, + space.wrap("Invalid argument: negative time in sleep")) # as decreed by Guido, only the main thread can be # interrupted. main_thread = space.fromcache(State).main_thread diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -20,8 +20,9 @@ import sys import os raises(TypeError, rctime.sleep, "foo") - rctime.sleep(1.2345) - + rctime.sleep(0.12345) + raises(IOError, rctime.sleep, -1.0) + def test_clock(self): import time as rctime rctime.clock() diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,23 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) - length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 - return start, stop, length - def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) + bytearray = w_bytearray.data + length = len(bytearray) + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -546,6 +546,12 @@ # Try to return int. return space.newtuple([space.int(w_num), space.int(w_den)]) +def float_is_integer__Float(space, w_float): + v = w_float.floatval + if not rfloat.isfinite(v): + return space.w_False + return space.wrap(math.floor(v) == v) + from pypy.objspace.std import floattype register_all(vars(), floattype) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -12,6 +12,7 @@ float_as_integer_ratio = SMM("as_integer_ratio", 1) +float_is_integer = SMM("is_integer", 1) float_hex = SMM("hex", 1) def descr_conjugate(space, w_float): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + pass + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] @@ -245,7 +248,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + pass + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + pass + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): @@ -54,7 +57,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: @@ -414,8 +422,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + pass + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -69,19 +69,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -89,7 +81,6 @@ from pypy.objspace.std import iterobject from pypy.objspace.std import unicodeobject from pypy.objspace.std import dictproxyobject - from pypy.objspace.std import rangeobject from pypy.objspace.std import proxyobject from pypy.objspace.std import fake import pypy.objspace.std.default # register a few catch-all multimethods @@ -141,7 +132,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -167,6 +163,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -189,6 +186,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -220,7 +218,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -230,6 +230,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -237,6 +238,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -244,6 +246,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -251,11 +254,13 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withrangelist: + from pypy.objspace.std import rangeobject self.typeorder[rangeobject.W_RangeListObject] += [ (listobject.W_ListObject, rangeobject.delegate_range2list), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,11 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: - # W_Root, AnyXxx and actual object - self.gettypefor(type).interplevel_cls = classes[0][0] - + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -413,7 +409,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -421,7 +417,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): @@ -579,7 +576,7 @@ raise OperationError(self.w_TypeError, self.wrap("need type object")) if is_annotation_constant(w_type): - cls = w_type.interplevel_cls + cls = self._get_interplevel_cls(w_type) if cls is not None: assert w_inst is not None if isinstance(w_inst, cls): @@ -589,3 +586,66 @@ @specialize.arg_or_var(2) def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + + @specialize.memo() + def _get_interplevel_cls(self, w_type): + if not hasattr(self, "_interplevel_classes"): + return None # before running initialize + return self._interplevel_classes.get(w_type, None) diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,14 +6,15 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint from pypy.rlib.rarithmetic import r_uint from pypy.tool.sourcetools import func_with_new_name +from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef @@ -48,14 +49,36 @@ def delegate_SmallInt2Complex(space, w_small): return space.newcomplex(float(w_small.intval), 0.0) +def add__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval + w_b.intval) # cannot overflow + +def sub__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval - w_b.intval) # cannot overflow + +def floordiv__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval // w_b.intval) # cannot overflow + +div__SmallInt_SmallInt = floordiv__SmallInt_SmallInt + +def mod__SmallInt_SmallInt(space, w_a, w_b): + return wrapint(space, w_a.intval % w_b.intval) # cannot overflow + +def divmod__SmallInt_SmallInt(space, w_a, w_b): + w = wrapint(space, w_a.intval // w_b.intval) # cannot overflow + z = wrapint(space, w_a.intval % w_b.intval) + return space.newtuple([w, z]) + def copy_multimethods(ns): """Copy integer multimethods for small int.""" for name, func in intobject.__dict__.iteritems(): if "__Int" in name: new_name = name.replace("Int", "SmallInt") - # Copy the function, so the annotator specializes it for - # W_SmallIntObject. - ns[new_name] = func_with_new_name(func, new_name) + if new_name not in ns: + # Copy the function, so the annotator specializes it for + # W_SmallIntObject. + ns[new_name] = func = func_with_new_name(func, new_name, globals=ns) + else: + ns[name] = func ns["get_integer"] = ns["pos__SmallInt"] = ns["int__SmallInt"] ns["get_negint"] = ns["neg__SmallInt"] diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py From noreply at buildbot.pypy.org Mon Nov 14 10:53:46 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 10:53:46 +0100 (CET) Subject: [pypy-commit] pypy default: Fix annotation issues. Message-ID: <20111114095346.57A7A820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49391:3ed133d0ce83 Date: 2011-11-14 10:53 +0100 http://bitbucket.org/pypy/pypy/changeset/3ed133d0ce83/ Log: Fix annotation issues. diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -17,7 +17,7 @@ """ class W_AbstractIntObject(W_Object): - pass + __slots__ = () class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -5,7 +5,7 @@ class W_AbstractIterObject(W_Object): - pass + __slots__ = () class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -12,7 +12,7 @@ from pypy.interpreter.argument import Signature class W_AbstractListObject(W_Object): - pass + __slots__ = () class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rbigint import rbigint, SHIFT class W_AbstractLongObject(W_Object): - pass + __slots__ = () class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.formatting import mod_format class W_AbstractStringObject(W_Object): - pass + __slots__ = () class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -10,7 +10,7 @@ from pypy.rlib.debug import make_sure_not_resized class W_AbstractTupleObject(W_Object): - pass + __slots__ = () class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.stringtype import stringstartswith, stringendswith class W_AbstractUnicodeObject(W_Object): - pass + __slots__ = () class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef From noreply at buildbot.pypy.org Mon Nov 14 10:56:12 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 10:56:12 +0100 (CET) Subject: [pypy-commit] pypy default: a test and a fix Message-ID: <20111114095612.6309D820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49392:3cbd94863224 Date: 2011-11-14 10:53 +0100 http://bitbucket.org/pypy/pypy/changeset/3cbd94863224/ Log: a test and a fix diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + self.extra_libs), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) From noreply at buildbot.pypy.org Mon Nov 14 10:56:13 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 10:56:13 +0100 (CET) Subject: [pypy-commit] pypy default: use tuple Message-ID: <20111114095613.94430820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49393:654821ec2610 Date: 2011-11-14 10:54 +0100 http://bitbucket.org/pypy/pypy/changeset/654821ec2610/ Log: use tuple diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,7 +42,7 @@ so_prefixes = ('',) - extra_libs = [] + extra_libs = () def __init__(self, cc): if self.__class__ is Platform: @@ -183,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries + self.extra_libs) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -7,7 +7,7 @@ name = "linux" link_flags = ('-pthread',) - extra_libs = ['-lrt'] + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries) + self.extra_libs), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), From noreply at buildbot.pypy.org Mon Nov 14 10:56:14 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 10:56:14 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111114095614.C79E6820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49394:e36e879eb1fe Date: 2011-11-14 10:55 +0100 http://bitbucket.org/pypy/pypy/changeset/e36e879eb1fe/ Log: merge diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -17,7 +17,7 @@ """ class W_AbstractIntObject(W_Object): - pass + __slots__ = () class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -5,7 +5,7 @@ class W_AbstractIterObject(W_Object): - pass + __slots__ = () class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -12,7 +12,7 @@ from pypy.interpreter.argument import Signature class W_AbstractListObject(W_Object): - pass + __slots__ = () class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rbigint import rbigint, SHIFT class W_AbstractLongObject(W_Object): - pass + __slots__ = () class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.formatting import mod_format class W_AbstractStringObject(W_Object): - pass + __slots__ = () class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -10,7 +10,7 @@ from pypy.rlib.debug import make_sure_not_resized class W_AbstractTupleObject(W_Object): - pass + __slots__ = () class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.stringtype import stringstartswith, stringendswith class W_AbstractUnicodeObject(W_Object): - pass + __slots__ = () class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef From noreply at buildbot.pypy.org Mon Nov 14 10:57:11 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 10:57:11 +0100 (CET) Subject: [pypy-commit] pypy default: Skip an assert in a test that fails on Python 2.5. Message-ID: <20111114095711.35CD3820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49395:1931f105c85b Date: 2011-11-14 10:56 +0100 http://bitbucket.org/pypy/pypy/changeset/1931f105c85b/ Log: Skip an assert in a test that fails on Python 2.5. diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): From noreply at buildbot.pypy.org Mon Nov 14 10:57:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 10:57:12 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111114095712.8F67A820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49396:1be28eed2789 Date: 2011-11-14 10:56 +0100 http://bitbucket.org/pypy/pypy/changeset/1be28eed2789/ Log: merge heads diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) From noreply at buildbot.pypy.org Mon Nov 14 11:27:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 11:27:45 +0100 (CET) Subject: [pypy-commit] pypy default: skip on win32. Message-ID: <20111114102745.B7D4B820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49397:1368405e24e1 Date: 2011-11-14 11:27 +0100 http://bitbucket.org/pypy/pypy/changeset/1368405e24e1/ Log: skip on win32. diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") From noreply at buildbot.pypy.org Mon Nov 14 11:49:05 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 11:49:05 +0100 (CET) Subject: [pypy-commit] pypy default: This number of quotes suddenly stopped working on Windows, but Message-ID: <20111114104905.69303820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49398:335d74f2d6e3 Date: 2011-11-14 11:48 +0100 http://bitbucket.org/pypy/pypy/changeset/335d74f2d6e3/ Log: This number of quotes suddenly stopped working on Windows, but with one level of quotes less it seems to work. I have no clue and I don't really care enough. diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') From noreply at buildbot.pypy.org Mon Nov 14 12:34:14 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 14 Nov 2011 12:34:14 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: twaks Message-ID: <20111114113414.6A2C5820BE@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r3965:bba5526b4eb5 Date: 2011-11-14 12:34 +0100 http://bitbucket.org/pypy/extradoc/changeset/bba5526b4eb5/ Log: twaks diff --git a/blog/draft/2011-11-gborg-sprint-report.rst b/blog/draft/2011-11-gborg-sprint-report.rst --- a/blog/draft/2011-11-gborg-sprint-report.rst +++ b/blog/draft/2011-11-gborg-sprint-report.rst @@ -1,11 +1,11 @@ Gothenburg sprint report ========================= -In the past days, we have been busy hacking on PyPy at the Gothenburg sprint, +In the past week, we have been busy hacking on PyPy at the Gothenburg sprint, the second of this 2011. The sprint was hold at Laura's and Jacob's place, and here is a brief report of what happened. - +.. img:: 5x-cake.jpg In the first day we welcomed Mark Pearse, which was new to PyPy and at his first sprint. Mark worked the whole sprint at the new SpecialisedTuple_ @@ -66,7 +66,8 @@ producing a CPython extension module it produces a pure python modules based on ``ctypes``. More work is needed before it can be considered complete, but ``f2pypy`` is already able to produce a wrapper for BLAS which passes most of -the tests (although not all). +the tests under CPython, although there's still work left to get it working +for PyPy. .. _f2pypy: http://bitbucket.org/pypy/f2pypy From noreply at buildbot.pypy.org Mon Nov 14 12:38:11 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 12:38:11 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: latest version Message-ID: <20111114113811.46DA2820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3966:37cc0ee778a9 Date: 2011-11-14 12:37 +0100 http://bitbucket.org/pypy/extradoc/changeset/37cc0ee778a9/ Log: latest version diff --git a/talk/fscons2011/author.latex b/talk/fscons2011/author.latex --- a/talk/fscons2011/author.latex +++ b/talk/fscons2011/author.latex @@ -5,4 +5,4 @@ {Armin Rigo} \institute{FSCONS 2011} -\date{November 13 2011} +\date{November 13, 2011} diff --git a/talk/fscons2011/example/demo1.py b/talk/fscons2011/example/demo1.py new file mode 100644 --- /dev/null +++ b/talk/fscons2011/example/demo1.py @@ -0,0 +1,7 @@ + +def f(n): + print "running demo1..." + i = 0 + while i < n: + i = i + 1 + return i diff --git a/talk/fscons2011/example/demo2.py b/talk/fscons2011/example/demo2.py new file mode 100644 --- /dev/null +++ b/talk/fscons2011/example/demo2.py @@ -0,0 +1,11 @@ + +class A(object): + def __init__(self, value): + self.value = value + +def f(n): + print "running demo2..." + i = A(0) + while i.value < n: + i = A(i.value + 1) + return i diff --git a/talk/fscons2011/notes.txt b/talk/fscons2011/notes.txt new file mode 100644 --- /dev/null +++ b/talk/fscons2011/notes.txt @@ -0,0 +1,105 @@ + + +thanks Laura + + + +$ pypy + +differences: irc topics +prompt +(multi-line editing) + + +def f(n): + i = 0 + while i < n: + i = i + 1 + return i + + + +class A(object): + def __init__(self, value): + self.value = value + +def f(n): + a = A(0) + while a.value < n: + a = A(a.value + 1) + return a + + + +hack/3d/test6.py + + +...for maximum effect; +in truth less impressive +on usual programs +(but still) + + + + +gitdm + - data mining tool + - reads the output of + ``git log`` + - generate kernel dev. + statistics + - ...3x + +MyHDL: VHDL-like lang +written in Python +now competitive with +"real world" VHDL +and Verilog simulators + - 6 to 12 times faster + +Random largeish program: + - depends on 3rd-party C + extensions + - may try to install + them for pypy + - (pypy setup.py install) + - may or may not work + +ai: jumped 3x last week +but in general, constant +slow(?) progress + + + + +partially public. funded: + - EU, EU countries + - still open source + "mindset" at its core + +Open Source + - not GPL + - like CPython + - intense discussion of + about 15 seconds + + + + +Java or .NET + - large pieces of codes + no direct control + - designed and optimiz. + for some class of + languages != Python + + + + +CPython 2.7 "deprecated" + +PyPy 1.x won't be anytime +soon + +future: support both +PyPy (2.x) and PyPy3 diff --git a/talk/fscons2011/talk.rst b/talk/fscons2011/talk.rst --- a/talk/fscons2011/talk.rst +++ b/talk/fscons2011/talk.rst @@ -9,20 +9,19 @@ -------- +Speed +--------- + +.. image:: speed.png + :scale: 45% + :align: center + Speed --------- .. image:: progress.png - :scale: 40% - :align: center - - -Speed ---------- - -.. image:: speed.png - :scale: 40% + :scale: 50% :align: center @@ -61,7 +60,7 @@ * It is easy to implement a new language with PyPy -* Better suited to dynamic languages +* Suited for *dynamic* languages (preferrably) |pause| @@ -109,7 +108,7 @@ * Pyrolog, a Prolog interpreter, is fast too -* Haskell and a number of other experiments +* Haskell, GameBoy, ... |pause| @@ -121,7 +120,7 @@ * Tracing JIT Compiler -* Not unlike TraceMonkey for JavaScript in FireFox +* Not unlike TraceMonkey for JavaScript in Firefox * But two levels @@ -168,6 +167,38 @@ PyPy 1.x <------> PyPy3 1.x +PyPy's future? +-------------------- + +.. sourcecode:: plain + + CPython 2.7 -------> CPython 3.x + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + ^ written in C ^ + | | + | | + | | + V V + + PyPy 1.x <------> PyPy3 1.x + + +PyPy's future? +-------------------- + +.. sourcecode:: plain + + CPython 2.7 -------> CPython 3.x + + ^ ^ + | | + | | + | written in | + V Python 2.5-7 V + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + PyPy 1.x <------> PyPy3 1.x + + Contacts, Q/A -------------- From noreply at buildbot.pypy.org Mon Nov 14 12:38:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 12:38:12 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Typo Message-ID: <20111114113812.66FF3820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r3967:614c01f2ba2b Date: 2011-11-14 12:37 +0100 http://bitbucket.org/pypy/extradoc/changeset/614c01f2ba2b/ Log: Typo diff --git a/blog/draft/2011-11-gborg-sprint-report.rst b/blog/draft/2011-11-gborg-sprint-report.rst --- a/blog/draft/2011-11-gborg-sprint-report.rst +++ b/blog/draft/2011-11-gborg-sprint-report.rst @@ -51,7 +51,7 @@ .. _STM: http://bitbucket.org/pypy/pypy/changesets/tip/branch("stm") -Håkan, with some help from Armim, worked on the `jit-targets`_ branch, whose goal +Håkan, with some help from Armin, worked on the `jit-targets`_ branch, whose goal is to heavily refactor the way the traces are internally represented by the JIT, so that in the end we can produce (even :-)) better code than what we do nowadays. More details in this mail_. From noreply at buildbot.pypy.org Mon Nov 14 12:41:37 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 12:41:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix for tests Message-ID: <20111114114137.C8315820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49399:08426f22e5d5 Date: 2011-11-14 12:41 +0100 http://bitbucket.org/pypy/pypy/changeset/08426f22e5d5/ Log: fix for tests diff --git a/pypy/jit/codewriter/heaptracker.py b/pypy/jit/codewriter/heaptracker.py --- a/pypy/jit/codewriter/heaptracker.py +++ b/pypy/jit/codewriter/heaptracker.py @@ -89,7 +89,7 @@ except AttributeError: pass assert lltype.typeOf(vtable) == VTABLETYPE - if cpu._all_size_descrs_with_vtable is None: + if not hasattr(cpu, '_all_size_descrs_with_vtable') or cpu._all_size_descrs_with_vtable is None: cpu._all_size_descrs_with_vtable = [] cpu._vtable_to_descr_dict = None cpu._all_size_descrs_with_vtable.append(sizedescr) From noreply at buildbot.pypy.org Mon Nov 14 12:52:20 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 12:52:20 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: another fix for the ppc tests Message-ID: <20111114115220.08F9A820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49400:cdd52796a997 Date: 2011-11-14 12:52 +0100 http://bitbucket.org/pypy/pypy/changeset/cdd52796a997/ Log: another fix for the ppc tests diff --git a/pypy/jit/backend/x86/test/test_assembler.py b/pypy/jit/backend/x86/test/test_assembler.py --- a/pypy/jit/backend/x86/test/test_assembler.py +++ b/pypy/jit/backend/x86/test/test_assembler.py @@ -16,7 +16,9 @@ class FakeCPU: rtyper = None supports_floats = True - NUM_REGS = ACTUAL_CPU.NUM_REGS + + def __init__(self): + NUM_REGS = ACTUAL_CPU.NUM_REGS def fielddescrof(self, STRUCT, name): return 42 From noreply at buildbot.pypy.org Mon Nov 14 14:50:09 2011 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 14 Nov 2011 14:50:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: more test fixes Message-ID: <20111114135009.F3A0F820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49401:deb6836af9be Date: 2011-11-14 14:46 +0100 http://bitbucket.org/pypy/pypy/changeset/deb6836af9be/ Log: more test fixes diff --git a/pypy/jit/backend/test/test_frame_size.py b/pypy/jit/backend/test/test_frame_size.py --- a/pypy/jit/backend/test/test_frame_size.py +++ b/pypy/jit/backend/test/test_frame_size.py @@ -15,6 +15,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong +from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.rlib.rarithmetic import intmask from pypy.jit.backend.detect_cpu import getcpuclass @@ -31,7 +32,8 @@ F1PTR = lltype.Ptr(lltype.FuncType([lltype.Signed], lltype.Signed)) f1ptr = llhelper(F1PTR, f1) - f1_calldescr = cpu.calldescrof(F1PTR.TO, F1PTR.TO.ARGS, F1PTR.TO.RESULT) + f1_calldescr = cpu.calldescrof(F1PTR.TO, F1PTR.TO.ARGS, + F1PTR.TO.RESULT, EffectInfo.MOST_GENERAL) namespace = locals().copy() type_system = 'lltype' diff --git a/pypy/jit/codewriter/heaptracker.py b/pypy/jit/codewriter/heaptracker.py --- a/pypy/jit/codewriter/heaptracker.py +++ b/pypy/jit/codewriter/heaptracker.py @@ -97,7 +97,7 @@ def finish_registering(cpu): # annotation hack for small examples which have no vtable at all - if cpu._all_size_descrs_with_vtable is None: + if not hasattr(cpu, '_all_size_descrs_with_vtable') or cpu._all_size_descrs_with_vtable is None: vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True) register_known_gctype(cpu, vtable, rclass.OBJECT) From noreply at buildbot.pypy.org Mon Nov 14 15:16:06 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Mon, 14 Nov 2011 15:16:06 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merge Message-ID: <20111114141606.88205820BE@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: set-strategies Changeset: r49402:c3ed604fcfb5 Date: 2011-11-14 15:15 +0100 http://bitbucket.org/pypy/pypy/changeset/c3ed604fcfb5/ Log: merge diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + __slots__ = () + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + __slots__ = () + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + __slots__ = () + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + __slots__ = () + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -589,6 +584,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + __slots__ = () + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + __slots__ = () + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + __slots__ = () + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -338,15 +341,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -861,11 +862,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Mon Nov 14 15:47:46 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 14 Nov 2011 15:47:46 +0100 (CET) Subject: [pypy-commit] pypy default: Add a test for ed83fd7b7ec1. Message-ID: <20111114144746.548D6820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49403:913f736ff114 Date: 2011-11-14 15:47 +0100 http://bitbucket.org/pypy/pypy/changeset/913f736ff114/ Log: Add a test for ed83fd7b7ec1. diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) From noreply at buildbot.pypy.org Mon Nov 14 16:56:32 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 14 Nov 2011 16:56:32 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: refactor load_from_addr. clarify register usage in store_reg. Message-ID: <20111114155632.EF31A820BE@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49404:3db474e494aa Date: 2011-11-14 10:56 -0500 http://bitbucket.org/pypy/pypy/changeset/3db474e494aa/ Log: refactor load_from_addr. clarify register usage in store_reg. diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -950,19 +950,18 @@ self.ori(rD, rD, lo(word)) def load_from_addr(self, rD, addr): + self.load_imm(rD, addr) if IS_PPC_32: - self.load_imm(rD, addr) self.lwzx(rD.value, 0, rD.value) else: - self.load_imm(rD, addr) self.ldx(rD.value, 0, rD.value) def store_reg(self, source_reg, addr): self.load_imm(r.r0, addr) if IS_PPC_32: - self.stwx(source_reg.value, 0, 0) + self.stwx(source_reg.value, 0, r.r0.value) else: - self.stdx(source_reg.value, 0, 0) + self.stdx(source_reg.value, 0, r.r0.value) def b_cond_offset(self, offset, condition): pos = self.currpos() From noreply at buildbot.pypy.org Mon Nov 14 17:59:55 2011 From: noreply at buildbot.pypy.org (hager) Date: Mon, 14 Nov 2011 17:59:55 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Fixed assertion that made multiple tests fail due to side effect Message-ID: <20111114165955.3B708820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49405:a1d07117b3f1 Date: 2011-11-14 17:59 +0100 http://bitbucket.org/pypy/pypy/changeset/a1d07117b3f1/ Log: Fixed assertion that made multiple tests fail due to side effect diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -451,7 +451,6 @@ operations, self.current_clt.allgcrefs) self.mc = PPCBuilder() self.pending_guards = [] - assert self.datablockwrapper is None allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) @@ -588,6 +587,7 @@ self.current_clt = None self.mc = None self._regalloc = None + assert self.datablockwrapper is None def _walk_operations(self, operations, regalloc): self._regalloc = regalloc From noreply at buildbot.pypy.org Mon Nov 14 18:38:19 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 18:38:19 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: move to_str to the base class Message-ID: <20111114173819.AF9B0820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49406:9121efb3b83f Date: 2011-11-14 16:49 +0100 http://bitbucket.org/pypy/pypy/changeset/9121efb3b83f/ Log: move to_str to the base class diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -353,6 +353,48 @@ res.append(")") return space.wrap(res.build()) + def to_str(self, comma, builder, indent=' '): + dtype = self.find_dtype() + ndims = len(self.shape) + if ndims > 2: + builder.append('[') + builder.append("xxx") + # for i in range(self.shape[0]): + # smallerview = NDimSlice(self.parent, self.signature, + # [(i, 0, 0, 1)], self.shape[1:]) + # ret.append(smallerview.to_str(comma, indent=indent + ' ')) + # if i + 1 < self.shape[0]: + # ret.append(',\n\n' + indent) + ret.append(']') + elif ndims == 2: + ret.append('[') + for i in range(self.shape[0]): + ret.append('[') + spacer = ',' * comma + ' ' + ret.append(spacer.join(\ + [dtype.str_format(self.eval(i * self.shape[1] + j)) \ + for j in range(self.shape[1])])) + ret.append(']') + if i + 1 < self.shape[0]: + ret.append(',\n' + indent) + ret.append(']') + elif ndims == 1: + ret.append('[') + spacer = ',' * comma + ' ' + if self.shape[0] > 1000: + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(3)])) + ret.append(',' * comma + ' ..., ') + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0] - 3, self.shape[0])])) + else: + ret.append(spacer.join([dtype.str_format(self.eval(j)) \ + for j in range(self.shape[0])])) + ret.append(']') + else: + ret.append(dtype.str_format(self.eval(self.start))) + return ret.build() + def descr_str(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let @@ -778,52 +820,6 @@ def get_root_shape(self): return self.parent.get_root_shape() - def to_str(self, comma, indent=' '): - ret = StringBuilder() - dtype = self.find_dtype() - ndims = len(self.shape) - for s in self.shape: - if s == 0: - ret.append('[]') - return ret.build() - if ndims > 2: - ret.append('[') - for i in range(self.shape[0]): - smallerview = NDimSlice(self.parent, self.signature, - [(i, 0, 0, 1)], self.shape[1:]) - ret.append(smallerview.to_str(comma, indent=indent + ' ')) - if i + 1 < self.shape[0]: - ret.append(',\n\n' + indent) - ret.append(']') - elif ndims == 2: - ret.append('[') - for i in range(self.shape[0]): - ret.append('[') - spacer = ',' * comma + ' ' - ret.append(spacer.join(\ - [dtype.str_format(self.eval(i * self.shape[1] + j)) \ - for j in range(self.shape[1])])) - ret.append(']') - if i + 1 < self.shape[0]: - ret.append(',\n' + indent) - ret.append(']') - elif ndims == 1: - ret.append('[') - spacer = ',' * comma + ' ' - if self.shape[0] > 1000: - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(3)])) - ret.append(',' * comma + ' ..., ') - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0] - 3, self.shape[0])])) - else: - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0])])) - ret.append(']') - else: - ret.append(dtype.str_format(self.eval(self.start))) - return ret.build() - class NDimArray(BaseArray): """ A class representing contiguous array. We know that each iteration by say ufunc will increase the data index by one From noreply at buildbot.pypy.org Mon Nov 14 18:38:20 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 18:38:20 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: rpythonize a bit. Disable _immutable_fields_ for now Message-ID: <20111114173820.E50F882A88@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49407:c857554d0c6c Date: 2011-11-14 18:37 +0100 http://bitbucket.org/pypy/pypy/changeset/c857554d0c6c/ Log: rpythonize a bit. Disable _immutable_fields_ for now diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -208,11 +208,11 @@ def execute(self, interp): arr = interp.variables[self.name] - w_index = self.index.execute(interp).eval(0).wrap(interp.space) + w_index = self.index.execute(interp).eval(arr.start_iter()).wrap(interp.space) # cast to int if isinstance(w_index, FloatObject): w_index = IntObject(int(w_index.floatval)) - w_val = self.expr.execute(interp).eval(0).wrap(interp.space) + w_val = self.expr.execute(interp).eval(arr.start_iter()).wrap(interp.space) arr.descr_setitem(interp.space, w_index, w_val) def __repr__(self): @@ -249,7 +249,7 @@ w_res = w_lhs.descr_sub(interp.space, w_rhs) elif self.name == '->': if isinstance(w_rhs, Scalar): - w_rhs = w_rhs.eval(0).wrap(interp.space) + w_rhs = w_rhs.eval(w_rhs.start_iter()).wrap(interp.space) assert isinstance(w_rhs, FloatObject) w_rhs = IntObject(int(w_rhs.floatval)) w_res = w_lhs.descr_getitem(interp.space, w_rhs) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -9,7 +9,8 @@ from pypy.rlib.rstring import StringBuilder numpy_driver = jit.JitDriver(greens = ['signature'], - reds = ['result_size', 'i', 'self', 'result']) + reds = ['result_size', 'i', 'ri', 'self', + 'result']) all_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) slice_driver = jit.JitDriver(greens=['signature'], reds=['self', 'source', @@ -162,7 +163,7 @@ _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", "start"] - _immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] + #_immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] shards = None start = 0 @@ -521,9 +522,11 @@ backshards.append(self.shards[i] * lgt * step) start += self.shards[i] * start_ # add a reminder - shape += self.shape[i + 1:] - shards += self.shards[i + 1:] - backshards += self.backshards[i + 1:] + s = i + 1 + assert s >= 0 + shape += self.shape[s:] + shards += self.shards[s:] + backshards += self.backshards[s:] return NDimSlice(self, new_sig, start, shards, backshards, shape) def descr_mean(self, space): @@ -627,7 +630,7 @@ ri = result.start_iter() while not ri.done(): numpy_driver.jit_merge_point(signature=signature, - result_size=result_size, i=i, + result_size=result_size, i=i, ri=ri, self=self, result=result) result.dtype.setitem(result.storage, ri.offset, self.eval(i)) i.next() @@ -770,7 +773,7 @@ class NDimSlice(ViewArray): signature = signature.BaseSignature() - _immutable_fields_ = ['shape[*]', 'shards[*]', 'backshards[*]', 'start'] + #_immutable_fields_ = ['shape[*]', 'shards[*]', 'backshards[*]', 'start'] def __init__(self, parent, signature, start, shards, backshards, shape): diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -47,7 +47,7 @@ interp = InterpreterState(codes[i]) interp.run(space) res = interp.results[-1] - w_res = res.eval(0).wrap(interp.space) + w_res = res.eval(res.start_iter()).wrap(interp.space) if isinstance(w_res, BoolObject): return float(w_res.boolval) elif isinstance(w_res, FloatObject): From noreply at buildbot.pypy.org Mon Nov 14 18:50:40 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 18:50:40 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: Make array iterators a once-off immutable things. Message-ID: <20111114175040.1F8BF820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49408:14f8da6a95bf Date: 2011-11-14 18:50 +0100 http://bitbucket.org/pypy/pypy/changeset/14f8da6a95bf/ Log: Make array iterators a once-off immutable things. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -80,12 +80,12 @@ raise NotImplementedError class ArrayIterator(BaseIterator): - def __init__(self, size): - self.offset = 0 + def __init__(self, size, offset=0): + self.offset = offset self.size = size def next(self): - self.offset += 1 + return ArrayIterator(self.size, self.offset + 1) def done(self): return self.offset >= self.size @@ -94,24 +94,32 @@ return self.offset class ViewIterator(BaseIterator): - def __init__(self, arr): - self.indices = [0] * len(arr.shape) - self.offset = arr.start - self.arr = arr - self._done = False + def __init__(self, arr, offset=0, indices=None, done=False): + if indices is None: + self.indices = [0] * len(arr.shape) + self.offset = arr.start + else: + self.offset = offset + self.indices = indices + self.arr = arr + self._done = done @jit.unroll_safe def next(self): + indices = self.indices[:] + done = False + offset = self.offset for i in range(len(self.indices)): - if self.indices[i] < self.arr.shape[i] - 1: - self.indices[i] += 1 - self.offset += self.arr.shards[i] + if indices[i] < self.arr.shape[i] - 1: + indices[i] += 1 + offset += self.arr.shards[i] break else: - self.indices[i] = 0 - self.offset -= self.arr.backshards[i] + indices[i] = 0 + offset -= self.arr.backshards[i] else: - self._done = True + done = True + return ViewIterator(self.arr, offset, indices, done) def done(self): return self._done @@ -125,8 +133,7 @@ self.right = right def next(self): - self.left.next() - self.right.next() + return Call2Iterator(self.left.next(), self.right.next()) def done(self): return self.left.done() or self.right.done() @@ -141,7 +148,7 @@ self.child = child def next(self): - self.child.next() + return Call1Iterator(self.child.next()) def done(self): return self.child.done() @@ -151,7 +158,7 @@ class ConstantIterator(BaseIterator): def next(self): - pass + return self def done(self): return False @@ -268,7 +275,7 @@ if dtype.ne(new_best, cur_best): result = i.get_offset() cur_best = new_best - i.next() + i = i.next() return result def impl(self, space): size = self.find_size() @@ -286,7 +293,7 @@ all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, i=i) if not dtype.bool(self.eval(i)): return False - i.next() + i = i.next() return True def descr_all(self, space): return space.wrap(self._all()) @@ -299,7 +306,7 @@ dtype=dtype, i=i) if dtype.bool(self.eval(i)): return True - i.next() + i = i.next() return False def descr_any(self, space): return space.wrap(self._any()) @@ -633,8 +640,8 @@ result_size=result_size, i=i, ri=ri, self=self, result=result) result.dtype.setitem(result.storage, ri.offset, self.eval(i)) - i.next() - ri.next() + i = i.next() + ri = ri.next() return result def force_if_needed(self): @@ -811,8 +818,8 @@ source_iter=source_iter) self.setitem(res_iter.offset, source.eval(source_iter).convert_to( self.find_dtype())) - source_iter.next() - res_iter.next() + source_iter = source_iter.next() + res_iter = res_iter.next() def start_iter(self): return ViewIterator(self) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -76,7 +76,7 @@ value=value, obj=obj, i=i, dtype=dtype) value = self.func(dtype, value, obj.eval(i).convert_to(dtype)) - i.next() + i = i.next() return value class W_Ufunc1(W_Ufunc): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -29,21 +29,21 @@ def test_create_slice(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) - s = a._create_slice(space, space.wrap(3)) + s = a.create_slice(space, space.wrap(3)) assert s.start == 45 assert s.shards == [3, 1] assert s.backshards == [12, 2] - s = a._create_slice(space, self.newslice(1, 9, 2)) + s = a.create_slice(space, self.newslice(1, 9, 2)) assert s.start == 15 assert s.shards == [30, 3, 1] assert s.backshards == [120, 12, 2] - s = a._create_slice(space, space.newtuple([ + s = a.create_slice(space, space.newtuple([ self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) assert s.start == 19 assert s.shape == [2, 1] assert s.shards == [45, 3] assert s.backshards == [90, 3] - s = a._create_slice(space, self.newtuple( + s = a.create_slice(space, self.newtuple( self.newslice(None, None, None), space.wrap(2))) assert s.start == 6 assert s.shape == [10, 3] @@ -51,16 +51,16 @@ def test_slice_of_slice(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) - s = a._create_slice(space, space.wrap(5)) + s = a.create_slice(space, space.wrap(5)) assert s.start == 15*5 - s2 = s._create_slice(space, space.wrap(3)) + s2 = s.create_slice(space, space.wrap(3)) assert s2.shape == [3] assert s2.shards == [1] assert s2.parent is a assert s2.backshards == [2] assert s2.start == 5*15 + 3*3 - s = a._create_slice(space, self.newslice(1, 5, 3)) - s2 = s._create_slice(space, space.newtuple([ + s = a.create_slice(space, self.newslice(1, 5, 3)) + s2 = s.create_slice(space, space.newtuple([ self.newslice(None, None, None), space.wrap(2)])) assert s2.shape == [2, 3] assert s2.shards == [45, 1] @@ -70,7 +70,7 @@ def test_negative_step(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) - s = a._create_slice(space, self.newslice(None, None, -2)) + s = a.create_slice(space, self.newslice(None, None, -2)) assert s.start == 135 assert s.shards == [-30, 3, 1] assert s.backshards == [-150, 12, 2] @@ -79,7 +79,7 @@ a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a._create_slice(self.space, self.newtuple( + s = a.create_slice(self.space, self.newtuple( self.newslice(None, None, None), 2)) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -254,8 +254,10 @@ result = self.run('multidim') assert result == 8 self.check_loops({'float_add': 1, 'getarrayitem_raw': 2, - 'guard_true': 1, 'int_add': 1, 'int_lt': 1, + 'guard_false': 1, 'int_add': 3, 'int_ge': 1, 'jump': 1, 'setarrayitem_raw': 1}) + # int_add might be 1 here if we try slightly harder with + # reusing indexes or some optimization def define_multidim_slice(): return """ From noreply at buildbot.pypy.org Mon Nov 14 18:59:35 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 18:59:35 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: one more test and make the other test failing without problems with repr Message-ID: <20111114175935.D8125820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49409:ba1a3becc049 Date: 2011-11-14 18:59 +0100 http://bitbucket.org/pypy/pypy/changeset/ba1a3becc049/ Log: one more test and make the other test failing without problems with repr diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -764,7 +764,15 @@ from numpy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) - assert (b == [[-1, -2], [-3, -4]]).all() + res = (b == [[-1, -2], [-3, -4]]).all() + assert res + + def test_getitem_3(self): + from numpy import array + a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) + b = a[::2] + c = b + b + assert c[1][1] == 16 def test_broadcast(self): skip("not working") From noreply at buildbot.pypy.org Mon Nov 14 19:20:39 2011 From: noreply at buildbot.pypy.org (hager) Date: Mon, 14 Nov 2011 19:20:39 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented COPYSTRCONTENT Message-ID: <20111114182039.DE503820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49410:c76ce86e60ff Date: 2011-11-14 19:20 +0100 http://bitbucket.org/pypy/pypy/changeset/c76ce86e60ff/ Log: Implemented COPYSTRCONTENT diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -10,6 +10,11 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.jit.backend.ppc.ppcgen.helper.assembler import count_reg_args from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout +from pypy.jit.backend.ppc.ppcgen.regalloc import TempPtr +from pypy.jit.backend.llsupport import symbolic +from pypy.rpython.lltypesystem import rstr + +NO_FORCE_INDEX = -1 class GuardToken(object): def __init__(self, descr, failargs, faillocs, offset, fcond=c.NE, @@ -452,6 +457,91 @@ self.mc.add(base_loc.value, base_loc.value, ofs_loc.value) self.mc.stb(value_loc.value, base_loc.value, basesize.value) + #from ../x86/regalloc.py:928 ff. + def emit_copystrcontent(self, op, arglocs, regalloc): + assert len(arglocs) == 0 + self._emit_copystrcontent(op, regalloc, is_unicode=False) + + def _emit_copystrcontent(self, op, regalloc, is_unicode): + # compute the source address + args = list(op.getarglist()) + base_loc, box = regalloc._ensure_value_is_boxed(args[0], args) + args.append(box) + ofs_loc, box = regalloc._ensure_value_is_boxed(args[2], args) + args.append(box) + assert args[0] is not args[1] # forbidden case of aliasing + regalloc.possibly_free_var(args[0]) + if args[3] is not args[2] is not args[4]: # MESS MESS MESS: don't free + regalloc.possibly_free_var(args[2]) # it if ==args[3] or args[4] + srcaddr_box = TempPtr() + forbidden_vars = [args[1], args[3], args[4], srcaddr_box] + srcaddr_loc = regalloc.force_allocate_reg(srcaddr_box) + self._gen_address_inside_string(base_loc, ofs_loc, srcaddr_loc, + is_unicode=is_unicode) + + # compute the destination address + forbidden_vars = [args[4], args[3], srcaddr_box] + dstaddr_box = TempPtr() + dstaddr_loc = regalloc.force_allocate_reg(dstaddr_box) + forbidden_vars.append(dstaddr_box) + base_loc, box = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) + args.append(box) + forbidden_vars.append(box) + ofs_loc, box = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) + args.append(box) + assert base_loc.is_reg() + assert ofs_loc.is_reg() + regalloc.possibly_free_var(args[1]) + if args[3] is not args[4]: # more of the MESS described above + regalloc.possibly_free_var(args[3]) + self._gen_address_inside_string(base_loc, ofs_loc, dstaddr_loc, + is_unicode=is_unicode) + + # compute the length in bytes + forbidden_vars = [srcaddr_box, dstaddr_box] + length_loc, length_box = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) + args.append(length_box) + if is_unicode: + assert 0, "not implemented yet" + # call memcpy() + self._emit_call(NO_FORCE_INDEX, self.memcpy_addr, + [dstaddr_box, srcaddr_box, length_box], regalloc) + + regalloc.possibly_free_vars(args) + regalloc.possibly_free_var(length_box) + regalloc.possibly_free_var(dstaddr_box) + regalloc.possibly_free_var(srcaddr_box) + + def _gen_address_inside_string(self, baseloc, ofsloc, resloc, is_unicode): + cpu = self.cpu + if is_unicode: + ofs_items, _, _ = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + scale = self._get_unicode_item_scale() + else: + ofs_items, itemsize, _ = symbolic.get_array_token(rstr.STR, + self.cpu.translate_support_code) + assert itemsize == 1 + scale = 0 + self._gen_address(ofsloc, ofs_items, scale, resloc, baseloc) + + def _gen_address(self, sizereg, baseofs, scale, result, baseloc=None): + assert sizereg.is_reg() + if scale > 0: + scaled_loc = r.r0 + if IS_PPC_32: + self.mc.slwi(scaled_loc.value, sizereg.value, scale) + else: + self.mc.sldi(scaled_loc.value, sizereg.value, scale) + else: + scaled_loc = sizereg + if baseloc is not None: + assert baseloc.is_reg() + self.mc.add(result.value, baseloc.value, scaled_loc.value) + self.mc.addi(result.value, result.value, baseofs) + else: + self.mc.addi(result.value, scaled_loc.value, baseofs) + emit_unicodelen = emit_strlen # XXX 64 bit adjustment diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -225,6 +225,10 @@ # * P R E P A R E O P E R A T I O N S * # ****************************************************** + + def void(self, op): + return [] + prepare_int_add = prepare_binary_int_op_with_imm() prepare_int_sub = prepare_binary_int_op_with_imm() prepare_int_floordiv = prepare_binary_int_op_with_imm() @@ -528,6 +532,8 @@ assert itemsize == 1 return [value_loc, base_loc, ofs_loc, imm(basesize)] + prepare_copystrcontent = void + def prepare_unicodelen(self, op): l0, box = self._ensure_value_is_boxed(op.getarg(0)) boxes = [box] @@ -606,9 +612,6 @@ args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] return args - def void(self, op): - return [] - prepare_debug_merge_point = void prepare_jit_debug = void From noreply at buildbot.pypy.org Mon Nov 14 19:26:37 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 19:26:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: provide some sort of descr_repr (a broken one) and a fix Message-ID: <20111114182637.91831820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49411:8521a920ed05 Date: 2011-11-14 19:26 +0100 http://bitbucket.org/pypy/pypy/changeset/8521a920ed05/ Log: provide some sort of descr_repr (a broken one) and a fix diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -342,8 +342,19 @@ # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, # use recursive calls to to_str() to do the work. + res = StringBuilder() concrete = self.get_concrete() - res = StringBuilder() + i = concrete.start_iter() + start = True + while not i.done(): + if start: + start = False + else: + res.append(", ") + res.append(concrete.dtype.str_format(concrete.eval(i))) + i = i.next() + return space.wrap(res.build()) + res.append("array(") #This is for numpy compliance: an empty slice reports its shape if not concrete.find_size(): @@ -651,7 +662,7 @@ def get_concrete(self): self.force_if_needed() - return self.forced_result + return self.forced_result def eval(self, iter): if self.forced_result is not None: @@ -698,6 +709,8 @@ return call_sig.func(self.res_dtype, val) def start_iter(self): + if self.forced_result is not None: + return self.forced_result.start_iter() return Call1Iterator(self.values.start_iter()) class Call2(VirtualArray): @@ -722,6 +735,8 @@ return self.right.find_size() def start_iter(self): + if self.forced_result is not None: + return self.forced_result.start_iter() return Call2Iterator(self.left.start_iter(), self.right.start_iter()) def _eval(self, iter): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -764,8 +764,7 @@ from numpy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) - res = (b == [[-1, -2], [-3, -4]]).all() - assert res + assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): from numpy import array From noreply at buildbot.pypy.org Mon Nov 14 19:36:32 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 14 Nov 2011 19:36:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: some partial fixes, will continue later Message-ID: <20111114183632.642D5820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49412:f7cb08c0396d Date: 2011-11-14 19:36 +0100 http://bitbucket.org/pypy/pypy/changeset/f7cb08c0396d/ Log: some partial fixes, will continue later diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -346,12 +346,13 @@ concrete = self.get_concrete() i = concrete.start_iter() start = True + dtype = concrete.find_dtype() while not i.done(): if start: start = False else: res.append(", ") - res.append(concrete.dtype.str_format(concrete.eval(i))) + res.append(dtype.str_format(concrete.eval(i))) i = i.next() return space.wrap(res.build()) @@ -522,7 +523,7 @@ else: shape = [lgt] + self.shape[1:] shards = [self.shards[0] * step] + self.shards[1:] - backshards = [lgt * self.shards[0] * step] + self.backshards[1:] + backshards = [(lgt - 1) * self.shards[0] * step] + self.backshards[1:] start *= self.shards[0] start += self.start else: @@ -537,7 +538,7 @@ if step != 0: shape.append(lgt) shards.append(self.shards[i] * step) - backshards.append(self.shards[i] * lgt * step) + backshards.append(self.shards[i] * (lgt - 1) * step) start += self.shards[i] * start_ # add a reminder s = i + 1 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -36,13 +36,13 @@ s = a.create_slice(space, self.newslice(1, 9, 2)) assert s.start == 15 assert s.shards == [30, 3, 1] - assert s.backshards == [120, 12, 2] + assert s.backshards == [90, 12, 2] s = a.create_slice(space, space.newtuple([ self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) assert s.start == 19 assert s.shape == [2, 1] assert s.shards == [45, 3] - assert s.backshards == [90, 3] + assert s.backshards == [45, 3] s = a.create_slice(space, self.newtuple( self.newslice(None, None, None), space.wrap(2))) assert s.start == 6 @@ -770,8 +770,9 @@ from numpy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] + assert (b == [[1, 2], [5, 6], [9, 10], [13, 14]]).all() c = b + b - assert c[1][1] == 16 + assert c[1][1] == 12 def test_broadcast(self): skip("not working") From noreply at buildbot.pypy.org Mon Nov 14 19:38:07 2011 From: noreply at buildbot.pypy.org (hager) Date: Mon, 14 Nov 2011 19:38:07 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented COPYUNICODECONTENT Message-ID: <20111114183807.70AE6820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49413:34e139792020 Date: 2011-11-14 19:37 +0100 http://bitbucket.org/pypy/pypy/changeset/34e139792020/ Log: Implemented COPYUNICODECONTENT diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -462,6 +462,10 @@ assert len(arglocs) == 0 self._emit_copystrcontent(op, regalloc, is_unicode=False) + def emit_copyunicodecontent(self, op, arglocs, regalloc): + assert len(arglocs) == 0 + self._emit_copystrcontent(op, regalloc, is_unicode=True) + def _emit_copystrcontent(self, op, regalloc, is_unicode): # compute the source address args = list(op.getarglist()) @@ -502,7 +506,18 @@ length_loc, length_box = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) args.append(length_box) if is_unicode: - assert 0, "not implemented yet" + forbidden_vars = [srcaddr_box, dstaddr_box] + bytes_box = TempPtr() + bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) + scale = self._get_unicode_item_scale() + assert length_loc.is_reg() + self.mc.li(r.r0.value, 1< Author: edelsohn Branch: ppc-jit-backend Changeset: r49414:3a6600bf032a Date: 2011-11-14 14:27 -0500 http://bitbucket.org/pypy/pypy/changeset/3a6600bf032a/ Log: setarrayitem and getarrayitem offsets are immediate values. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -367,11 +367,10 @@ value_loc, base_loc, ofs_loc, scale, ofs = arglocs if scale.value > 0: scale_loc = r.r0 - self.mc.load_imm(r.r0, scale.value) if IS_PPC_32: - self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + self.mc.slwi(r.r0.value, ofs_loc.value, scale.value) else: - self.mc.sld(r.r0.value, ofs_loc.value, r.r0.value) + self.mc.sldi(r.r0.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc @@ -396,11 +395,10 @@ res, base_loc, ofs_loc, scale, ofs = arglocs if scale.value > 0: scale_loc = r.r0 - self.mc.load_imm(r.r0, scale.value) if IS_PPC_32: - self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + self.mc.slwi(r.r0.value, ofs_loc.value, scale.value) else: - self.mc.sld(r.r0.value, ofs_loc.value, scale.value) + self.mc.sldi(r.r0.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc if ofs.value > 0: From noreply at buildbot.pypy.org Mon Nov 14 20:35:24 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 14 Nov 2011 20:35:24 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: setarrayitem and getarrayitem cannot add offset with addi. Message-ID: <20111114193524.4D727820BE@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49415:b6a18d1530bf Date: 2011-11-14 14:35 -0500 http://bitbucket.org/pypy/pypy/changeset/b6a18d1530bf/ Log: setarrayitem and getarrayitem cannot add offset with addi. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -374,8 +374,10 @@ else: scale_loc = ofs_loc + # add the base offset if ofs.value > 0: - self.mc.addi(r.r0.value, scale_loc.value, ofs.value) + #XXX cannot use addi because scale_loc may be r0 + self.mc.addic(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 if scale.value == 3: @@ -401,8 +403,11 @@ self.mc.sldi(r.r0.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc + + # add the base offset if ofs.value > 0: - self.mc.addi(r.r0.value, scale_loc.value, ofs.value) + #XXX cannot use addi because scale_loc may be r0 + self.mc.addic(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 if scale.value == 3: From noreply at buildbot.pypy.org Mon Nov 14 20:48:23 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Mon, 14 Nov 2011 20:48:23 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: translation fix Message-ID: <20111114194823.A7482820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49416:36b8365a8bba Date: 2011-11-14 20:47 +0100 http://bitbucket.org/pypy/pypy/changeset/36b8365a8bba/ Log: translation fix diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -112,6 +112,8 @@ self.export_state(stop_label) loop.operations.append(stop_label) else: + assert stop_label + assert start_label stop_target = stop_label.getdescr() start_target = start_label.getdescr() assert isinstance(stop_target, TargetToken) From noreply at buildbot.pypy.org Mon Nov 14 21:08:03 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 14 Nov 2011 21:08:03 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: added these to base model Message-ID: <20111114200803.B3F3A820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49417:46c3f314a1e7 Date: 2011-11-14 12:51 -0500 http://bitbucket.org/pypy/pypy/changeset/46c3f314a1e7/ Log: added these to base model diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -195,6 +195,15 @@ raise NotImplementedError @staticmethod + def interiorfielddescrof(A, fieldname): + raise NotImplementedError + + @staticmethod + def interiorfielddescrof_dynamic(offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + @staticmethod def arraydescrof(A): raise NotImplementedError From noreply at buildbot.pypy.org Mon Nov 14 21:08:04 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 14 Nov 2011 21:08:04 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: don't cache dynamic descrs, progress maybe Message-ID: <20111114200804.DBB87820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49418:99efe7320cf1 Date: 2011-11-14 15:07 -0500 http://bitbucket.org/pypy/pypy/changeset/99efe7320cf1/ Log: don't cache dynamic descrs, progress maybe diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -339,7 +339,7 @@ else: typeinfo = INT # we abuse the arg_types field to distinguish dynamic and static descrs - return self.getdescr(offset, typeinfo, arg_types='dynamic', name='', extrainfo=width) + return Descr(offset, typeinfo, arg_types='dynamic', name='', extrainfo=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] From noreply at buildbot.pypy.org Mon Nov 14 21:09:25 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 14 Nov 2011 21:09:25 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: remove the dupe Message-ID: <20111114200925.E6197820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49419:69c9793c54c0 Date: 2011-11-14 15:09 -0500 http://bitbucket.org/pypy/pypy/changeset/69c9793c54c0/ Log: remove the dupe diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -222,10 +222,6 @@ def typedescrof(TYPE): raise NotImplementedError - @staticmethod - def interiorfielddescrof(A, fieldname): - raise NotImplementedError - # ---------- the backend-dependent operations ---------- # lltype specific operations From noreply at buildbot.pypy.org Mon Nov 14 22:44:26 2011 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 14 Nov 2011 22:44:26 +0100 (CET) Subject: [pypy-commit] lang-scheme default: naming consistency Message-ID: <20111114214426.AAD7D820BE@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r11:84d83d4e7639 Date: 2011-11-14 22:44 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/84d83d4e7639/ Log: naming consistency diff --git a/scheme/execution.py b/scheme/execution.py --- a/scheme/execution.py +++ b/scheme/execution.py @@ -1,6 +1,6 @@ import scheme.object as ssobject -import scheme.syntax as procedure -import scheme.procedure as syntax +import scheme.syntax as syntax +import scheme.procedure as procedure import scheme.macro as macro from scheme.ssparser import parse import py From noreply at buildbot.pypy.org Mon Nov 14 23:27:27 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 14 Nov 2011 23:27:27 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: fixed clibffi.py, but win32.c must be replaced, and I don't know yes by what. Message-ID: <20111114222727.A1330820BE@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49420:bb40c355d38d Date: 2011-11-14 18:49 +0100 http://bitbucket.org/pypy/pypy/changeset/bb40c355d38d/ Log: fixed clibffi.py, but win32.c must be replaced, and I don't know yes by what. diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -5,7 +5,7 @@ from pypy.rpython.tool import rffi_platform from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.rarithmetic import intmask, r_uint +from pypy.rlib.rarithmetic import intmask, r_uint, is_emulated_long from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.rmmap import alloc from pypy.rlib.rdynload import dlopen, dlclose, dlsym, dlsym_byordinal @@ -27,6 +27,7 @@ _MSVC = platform.name == "msvc" _MINGW = platform.name == "mingw32" _WIN32 = _MSVC or _MINGW +_WIN64 = _WIN32 and is_emulated_long _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" @@ -139,7 +140,7 @@ FFI_OK = rffi_platform.ConstantInteger('FFI_OK') FFI_BAD_TYPEDEF = rffi_platform.ConstantInteger('FFI_BAD_TYPEDEF') FFI_DEFAULT_ABI = rffi_platform.ConstantInteger('FFI_DEFAULT_ABI') - if _WIN32: + if _WIN32 and not _WIN64: FFI_STDCALL = rffi_platform.ConstantInteger('FFI_STDCALL') FFI_TYPE_STRUCT = rffi_platform.ConstantInteger('FFI_TYPE_STRUCT') @@ -409,7 +410,7 @@ FUNCFLAG_USE_LASTERROR = 16 def get_call_conv(flags, from_jit): - if _WIN32 and (flags & FUNCFLAG_CDECL == 0): + if _WIN32 and not _WIN64 and (flags & FUNCFLAG_CDECL == 0): return FFI_STDCALL else: return FFI_DEFAULT_ABI From noreply at buildbot.pypy.org Mon Nov 14 23:27:29 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 14 Nov 2011 23:27:29 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: Merge with default Message-ID: <20111114222729.11AB4820BE@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49421:4b214639c276 Date: 2011-11-14 18:54 +0100 http://bitbucket.org/pypy/pypy/changeset/4b214639c276/ Log: Merge with default diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + __slots__ = () + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + __slots__ = () + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + __slots__ = () + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + __slots__ = () + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -592,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + __slots__ = () + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + __slots__ = () + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + __slots__ = () + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -31,6 +31,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -339,15 +342,38 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' - # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -179,6 +179,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1178,10 +1178,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -126,6 +126,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -865,11 +866,12 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if (not isinstance(tp, lltype.Primitive) or + tp in (FLOAT, DOUBLE) or + cast(lltype.SignedLongLong, cast(tp, -1)) < 0): unsigned = False else: - unsigned = False + unsigned = True return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Mon Nov 14 23:27:30 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 14 Nov 2011 23:27:30 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: added the win64.asm source from cpython Message-ID: <20111114222730.3B641820BE@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49422:0b37384fbaad Date: 2011-11-14 23:14 +0100 http://bitbucket.org/pypy/pypy/changeset/0b37384fbaad/ Log: added the win64.asm source from cpython diff --git a/pypy/translator/c/src/libffi_msvc/win64.asm b/pypy/translator/c/src/libffi_msvc/win64.asm new file mode 100644 --- /dev/null +++ b/pypy/translator/c/src/libffi_msvc/win64.asm @@ -0,0 +1,156 @@ +PUBLIC ffi_call_AMD64 + +EXTRN __chkstk:NEAR +EXTRN ffi_closure_SYSV:NEAR + +_TEXT SEGMENT + +;;; ffi_closure_OUTER will be called with these registers set: +;;; rax points to 'closure' +;;; r11 contains a bit mask that specifies which of the +;;; first four parameters are float or double +;;; +;;; It must move the parameters passed in registers to their stack location, +;;; call ffi_closure_SYSV for the actual work, then return the result. +;;; +ffi_closure_OUTER PROC FRAME + ;; save actual arguments to their stack space. + test r11, 1 + jne first_is_float + mov QWORD PTR [rsp+8], rcx + jmp second +first_is_float: + movlpd QWORD PTR [rsp+8], xmm0 + +second: + test r11, 2 + jne second_is_float + mov QWORD PTR [rsp+16], rdx + jmp third +second_is_float: + movlpd QWORD PTR [rsp+16], xmm1 + +third: + test r11, 4 + jne third_is_float + mov QWORD PTR [rsp+24], r8 + jmp forth +third_is_float: + movlpd QWORD PTR [rsp+24], xmm2 + +forth: + test r11, 8 + jne forth_is_float + mov QWORD PTR [rsp+32], r9 + jmp done +forth_is_float: + movlpd QWORD PTR [rsp+32], xmm3 + +done: +.ALLOCSTACK 40 + sub rsp, 40 +.ENDPROLOG + mov rcx, rax ; context is first parameter + mov rdx, rsp ; stack is second parameter + add rdx, 40 ; correct our own area + mov rax, ffi_closure_SYSV + call rax ; call the real closure function + ;; Here, code is missing that handles float return values + add rsp, 40 + movd xmm0, rax ; In case the closure returned a float. + ret 0 +ffi_closure_OUTER ENDP + + +;;; ffi_call_AMD64 + +stack$ = 0 +prepfunc$ = 32 +ecif$ = 40 +bytes$ = 48 +flags$ = 56 +rvalue$ = 64 +fn$ = 72 + +ffi_call_AMD64 PROC FRAME + + mov QWORD PTR [rsp+32], r9 + mov QWORD PTR [rsp+24], r8 + mov QWORD PTR [rsp+16], rdx + mov QWORD PTR [rsp+8], rcx +.PUSHREG rbp + push rbp +.ALLOCSTACK 48 + sub rsp, 48 ; 00000030H +.SETFRAME rbp, 32 + lea rbp, QWORD PTR [rsp+32] +.ENDPROLOG + + mov eax, DWORD PTR bytes$[rbp] + add rax, 15 + and rax, -16 + call __chkstk + sub rsp, rax + lea rax, QWORD PTR [rsp+32] + mov QWORD PTR stack$[rbp], rax + + mov rdx, QWORD PTR ecif$[rbp] + mov rcx, QWORD PTR stack$[rbp] + call QWORD PTR prepfunc$[rbp] + + mov rsp, QWORD PTR stack$[rbp] + + movlpd xmm3, QWORD PTR [rsp+24] + movd r9, xmm3 + + movlpd xmm2, QWORD PTR [rsp+16] + movd r8, xmm2 + + movlpd xmm1, QWORD PTR [rsp+8] + movd rdx, xmm1 + + movlpd xmm0, QWORD PTR [rsp] + movd rcx, xmm0 + + call QWORD PTR fn$[rbp] +ret_int$: + cmp DWORD PTR flags$[rbp], 1 ; FFI_TYPE_INT + jne ret_float$ + + mov rcx, QWORD PTR rvalue$[rbp] + mov DWORD PTR [rcx], eax + jmp SHORT ret_nothing$ + +ret_float$: + cmp DWORD PTR flags$[rbp], 2 ; FFI_TYPE_FLOAT + jne SHORT ret_double$ + + mov rax, QWORD PTR rvalue$[rbp] + movlpd QWORD PTR [rax], xmm0 + jmp SHORT ret_nothing$ + +ret_double$: + cmp DWORD PTR flags$[rbp], 3 ; FFI_TYPE_DOUBLE + jne SHORT ret_int64$ + + mov rax, QWORD PTR rvalue$[rbp] + movlpd QWORD PTR [rax], xmm0 + jmp SHORT ret_nothing$ + +ret_int64$: + cmp DWORD PTR flags$[rbp], 12 ; FFI_TYPE_SINT64 + jne ret_nothing$ + + mov rcx, QWORD PTR rvalue$[rbp] + mov QWORD PTR [rcx], rax + jmp SHORT ret_nothing$ + +ret_nothing$: + xor eax, eax + + lea rsp, QWORD PTR [rbp+16] + pop rbp + ret 0 +ffi_call_AMD64 ENDP +_TEXT ENDS +END From noreply at buildbot.pypy.org Mon Nov 14 23:27:31 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Mon, 14 Nov 2011 23:27:31 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: added win64.asm Message-ID: <20111114222731.63581820BE@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49423:e1834e63c531 Date: 2011-11-14 23:21 +0100 http://bitbucket.org/pypy/pypy/changeset/e1834e63c531/ Log: added win64.asm diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -120,6 +120,10 @@ ]) else: libffidir = py.path.local(pypydir).join('translator', 'c', 'src', 'libffi_msvc') + if not _WIN64: + asm_ifc = 'win32.c' + else: + asm_ifc = 'win64.asm' eci = ExternalCompilationInfo( includes = ['ffi.h', 'windows.h'], libraries = ['kernel32'], @@ -127,7 +131,7 @@ separate_module_sources = separate_module_sources, separate_module_files = [libffidir.join('ffi.c'), libffidir.join('prep_cif.c'), - libffidir.join('win32.c'), + libffidir.join(asm_ifc), libffidir.join('pypy_ffi.c'), ], export_symbols = ['ffi_call', 'ffi_prep_cif', 'ffi_prep_closure', From noreply at buildbot.pypy.org Tue Nov 15 00:56:49 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 15 Nov 2011 00:56:49 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: turn some parameters into const char* Message-ID: <20111114235649.C4EEF820BE@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49424:c074503990bb Date: 2011-11-15 00:46 +0100 http://bitbucket.org/pypy/pypy/changeset/c074503990bb/ Log: cpyext: turn some parameters into const char* diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; From noreply at buildbot.pypy.org Tue Nov 15 00:56:50 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 15 Nov 2011 00:56:50 +0100 (CET) Subject: [pypy-commit] pypy default: Fix usages of Py_InitModule outside of the module init() function. Message-ID: <20111114235650.EBDE0820BE@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49425:05ed38d2c537 Date: 2011-11-15 00:50 +0100 http://bitbucket.org/pypy/pypy/changeset/05ed38d2c537/ Log: Fix usages of Py_InitModule outside of the module init() function. SWIG for example uses it to share the type system between modules. diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) From noreply at buildbot.pypy.org Tue Nov 15 06:26:01 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 06:26:01 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: merged default in Message-ID: <20111115052601.E65AB820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49426:25fd786beb8d Date: 2011-11-14 15:26 -0500 http://bitbucket.org/pypy/pypy/changeset/25fd786beb8d/ Log: merged default in diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -17,7 +17,7 @@ """ class W_AbstractIntObject(W_Object): - pass + __slots__ = () class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -5,7 +5,7 @@ class W_AbstractIterObject(W_Object): - pass + __slots__ = () class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -12,7 +12,7 @@ from pypy.interpreter.argument import Signature class W_AbstractListObject(W_Object): - pass + __slots__ = () class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rbigint import rbigint, SHIFT class W_AbstractLongObject(W_Object): - pass + __slots__ = () class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.formatting import mod_format class W_AbstractStringObject(W_Object): - pass + __slots__ = () class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -10,7 +10,7 @@ from pypy.rlib.debug import make_sure_not_resized class W_AbstractTupleObject(W_Object): - pass + __slots__ = () class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.stringtype import stringstartswith, stringendswith class W_AbstractUnicodeObject(W_Object): - pass + __slots__ = () class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) From noreply at buildbot.pypy.org Tue Nov 15 06:26:03 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 06:26:03 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: make these not static Message-ID: <20111115052603.1A23C820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49427:fc596321cc9b Date: 2011-11-14 20:07 -0500 http://bitbucket.org/pypy/pypy/changeset/fc596321cc9b/ Log: make these not static diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,43 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def interiorfielddescrof(A, fieldname): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def interiorfielddescrof_dynamic(offset, width, fieldsize, is_pointer, + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, is_float, is_signed): raise NotImplementedError - @staticmethod - def arraydescrof(A): + def arraydescrof(self, A): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- From noreply at buildbot.pypy.org Tue Nov 15 06:26:04 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 06:26:04 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: translation-ish fix, except it still breaks Message-ID: <20111115052604.45AEE820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49428:4f3e2c9dda26 Date: 2011-11-15 00:25 -0500 http://bitbucket.org/pypy/pypy/changeset/4f3e2c9dda26/ Log: translation-ish fix, except it still breaks diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -326,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types, extrainfo): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -828,11 +828,11 @@ def op_getinteriorfield_raw(self, descr, array, index): if descr.typeinfo == REF: - return do_getinteriorfield_raw_ptr(array, index, descr.extrainfo, descr.ofs) + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) elif descr.typeinfo == INT: - return do_getinteriorfield_raw_int(array, index, descr.extrainfo, descr.ofs) + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) elif descr.typeinfo == FLOAT: - return do_getinteriorfield_raw_float(array, index, descr.extrainfo, descr.ofs) + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) else: raise NotImplementedError @@ -851,11 +851,11 @@ def op_setinteriorfield_raw(self, descr, array, index, newvalue): if descr.typeinfo == REF: - return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.extrainfo, descr.ofs) + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) elif descr.typeinfo == INT: - return do_setinteriorfield_raw_int(array, index, newvalue, descr.extrainfo, descr.ofs) + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) elif descr.typeinfo == FLOAT: - return do_setinteriorfield_raw_float(array, index, newvalue, descr.extrainfo, descr.ofs) + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) else: raise NotImplementedError diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types, descr.extrainfo) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,10 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, is_float, is_signed): @@ -339,7 +342,7 @@ else: typeinfo = INT # we abuse the arg_types field to distinguish dynamic and static descrs - return Descr(offset, typeinfo, arg_types='dynamic', name='', extrainfo=width) + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] From noreply at buildbot.pypy.org Tue Nov 15 08:21:48 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 08:21:48 +0100 (CET) Subject: [pypy-commit] pypy default: a failing test Message-ID: <20111115072148.A5B88820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49429:7a416e643dc0 Date: 2011-11-15 02:20 -0500 http://bitbucket.org/pypy/pypy/changeset/7a416e643dc0/ Log: a failing test diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,22 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + def f(i): + d = {} + if i: + t = (None, "abc") + d[t] = 3 + else: + t = ("abc", None) + d[t] = 4 + return d[t] + + res = self.interpret(f, [0]) + assert res == 4 + res = self.interpret(f, [1]) + assert res == 3 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) From noreply at buildbot.pypy.org Tue Nov 15 08:21:49 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 08:21:49 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20111115072149.E2E4B820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49430:cfb76a08edcb Date: 2011-11-15 02:21 -0500 http://bitbucket.org/pypy/pypy/changeset/cfb76a08edcb/ Log: merged upstream diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; From noreply at buildbot.pypy.org Tue Nov 15 08:39:25 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 08:39:25 +0100 (CET) Subject: [pypy-commit] pypy default: simplify failing test Message-ID: <20111115073925.682AD820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49431:bf8bf8a5810e Date: 2011-11-15 02:39 -0500 http://bitbucket.org/pypy/pypy/changeset/bf8bf8a5810e/ Log: simplify failing test diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -181,20 +181,17 @@ assert res1 != res2 def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash def f(i): - d = {} if i: t = (None, "abc") - d[t] = 3 else: t = ("abc", None) - d[t] = 4 - return d[t] + return compute_hash(t) - res = self.interpret(f, [0]) - assert res == 4 - res = self.interpret(f, [1]) - assert res == 3 + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 def test_tuple_to_list(self): def f(i, j): From noreply at buildbot.pypy.org Tue Nov 15 10:35:52 2011 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 15 Nov 2011 10:35:52 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add a helper method to the register manager to allocate a scratch register Message-ID: <20111115093552.063D4820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49432:d1fa57a9cf80 Date: 2011-11-15 10:34 +0100 http://bitbucket.org/pypy/pypy/changeset/d1fa57a9cf80/ Log: add a helper method to the register manager to allocate a scratch register diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -86,6 +86,16 @@ assert isinstance(c, ConstPtr) return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) + def allocate_scratch_reg(self, type=INT, selected_reg=None, forbidden_vars=None): + """Allocate a scratch register, possibly spilling a managed register. + This register is freed after emitting the current operation and can not + be spilled""" + box = TempBox() + reg = self.force_allocate_reg(box, + selected_reg=selected_reg, + forbidden_vars=forbidden_vars) + return reg, box + class PPCFrameManager(FrameManager): def __init__(self): FrameManager.__init__(self) @@ -170,6 +180,12 @@ return self.rm.force_allocate_reg(var, forbidden_vars, selected_reg, need_lower_byte) + def allocate_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT # XXX extend this once floats are supported + return self.rm.allocate_scratch_reg(type=type, + forbidden_vars=forbidden_vars, + selected_reg=selected_reg) + def _check_invariants(self): self.rm._check_invariants() @@ -458,7 +474,6 @@ def prepare_getarrayitem_gc(self, op): a0, a1 = boxes = list(op.getarglist()) _, scale, ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) - base_loc, base_box = self._ensure_value_is_boxed(a0, boxes) boxes.append(base_box) ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py b/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py @@ -0,0 +1,8 @@ +from pypy.jit.backend.ppc.ppcgen import regalloc, register + +class TestPPCRegisterManager(object): + def test_allocate_scratch_register(self): + rm = regalloc.PPCRegisterManager({}) + reg, box = rm.allocate_scratch_reg() + assert reg in register.MANAGED_REGS + assert rm.stays_alive(box) == False From noreply at buildbot.pypy.org Tue Nov 15 10:35:53 2011 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 15 Nov 2011 10:35:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: allocate and use a scratch register for get/set arrayitem in case the scale > 0 Message-ID: <20111115093553.54A5C820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r49433:7a670ce597e7 Date: 2011-11-15 10:35 +0100 http://bitbucket.org/pypy/pypy/changeset/7a670ce597e7/ Log: allocate and use a scratch register for get/set arrayitem in case the scale > 0 diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -364,20 +364,20 @@ self.mc.ld(res.value, base_loc.value, ofs.value) def emit_setarrayitem_gc(self, op, arglocs, regalloc): - value_loc, base_loc, ofs_loc, scale, ofs = arglocs + value_loc, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs if scale.value > 0: - scale_loc = r.r0 + scale_loc = scratch_reg if IS_PPC_32: - self.mc.slwi(r.r0.value, ofs_loc.value, scale.value) + self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: - self.mc.sldi(r.r0.value, ofs_loc.value, scale.value) + self.mc.sldi(scale_loc.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc # add the base offset if ofs.value > 0: - #XXX cannot use addi because scale_loc may be r0 - self.mc.addic(r.r0.value, scale_loc.value, ofs.value) + assert scale_loc is not r.r0 + self.mc.addi(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 if scale.value == 3: @@ -394,20 +394,20 @@ emit_setarrayitem_raw = emit_setarrayitem_gc def emit_getarrayitem_gc(self, op, arglocs, regalloc): - res, base_loc, ofs_loc, scale, ofs = arglocs + res, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs if scale.value > 0: - scale_loc = r.r0 + scale_loc = scratch_reg if IS_PPC_32: - self.mc.slwi(r.r0.value, ofs_loc.value, scale.value) + self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: - self.mc.sldi(r.r0.value, ofs_loc.value, scale.value) + self.mc.sldi(scale_loc.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc # add the base offset if ofs.value > 0: - #XXX cannot use addi because scale_loc may be r0 - self.mc.addic(r.r0.value, scale_loc.value, ofs.value) + assert scale_loc is not r.r0 + self.mc.addi(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 if scale.value == 3: diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -466,8 +466,13 @@ #XXX check if imm would be fine here value_loc, value_box = self._ensure_value_is_boxed(b2, boxes) boxes.append(value_box) + if scale > 0: + tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) + boxes.append(box) + else: + tmp = None self.possibly_free_vars(boxes) - return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] + return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] prepare_setarrayitem_raw = prepare_setarrayitem_gc @@ -478,10 +483,15 @@ boxes.append(base_box) ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) boxes.append(ofs_box) + if scale > 0: + tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) + boxes.append(box) + else: + tmp = None self.possibly_free_vars(boxes) res = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) - return [res, base_loc, ofs_loc, imm(scale), imm(ofs)] + return [res, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] prepare_getarrayitem_raw = prepare_getarrayitem_gc prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc From noreply at buildbot.pypy.org Tue Nov 15 12:59:15 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 12:59:15 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Added regalloc_push and regalloc_pop => test_jump passes Message-ID: <20111115115915.27963820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49434:2cc68464f249 Date: 2011-11-15 11:08 +0100 http://bitbucket.org/pypy/pypy/changeset/2cc68464f249/ Log: Added regalloc_push and regalloc_pop => test_jump passes diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -779,6 +779,52 @@ assert 0, "not supported location" assert 0, "not supported location" + def regalloc_push(self, loc): + """Pushes the value stored in loc to the stack + Can trash the current value of r0 when pushing a stack + loc""" + + if loc.is_stack(): + if loc.type != FLOAT: + scratch_reg = r.r0 + else: + assert 0, "not implemented yet" + self.regalloc_mov(loc, scratch_reg) + self.regalloc_push(scratch_reg) + elif loc.is_reg(): + self.mc.addi(r.SP.value, r.SP.value, -WORD) # decrease stack pointer + # push value + if IS_PPC_32: + self.mc.stw(loc.value, r.SP.value, 0) + else: + self.mc.std(loc.value, r.SP.value, 0) + elif loc.is_imm(): + assert 0, "not implemented yet" + elif loc.is_imm_float(): + assert 0, "not implemented yet" + else: + raise AssertionError('Trying to push an invalid location') + + def regalloc_pop(self, loc): + """Pops the value on top of the stack to loc. Can trash the current + value of r0 when popping to a stack loc""" + if loc.is_stack(): + if loc.type != FLOAT: + scratch_reg = r.r0 + else: + assert 0, "not implemented yet" + self.regalloc_pop(scratch_reg) + self.regalloc_mov(scratch_reg, loc) + elif loc.is_reg(): + # pop value + if IS_PPC_32: + self.mc.lwz(loc.value, r.SP.value, 0) + else: + self.mc.ld(loc.value, r.SP.value, 0) + self.mc.addi(r.SP.value, r.SP.value, WORD) # increase stack pointer + else: + raise AssertionError('Trying to pop to an invalid location') + def _ensure_result_bit_extension(self, resloc, size, signed): if size == 1: if not signed: #unsigned char From noreply at buildbot.pypy.org Tue Nov 15 12:59:16 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 12:59:16 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20111115115916.5574D820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49435:2a816abcb981 Date: 2011-11-15 12:58 +0100 http://bitbucket.org/pypy/pypy/changeset/2a816abcb981/ Log: merge diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -364,18 +364,19 @@ self.mc.ld(res.value, base_loc.value, ofs.value) def emit_setarrayitem_gc(self, op, arglocs, regalloc): - value_loc, base_loc, ofs_loc, scale, ofs = arglocs + value_loc, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs if scale.value > 0: - scale_loc = r.r0 - self.mc.load_imm(r.r0, scale.value) + scale_loc = scratch_reg if IS_PPC_32: - self.mc.slw(r.r0.value, ofs_loc.value, r.r0.value) + self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: - self.mc.sld(r.r0.value, ofs_loc.value, r.r0.value) + self.mc.sldi(scale_loc.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc + # add the base offset if ofs.value > 0: + assert scale_loc is not r.r0 self.mc.addi(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 @@ -393,17 +394,19 @@ emit_setarrayitem_raw = emit_setarrayitem_gc def emit_getarrayitem_gc(self, op, arglocs, regalloc): - res, base_loc, ofs_loc, scale, ofs = arglocs + res, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs if scale.value > 0: - scale_loc = r.r0 - self.mc.load_imm(r.r0, scale.value) + scale_loc = scratch_reg if IS_PPC_32: - self.mc.slw(r.r0.value, ofs_loc.value, scale.value) + self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: - self.mc.sld(r.r0.value, ofs_loc.value, scale.value) + self.mc.sldi(scale_loc.value, ofs_loc.value, scale.value) else: scale_loc = ofs_loc + + # add the base offset if ofs.value > 0: + assert scale_loc is not r.r0 self.mc.addi(r.r0.value, scale_loc.value, ofs.value) scale_loc = r.r0 diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -86,6 +86,16 @@ assert isinstance(c, ConstPtr) return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) + def allocate_scratch_reg(self, type=INT, selected_reg=None, forbidden_vars=None): + """Allocate a scratch register, possibly spilling a managed register. + This register is freed after emitting the current operation and can not + be spilled""" + box = TempBox() + reg = self.force_allocate_reg(box, + selected_reg=selected_reg, + forbidden_vars=forbidden_vars) + return reg, box + class PPCFrameManager(FrameManager): def __init__(self): FrameManager.__init__(self) @@ -170,6 +180,12 @@ return self.rm.force_allocate_reg(var, forbidden_vars, selected_reg, need_lower_byte) + def allocate_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT # XXX extend this once floats are supported + return self.rm.allocate_scratch_reg(type=type, + forbidden_vars=forbidden_vars, + selected_reg=selected_reg) + def _check_invariants(self): self.rm._check_invariants() @@ -450,23 +466,32 @@ #XXX check if imm would be fine here value_loc, value_box = self._ensure_value_is_boxed(b2, boxes) boxes.append(value_box) + if scale > 0: + tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) + boxes.append(box) + else: + tmp = None self.possibly_free_vars(boxes) - return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] + return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] prepare_setarrayitem_raw = prepare_setarrayitem_gc def prepare_getarrayitem_gc(self, op): a0, a1 = boxes = list(op.getarglist()) _, scale, ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) - base_loc, base_box = self._ensure_value_is_boxed(a0, boxes) boxes.append(base_box) ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) boxes.append(ofs_box) + if scale > 0: + tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) + boxes.append(box) + else: + tmp = None self.possibly_free_vars(boxes) res = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) - return [res, base_loc, ofs_loc, imm(scale), imm(ofs)] + return [res, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] prepare_getarrayitem_raw = prepare_getarrayitem_gc prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py b/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py @@ -0,0 +1,8 @@ +from pypy.jit.backend.ppc.ppcgen import regalloc, register + +class TestPPCRegisterManager(object): + def test_allocate_scratch_register(self): + rm = regalloc.PPCRegisterManager({}) + reg, box = rm.allocate_scratch_reg() + assert reg in register.MANAGED_REGS + assert rm.stays_alive(box) == False From noreply at buildbot.pypy.org Tue Nov 15 15:01:07 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 15:01:07 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Started implementation of NEW, first test passes Message-ID: <20111115140107.E5CF4820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49436:dd67675a28a9 Date: 2011-11-15 15:00 +0100 http://bitbucket.org/pypy/pypy/changeset/dd67675a28a9/ Log: Started implementation of NEW, first test passes diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -606,6 +606,10 @@ else: assert 0, itemsize.value + def emit_new(self, op, arglocs, regalloc): + # XXX do exception handling here! + pass + def emit_same_as(self, op, arglocs, regalloc): argloc, resloc = arglocs self.regalloc_mov(argloc, resloc) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -115,6 +115,7 @@ self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) self.fail_boxes_ptr = values_array(llmemory.GCREF, failargs_limit) self.mc = None + self.malloc_func_addr = 0 self.datablockwrapper = None self.memcpy_addr = 0 self.fail_boxes_count = 0 @@ -456,6 +457,10 @@ allblocks) def setup_once(self): + gc_ll_descr = self.cpu.gc_ll_descr + gc_ll_descr.initialize() + ll_new = gc_ll_descr.get_funcptr_for_new() + self.malloc_func_addr = rffi.cast(lltype.Signed, ll_new) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) self.setup_failure_recovery() self.exit_code_adr = self._gen_exit_path() @@ -857,6 +862,34 @@ assert gcrootmap.is_shadow_stack gcrootmap.write_callshape(mark, force_index) + def write_new_force_index(self): + # for shadowstack only: get a new, unused force_index number and + # write it to FORCE_INDEX_OFS. Used to record the call shape + # (i.e. where the GC pointers are in the stack) around a CALL + # instruction that doesn't already have a force_index. + gcrootmap = self.cpu.gc_ll_descr.gcrootmap + if gcrootmap and gcrootmap.is_shadow_stack: + clt = self.current_clt + force_index = clt.reserve_and_record_some_faildescr_index() + self._write_fail_index(force_index) + return force_index + else: + return 0 + + def _write_fail_index(self, fail_index): + self.mc.load_imm(r.r0.value, fail_index) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.SSP.value, 0) + else: + self.mc.std(r.r0.value, r.SSP.value, 0) + + def load(self, loc, value): + assert loc.is_reg() and value.is_imm() + if value.is_imm(): + self.mc.load_imm(loc, value.getint()) + elif value.is_imm_float(): + assert 0, "not implemented yet" + def notimplemented_op(self, op, arglocs, regalloc): raise NotImplementedError, op diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -627,6 +627,17 @@ prepare_cast_ptr_to_int = prepare_same_as prepare_cast_int_to_ptr = prepare_same_as + def prepare_new(self, op): + gc_ll_descr = self.assembler.cpu.gc_ll_descr + # XXX introduce the fastpath for malloc + arglocs = self._prepare_args_for_new_op(op.getdescr()) + force_index = self.assembler.write_new_force_index() + self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, + arglocs, self, result=op.result) + self.possibly_free_vars(arglocs) + self.possibly_free_var(op.result) + return [] + def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: @@ -641,6 +652,18 @@ prepare_debug_merge_point = void prepare_jit_debug = void + def _prepare_args_for_new_op(self, new_args): + gc_ll_descr = self.cpu.gc_ll_descr + args = gc_ll_descr.args_for_new(new_args) + arglocs = [] + for i in range(len(args)): + arg = args[i] + t = TempInt() + l = self.force_allocate_reg(t, selected_reg=r.MANAGED_REGS[i]) + self.assembler.load(l, imm(arg)) + arglocs.append(t) + return arglocs + # from ../x86/regalloc.py:791 def _unpack_fielddescr(self, fielddescr): assert isinstance(fielddescr, BaseFieldDescr) From noreply at buildbot.pypy.org Tue Nov 15 18:13:41 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 18:13:41 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use r0 as one-element stack Message-ID: <20111115171341.1130E820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49437:9b6289cbf90b Date: 2011-11-15 18:13 +0100 http://bitbucket.org/pypy/pypy/changeset/9b6289cbf90b/ Log: Use r0 as one-element stack diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -455,6 +455,7 @@ allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) + self.stack_in_use = False def setup_once(self): gc_ll_descr = self.cpu.gc_ll_descr @@ -593,6 +594,7 @@ self.mc = None self._regalloc = None assert self.datablockwrapper is None + self.stack_in_use = False def _walk_operations(self, operations, regalloc): self._regalloc = regalloc @@ -790,12 +792,12 @@ loc""" if loc.is_stack(): - if loc.type != FLOAT: - scratch_reg = r.r0 - else: + if loc.type == FLOAT: assert 0, "not implemented yet" - self.regalloc_mov(loc, scratch_reg) - self.regalloc_push(scratch_reg) + # XXX this code has to be verified + assert not self.stack_in_use + self.regalloc_mov(loc, r.r0) + self.stack_in_use = True elif loc.is_reg(): self.mc.addi(r.SP.value, r.SP.value, -WORD) # decrease stack pointer # push value @@ -814,12 +816,12 @@ """Pops the value on top of the stack to loc. Can trash the current value of r0 when popping to a stack loc""" if loc.is_stack(): - if loc.type != FLOAT: - scratch_reg = r.r0 - else: + if loc.type == FLOAT: assert 0, "not implemented yet" - self.regalloc_pop(scratch_reg) - self.regalloc_mov(scratch_reg, loc) + # XXX this code has to be verified + assert self.stack_in_use + self.regalloc_mov(r.r0, loc) + self.stack_in_use = False elif loc.is_reg(): # pop value if IS_PPC_32: From noreply at buildbot.pypy.org Tue Nov 15 18:26:39 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 18:26:39 +0100 (CET) Subject: [pypy-commit] pypy default: handle hashing a None rstr, fixes the test I checked in yesterday. Message-ID: <20111115172639.D8CFC820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49438:89c328e7b0fa Date: 2011-11-15 12:26 -0500 http://bitbucket.org/pypy/pypy/changeset/89c328e7b0fa/ Log: handle hashing a None rstr, fixes the test I checked in yesterday. diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): From noreply at buildbot.pypy.org Tue Nov 15 18:44:33 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 18:44:33 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: merged default in Message-ID: <20111115174433.F3962820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49439:9733b32ba514 Date: 2011-11-15 12:28 -0500 http://bitbucket.org/pypy/pypy/changeset/9733b32ba514/ Log: merged default in diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,19 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash + def f(i): + if i: + t = (None, "abc") + else: + t = ("abc", None) + return compute_hash(t) + + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) From noreply at buildbot.pypy.org Tue Nov 15 18:44:35 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 18:44:35 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: put tests in a sane subclass and only run the new ones in the x86 backend Message-ID: <20111115174435.34C7C82A88@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49440:ee7c71b7e412 Date: 2011-11-15 12:44 -0500 http://bitbucket.org/pypy/pypy/changeset/ee7c71b7e412/ Log: put tests in a sane subclass and only run the new ones in the x86 backend diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py --- a/pypy/jit/backend/x86/test/test_fficall.py +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -2,7 +2,7 @@ from pypy.jit.metainterp.test import test_fficall from pypy.jit.backend.x86.test.test_basic import Jit386Mixin -class TestFfiCall(Jit386Mixin, test_fficall.FfiCallTests): +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): # for the individual tests see # ====> ../../../metainterp/test/test_fficall.py supports_all = True diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -91,6 +91,7 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): def test_array_fields(self): myjitdriver = JitDriver( greens = [], @@ -148,9 +149,11 @@ }) - class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file From noreply at buildbot.pypy.org Tue Nov 15 19:02:35 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 19:02:35 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented NEW_ARRAY Message-ID: <20111115180235.4DE9F820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49441:b2ad6b915f48 Date: 2011-11-15 19:02 +0100 http://bitbucket.org/pypy/pypy/changeset/b2ad6b915f48/ Log: Implemented NEW_ARRAY diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -12,7 +12,7 @@ from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout from pypy.jit.backend.ppc.ppcgen.regalloc import TempPtr from pypy.jit.backend.llsupport import symbolic -from pypy.rpython.lltypesystem import rstr +from pypy.rpython.lltypesystem import rstr, rffi, lltype NO_FORCE_INDEX = -1 @@ -610,6 +610,15 @@ # XXX do exception handling here! pass + def emit_new_array(self, op, arglocs, regalloc): + # XXX handle memory errors + if len(arglocs) > 0: + value_loc, base_loc, ofs_length = arglocs + if IS_PPC_32: + self.mc.stw(value_loc.value, base_loc.value, ofs_length.value) + else: + self.mc.std(value_loc.value, base_loc.value, ofs_length.value) + def emit_same_as(self, op, arglocs, regalloc): argloc, resloc = arglocs self.regalloc_mov(argloc, resloc) @@ -771,3 +780,20 @@ def nop(self): self.mc.ori(0, 0, 0) + + # from: ../x86/regalloc.py:750 + # called from regalloc + # XXX kill this function at some point + def _regalloc_malloc_varsize(self, size, size_box, vloc, vbox, ofs_items_loc, regalloc, result): + if IS_PPC_32: + self.mc.mullw(size.value, size.value, vloc.value) + else: + self.mc.mulld(size.value, size.value, vloc.value) + if ofs_items_loc.is_imm(): + self.mc.addi(size.value, size.value, ofs_items_loc.value) + else: + self.mc.add(size.value, size.value, ofs_items_loc.value) + force_index = self.write_new_force_index() + regalloc.force_spill_var(vbox) + self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, + result=result) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -116,6 +116,7 @@ self.fail_boxes_ptr = values_array(llmemory.GCREF, failargs_limit) self.mc = None self.malloc_func_addr = 0 + self.malloc_array_func_addr = 0 self.datablockwrapper = None self.memcpy_addr = 0 self.fail_boxes_count = 0 @@ -462,6 +463,10 @@ gc_ll_descr.initialize() ll_new = gc_ll_descr.get_funcptr_for_new() self.malloc_func_addr = rffi.cast(lltype.Signed, ll_new) + if gc_ll_descr.get_funcptr_for_newarray is not None: + ll_new_array = gc_ll_descr.get_funcptr_for_newarray() + self.malloc_array_func_addr = rffi.cast(lltype.Signed, + ll_new_array) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) self.setup_failure_recovery() self.exit_code_adr = self._gen_exit_path() diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.codewriter.effectinfo import EffectInfo import pypy.jit.backend.ppc.ppcgen.register as r +from pypy.jit.codewriter import heaptracker class TempInt(TempBox): type = INT @@ -638,6 +639,27 @@ self.possibly_free_var(op.result) return [] + def prepare_new_array(self, op): + gc_ll_descr = self.cpu.gc_ll_descr + if gc_ll_descr.get_funcptr_for_newarray is not None: + # framework GC + box_num_elem = op.getarg(0) + if isinstance(box_num_elem, ConstInt): + num_elem = box_num_elem.value + # XXX implement fastpath for malloc + args = self.assembler.cpu.gc_ll_descr.args_for_new_array( + op.getdescr()) + argboxes = [ConstInt(x) for x in args] + argboxes.append(box_num_elem) + force_index = self.assembler.write_new_force_index() + self.assembler._emit_call(force_index, self.assembler.malloc_array_func_addr, + argboxes, self, result=op.result) + return [] + # boehm GC + itemsize, scale, basesize, ofs_length, _ = ( + self._unpack_arraydescr(op.getdescr())) + return self._malloc_varsize(basesize, ofs_length, itemsize, op) + def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: @@ -664,6 +686,32 @@ arglocs.append(t) return arglocs + def _malloc_varsize(self, ofs_items, ofs_length, itemsize, op): + v = op.getarg(0) + res_v = op.result + boxes = [v, res_v] + itemsize_box = ConstInt(itemsize) + ofs_items_box = ConstInt(ofs_items) + if _check_imm_arg(ofs_items_box): + ofs_items_loc = self.convert_to_imm(ofs_items_box) + else: + ofs_items_loc, ofs_items_box = self._ensure_value_is_boxed(ofs_items_box, boxes) + boxes.append(ofs_items_box) + vloc, vbox = self._ensure_value_is_boxed(v, [res_v]) + boxes.append(vbox) + size, size_box = self._ensure_value_is_boxed(itemsize_box, boxes) + boxes.append(size_box) + self.assembler._regalloc_malloc_varsize(size, size_box, + vloc, vbox, ofs_items_loc, self, res_v) + base_loc = self.make_sure_var_in_reg(res_v) + + value_loc, vbox = self._ensure_value_is_boxed(v, [res_v]) + boxes.append(vbox) + self.possibly_free_vars(boxes) + assert value_loc.is_reg() + assert base_loc.is_reg() + return [value_loc, base_loc, imm(ofs_length)] + # from ../x86/regalloc.py:791 def _unpack_fielddescr(self, fielddescr): assert isinstance(fielddescr, BaseFieldDescr) From noreply at buildbot.pypy.org Tue Nov 15 19:12:11 2011 From: noreply at buildbot.pypy.org (hager) Date: Tue, 15 Nov 2011 19:12:11 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented NEWSTR and NEWUNICODE Message-ID: <20111115181211.33666820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49442:991e2aba52b0 Date: 2011-11-15 19:11 +0100 http://bitbucket.org/pypy/pypy/changeset/991e2aba52b0/ Log: Implemented NEWSTR and NEWUNICODE diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -619,6 +619,9 @@ else: self.mc.std(value_loc.value, base_loc.value, ofs_length.value) + emit_newstr = emit_new_array + emit_newunicode = emit_new_array + def emit_same_as(self, op, arglocs, regalloc): argloc, resloc = arglocs self.regalloc_mov(argloc, resloc) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -117,6 +117,8 @@ self.mc = None self.malloc_func_addr = 0 self.malloc_array_func_addr = 0 + self.malloc_str_func_addr = 0 + self.malloc_unicode_func_addr = 0 self.datablockwrapper = None self.memcpy_addr = 0 self.fail_boxes_count = 0 @@ -467,6 +469,14 @@ ll_new_array = gc_ll_descr.get_funcptr_for_newarray() self.malloc_array_func_addr = rffi.cast(lltype.Signed, ll_new_array) + if gc_ll_descr.get_funcptr_for_newstr is not None: + ll_new_str = gc_ll_descr.get_funcptr_for_newstr() + self.malloc_str_func_addr = rffi.cast(lltype.Signed, + ll_new_str) + if gc_ll_descr.get_funcptr_for_newunicode is not None: + ll_new_unicode = gc_ll_descr.get_funcptr_for_newunicode() + self.malloc_unicode_func_addr = rffi.cast(lltype.Signed, + ll_new_unicode) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) self.setup_failure_recovery() self.exit_code_adr = self._gen_exit_path() diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -660,6 +660,35 @@ self._unpack_arraydescr(op.getdescr())) return self._malloc_varsize(basesize, ofs_length, itemsize, op) + def prepare_newstr(self, op): + gc_ll_descr = self.cpu.gc_ll_descr + if gc_ll_descr.get_funcptr_for_newstr is not None: + force_index = self.assembler.write_new_force_index() + self.assembler._emit_call(force_index, + self.assembler.malloc_str_func_addr, [op.getarg(0)], + self, op.result) + return [] + # boehm GC + ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR, + self.cpu.translate_support_code) + assert itemsize == 1 + return self._malloc_varsize(ofs_items, ofs, itemsize, op) + + def prepare_newunicode(self, op): + gc_ll_descr = self.cpu.gc_ll_descr + if gc_ll_descr.get_funcptr_for_newunicode is not None: + force_index = self.assembler.write_new_force_index() + self.assembler._emit_call(force_index, + self.assembler.malloc_unicode_func_addr, + [op.getarg(0)], self, op.result) + return [] + # boehm GC + ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + _, itemsize, _ = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + return self._malloc_varsize(ofs_items, ofs, itemsize, op) + def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: From noreply at buildbot.pypy.org Tue Nov 15 19:25:36 2011 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 15 Nov 2011 19:25:36 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Correct indentation of prepare_setarrayitem_raw. Message-ID: <20111115182536.0FB26820BE@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r49443:760676c29f43 Date: 2011-11-15 13:25 -0500 http://bitbucket.org/pypy/pypy/changeset/760676c29f43/ Log: Correct indentation of prepare_setarrayitem_raw. diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -475,7 +475,7 @@ self.possibly_free_vars(boxes) return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] - prepare_setarrayitem_raw = prepare_setarrayitem_gc + prepare_setarrayitem_raw = prepare_setarrayitem_gc def prepare_getarrayitem_gc(self, op): a0, a1 = boxes = list(op.getarglist()) From noreply at buildbot.pypy.org Tue Nov 15 20:10:04 2011 From: noreply at buildbot.pypy.org (pjenvey) Date: Tue, 15 Nov 2011 20:10:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: backout 21b2914fdb96 pending type.name switching to unicode Message-ID: <20111115191004.EF9E4820BE@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: py3k Changeset: r49444:706419ee2d49 Date: 2011-11-15 10:58 -0800 http://bitbucket.org/pypy/pypy/changeset/706419ee2d49/ Log: backout 21b2914fdb96 pending type.name switching to unicode diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -739,10 +739,10 @@ class A(object): pass assert repr(A) == "" - assert repr(type(type)) == "" - assert repr(complex) == "" - assert repr(property) == "" - assert repr(TypeError) == "" + assert repr(type(type)) == "" + assert repr(complex) == "" + assert repr(property) == "" + assert repr(TypeError) == "" def test_invalid_mro(self): class A(object): diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -518,10 +518,10 @@ def get_module_type_name(w_self): space = w_self.space w_mod = w_self.get_module() - if not space.isinstance_w(w_mod, space.w_unicode): + if not space.isinstance_w(w_mod, space.w_str): mod = 'builtins' else: - mod = space.unicode_w(w_mod) + mod = space.str_w(w_mod) if mod != 'builtins': return '%s.%s' % (mod, w_self.name) else: @@ -871,14 +871,19 @@ def repr__Type(space, w_obj): w_mod = w_obj.get_module() - if not space.isinstance_w(w_mod, space.w_unicode): + if not space.isinstance_w(w_mod, space.w_str): mod = None else: - mod = space.unicode_w(w_mod) - if mod is not None and mod != 'builtins': - return space.wrap("" % (mod, w_obj.name)) + mod = space.str_w(w_mod) + if (not w_obj.is_heaptype() or + (mod == '__builtin__' or mod == 'exceptions')): + kind = 'type' else: - return space.wrap("" % (w_obj.name)) + kind = 'class' + if mod is not None and mod !='builtins': + return space.wrap("<%s '%s.%s'>" % (kind, mod, w_obj.name)) + else: + return space.wrap("<%s '%s'>" % (kind, w_obj.name)) def getattr__Type_ANY(space, w_type, w_name): name = space.str_w(w_name) From noreply at buildbot.pypy.org Tue Nov 15 21:21:31 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 15 Nov 2011 21:21:31 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: translation fix Message-ID: <20111115202131.C4E03820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49445:520196bc50f9 Date: 2011-11-15 19:05 +0100 http://bitbucket.org/pypy/pypy/changeset/520196bc50f9/ Log: translation fix diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -134,7 +134,9 @@ optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) except InvalidLoop: return None - all_target_tokens = [part.operations[0].getdescr()] + target_token = part.operations[0].getdescr() + assert isinstance(target_token, TargetToken) + all_target_tokens = [target_token] loop = create_empty_loop(metainterp) loop.inputargs = part.inputargs @@ -149,7 +151,9 @@ [inliner.inline_op(h_ops[i]) for i in range(start, len(h_ops))] + \ [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jumpargs], None, descr=jitcell_token)] - all_target_tokens.append(part.operations[0].getdescr()) + target_token = part.operations[0].getdescr() + assert isinstance(target_token, TargetToken) + all_target_tokens.append(target_token) inputargs = jumpargs jumpargs = part.operations[-1].getarglist() From noreply at buildbot.pypy.org Tue Nov 15 23:01:16 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 23:01:16 +0100 (CET) Subject: [pypy-commit] pypy jit-dynamic-getarrayitem: close for merge Message-ID: <20111115220116.537E7820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-dynamic-getarrayitem Changeset: r49446:ec49334c3989 Date: 2011-11-15 17:00 -0500 http://bitbucket.org/pypy/pypy/changeset/ec49334c3989/ Log: close for merge From noreply at buildbot.pypy.org Tue Nov 15 23:01:18 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 23:01:18 +0100 (CET) Subject: [pypy-commit] pypy default: Merged jit-dynamic-getarrayitem. Added support for creating custom getarrayitems at jit-compile time. Steals some stuff from anto's ffistruct. Message-ID: <20111115220118.06724820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49447:bd871afa3feb Date: 2011-11-15 17:00 -0500 http://bitbucket.org/pypy/pypy/changeset/bd871afa3feb/ Log: Merged jit-dynamic-getarrayitem. Added support for creating custom getarrayitems at jit-compile time. Steals some stuff from anto's ffistruct. diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -325,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +826,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -838,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1403,6 +1424,14 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1508,14 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) + +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,22 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) + + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,38 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm @@ -1619,6 +1621,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1158,6 +1160,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1430,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -479,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,9 +30,6 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" -_LITTLE_ENDIAN = sys.byteorder == 'little' -_BIG_ENDIAN = sys.byteorder == 'big' - if _WIN32: from pypy.rlib import rwin32 @@ -213,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) @@ -341,38 +360,15 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # This is for primitive types. Note that the exact type of 'arg' may be - # different from the expected 'c_size'. To cope with that, we fall back - # to a byte-by-byte copy. + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - TP_size = rffi.sizeof(TP) - c_size = intmask(ffitp.c_size) - # if both types have the same size, we can directly write the - # value to the buffer - if c_size == TP_size: - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg - else: - # needs byte-by-byte copying. Make sure 'arg' is an integer type. - # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. - assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE - if TP_size <= rffi.sizeof(lltype.Signed): - arg = rffi.cast(lltype.Unsigned, arg) - else: - arg = rffi.cast(lltype.UnsignedLongLong, arg) - if _LITTLE_ENDIAN: - for i in range(c_size): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - elif _BIG_ENDIAN: - for i in range(c_size-1, -1, -1): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - else: - raise AssertionError + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -249,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -340,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() From noreply at buildbot.pypy.org Tue Nov 15 23:13:11 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 23:13:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default in Message-ID: <20111115221311.B3D64820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49448:0b6ec862fb6e Date: 2011-11-15 17:01 -0500 http://bitbucket.org/pypy/pypy/changeset/0b6ec862fb6e/ Log: merged default in diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -325,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +826,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -838,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1403,6 +1424,14 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1508,14 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) + +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,22 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) + + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,38 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm @@ -1619,6 +1621,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1158,6 +1160,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1430,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -479,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + __slots__ = () + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + __slots__ = () + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + __slots__ = () + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + __slots__ = () + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -592,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + __slots__ = () + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + __slots__ = () + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + __slots__ = () + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,9 +30,6 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" -_LITTLE_ENDIAN = sys.byteorder == 'little' -_BIG_ENDIAN = sys.byteorder == 'big' - if _WIN32: from pypy.rlib import rwin32 @@ -213,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) @@ -341,38 +360,15 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # This is for primitive types. Note that the exact type of 'arg' may be - # different from the expected 'c_size'. To cope with that, we fall back - # to a byte-by-byte copy. + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - TP_size = rffi.sizeof(TP) - c_size = intmask(ffitp.c_size) - # if both types have the same size, we can directly write the - # value to the buffer - if c_size == TP_size: - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg - else: - # needs byte-by-byte copying. Make sure 'arg' is an integer type. - # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. - assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE - if TP_size <= rffi.sizeof(lltype.Signed): - arg = rffi.cast(lltype.Unsigned, arg) - else: - arg = rffi.cast(lltype.UnsignedLongLong, arg) - if _LITTLE_ENDIAN: - for i in range(c_size): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - elif _BIG_ENDIAN: - for i in range(c_size-1, -1, -1): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - else: - raise AssertionError + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -249,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -340,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -742,7 +742,7 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] + assert size_and_sign(lltype.Char) == (1, True) assert not size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,19 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash + def f(i): + if i: + t = (None, "abc") + else: + t = ("abc", None) + return compute_hash(t) + + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -181,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) From noreply at buildbot.pypy.org Tue Nov 15 23:13:12 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 Nov 2011 23:13:12 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: convert to use the new libffi support Message-ID: <20111115221312.E3EC0820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49449:b1025f8b8ca2 Date: 2011-11-15 17:12 -0500 http://bitbucket.org/pypy/pypy/changeset/b1025f8b8ca2/ Log: convert to use the new libffi support diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -8,8 +8,6 @@ from pypy.rpython.lltypesystem import lltype, rffi -STORAGE_TYPE = rffi.CArray(lltype.Char) - UNSIGNEDLTR = "u" SIGNEDLTR = "i" BOOLLTR = "b" @@ -27,7 +25,7 @@ def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(STORAGE_TYPE, self.itemtype.get_element_size() * length, + return lltype.malloc(rffi.CArray(lltype.Char), self.itemtype.get_element_size() * length, zero=True, flavor="raw", track_allocation=False, add_memory_pressure=True ) @@ -40,16 +38,13 @@ return self.itemtype.coerce(space, w_item) def getitem(self, storage, i): - struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) - return self.itemtype.read(struct_ptr, 0) + return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) def setitem(self, storage, i, box): - struct_ptr = rffi.ptradd(storage, i * self.itemtype.get_element_size()) - self.itemtype.store(struct_ptr, 0, box) + self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) def fill(self, storage, box, start, stop): - start_ptr = rffi.ptradd(storage, start * self.itemtype.get_element_size()) - self.itemtype.fill(start_ptr, box, stop - start) + self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -2,7 +2,7 @@ from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string -from pypy.rlib import rfloat +from pypy.rlib import rfloat, libffi, clibffi from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT, widen from pypy.rpython.lltypesystem import lltype, rffi @@ -58,22 +58,23 @@ def _coerce(self, space, w_item): raise NotImplementedError - def read(self, ptr, offset): - ptr = rffi.ptradd(ptr, offset) - return self.box( - rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] + def read(self, storage, width, i, offset): + return self.box(libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset + )) + + def store(self, storage, width, i, offset, box): + value = self.unbox(box) + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value ) - def store(self, ptr, offset, box): + def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) - ptr = rffi.ptradd(ptr, offset) - rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] = value - - def fill(self, ptr, box, n): - value = self.unbox(box) - for i in xrange(n): - rffi.cast(rffi.CArrayPtr(self.T), ptr)[0] = value - ptr = rffi.ptradd(ptr, self.get_element_size()) + for i in xrange(start, stop): + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value + ) @simple_binary_op def add(self, v1, v2): From noreply at buildbot.pypy.org Wed Nov 16 00:07:37 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 00:07:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: Teach llimpl about {get, set}interiorfield_raw with floats. Message-ID: <20111115230737.C1E2E820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49450:b2bbe6d8f5fa Date: 2011-11-15 18:06 -0500 http://bitbucket.org/pypy/pypy/changeset/b2bbe6d8f5fa/ Log: Teach llimpl about {get,set}interiorfield_raw with floats. diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -1432,6 +1432,10 @@ res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) return res +def do_getinteriorfield_raw_float(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.double, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1516,6 +1520,7 @@ return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) return do_setinteriorfield_raw do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) +do_setinteriorfield_raw_float = new_setinteriorfield_raw(libffi.types.double) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] From noreply at buildbot.pypy.org Wed Nov 16 00:07:38 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 00:07:38 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: Added a decorator for specialize:call_location Message-ID: <20111115230738.EBC67820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49451:b594ceafe738 Date: 2011-11-15 18:07 -0500 http://bitbucket.org/pypy/pypy/changeset/b594ceafe738/ Log: Added a decorator for specialize:call_location diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -91,9 +91,18 @@ return decorated_func + def call_location(self): + """ Specializes the function for each call site. + """ + def decorated_func(func): + func._annspecialcase_ = "specialize:call_location" + return func + + return decorated_func + def _wrap(self, args): return "("+','.join([repr(arg) for arg in args]) +")" - + specialize = _Specialize() def enforceargs(*args): @@ -125,7 +134,7 @@ def __hash__(self): raise TypeError("Symbolics are not hashable!") - + def __nonzero__(self): raise TypeError("Symbolics are not comparable") @@ -155,7 +164,7 @@ def lltype(self): from pypy.rpython.lltypesystem import lltype return lltype.Signed - + malloc_zero_filled = CDefinedIntSymbolic('MALLOC_ZERO_FILLED', default=0) running_on_llinterp = CDefinedIntSymbolic('RUNNING_ON_LLINTERP', default=1) # running_on_llinterp is meant to have the value 0 in all backends @@ -221,7 +230,7 @@ def compute_result_annotation(self, s_sizehint): from pypy.annotation.model import SomeInteger - + assert isinstance(s_sizehint, SomeInteger) return self.bookkeeper.newlist() From noreply at buildbot.pypy.org Wed Nov 16 00:07:40 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 00:07:40 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: Specialize these properly Message-ID: <20111115230740.1FAEA820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49452:eb398ab0ef00 Date: 2011-11-15 18:07 -0500 http://bitbucket.org/pypy/pypy/changeset/eb398ab0ef00/ Log: Specialize these properly diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -411,6 +411,10 @@ def getaddressindll(self, name): return dlsym(self.lib, name) +# These specialize.call_location's should really be specialize.arg(0), however +# you can't hash a pointer obj, which the specialize machinery wants to do. +# Given the present usage of these functions, it's good enough. + at specialize.call_location() @jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") def array_getitem(ffitype, width, addr, index, offset): for TYPE, ffitype2 in clibffi.ffitype_map: @@ -420,6 +424,7 @@ return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] assert False + at specialize.call_location() @jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") def array_setitem(ffitype, width, addr, index, offset, value): for TYPE, ffitype2 in clibffi.ffitype_map: @@ -428,4 +433,4 @@ addr = rffi.ptradd(addr, offset) rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value return - assert False \ No newline at end of file + assert False From noreply at buildbot.pypy.org Wed Nov 16 00:07:41 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 00:07:41 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: Update these tests. Message-ID: <20111115230741.48267820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49453:ea7461f576fa Date: 2011-11-15 18:07 -0500 http://bitbucket.org/pypy/pypy/changeset/ea7461f576fa/ Log: Update these tests. diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -52,8 +52,8 @@ b = a + a b -> 3 """) - self.check_loops({'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, + self.check_loops({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == 3 + 3 @@ -63,8 +63,8 @@ a -> 3 """) assert result == 3 + 3 - self.check_loops({"getarrayitem_raw": 1, "float_add": 1, - "setarrayitem_raw": 1, "int_add": 1, + self.check_loops({"getinteriorfield_raw": 1, "float_add": 1, + "setinteriorfield_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) def test_sum(self): @@ -74,7 +74,7 @@ sum(b) """) assert result == 2 * sum(range(30)) - self.check_loops({"getarrayitem_raw": 2, "float_add": 2, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 2, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) @@ -88,7 +88,7 @@ for i in range(30): expected *= i * 2 assert result == expected - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) @@ -101,7 +101,7 @@ max(b) """) assert result == 256 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) @@ -114,7 +114,7 @@ min(b) """) assert result == -24 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 1, "float_mul": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1}) @@ -126,7 +126,7 @@ any(b) """) assert result == 1 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 1, "float_ne": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, "guard_false": 1}) @@ -143,9 +143,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. - self.check_loops({"getarrayitem_raw": 2, "float_mul": 1, "float_add": 1, - "setarrayitem_raw": 2, "int_add": 2, - "int_lt": 2, "guard_true": 2, "jump": 2}) + self.check_loops({"getinteriorfield_raw": 2, "float_mul": 1, "float_add": 1, + "setinteriorfield_raw": 2, "int_add": 2, + "int_lt": 2, "guard_true": 2, "jump": 2}) def test_ufunc(self): result = self.run(""" @@ -155,8 +155,8 @@ c -> 3 """) assert result == -6 - self.check_loops({"getarrayitem_raw": 2, "float_add": 1, "float_neg": 1, - "setarrayitem_raw": 1, "int_add": 1, + self.check_loops({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + "setinteriorfield_raw": 1, "int_add": 1, "int_lt": 1, "guard_true": 1, "jump": 1, }) @@ -186,10 +186,10 @@ class TestNumpyOld(LLJitMixin): def setup_class(cls): from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import W_Float64Dtype + from pypy.module.micronumpy.interp_dtype import get_dtype_cache cls.space = FakeSpace() - cls.float64_dtype = cls.space.fromcache(W_Float64Dtype) + cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype def test_slice(self): def f(i): @@ -200,11 +200,11 @@ ]) s = SingleDimSlice(0, step*i, step, i, ar, new_sig) v = interp_ufuncs.get(self.space).add.call(self.space, [s, s]) - return v.get_concrete().eval(3).val + return v.get_concrete().eval(3).value result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 1, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, + self.check_loops({'int_mul': 1, 'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == f(5) @@ -222,11 +222,11 @@ ]) s2 = SingleDimSlice(0, step2*i, step2, i, ar, new_sig) v = interp_ufuncs.get(self.space).add.call(self.space, [s1, s2]) - return v.get_concrete().eval(3).val + return v.get_concrete().eval(3).value result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'int_mul': 2, 'getarrayitem_raw': 2, 'float_add': 1, - 'setarrayitem_raw': 1, 'int_add': 1, + self.check_loops({'int_mul': 2, 'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == f(5) @@ -241,12 +241,12 @@ ar2.get_concrete().setitem(1, float64_dtype.box(5.5)) arg = ar2.descr_add(space, ar2) ar.setslice(space, 0, step*i, step, i, arg) - return ar.get_concrete().eval(3).val + return ar.get_concrete().eval(3).value result = self.meta_interp(f, [5], listops=True, backendopt=True) - self.check_loops({'getarrayitem_raw': 2, + self.check_loops({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setarrayitem_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_lt': 1, 'guard_true': 1, 'jump': 1}) assert result == 11.0 From noreply at buildbot.pypy.org Wed Nov 16 00:31:22 2011 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 16 Nov 2011 00:31:22 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: start to fix descr_repr Message-ID: <20111115233122.A31EF820BE@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49454:623b485bea06 Date: 2011-11-15 02:00 +0200 http://bitbucket.org/pypy/pypy/changeset/623b485bea06/ Log: start to fix descr_repr diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -382,46 +382,52 @@ # for i in range(self.shape[0]): # smallerview = NDimSlice(self.parent, self.signature, # [(i, 0, 0, 1)], self.shape[1:]) - # ret.append(smallerview.to_str(comma, indent=indent + ' ')) + # builder.append(smallerview.to_str(comma, indent=indent + ' ')) # if i + 1 < self.shape[0]: - # ret.append(',\n\n' + indent) - ret.append(']') + # builder.append(',\n\n' + indent) + builder.append(']') elif ndims == 2: - ret.append('[') + builder.append('[') for i in range(self.shape[0]): - ret.append('[') + builder.append('[') spacer = ',' * comma + ' ' - ret.append(spacer.join(\ + builder.append(spacer.join(\ [dtype.str_format(self.eval(i * self.shape[1] + j)) \ for j in range(self.shape[1])])) - ret.append(']') + builder.append(']') if i + 1 < self.shape[0]: - ret.append(',\n' + indent) - ret.append(']') + builder.append(',\n' + indent) + builder.append(']') elif ndims == 1: - ret.append('[') + builder.append('[') spacer = ',' * comma + ' ' if self.shape[0] > 1000: - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(3)])) - ret.append(',' * comma + ' ..., ') - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0] - 3, self.shape[0])])) + firstSlice = NDimSlice(self, self.signature, 0, [3,], [2,], [3,]) + builder.append(firstSlice.to_str(comma, builder, indent)) + builder.append(',' * comma + ' ..., ') + lastSlice = NDimSlice(self, self.signature, + self.backshards[0]-2*self.shards[0], [3,], [2,], [3,]) + builder.append(lastSlice.to_str(comma, builder, indent)) else: - ret.append(spacer.join([dtype.str_format(self.eval(j)) \ - for j in range(self.shape[0])])) - ret.append(']') + strs = [] + i = self.start_iter() + while not i.done(): + strs.append(dtype.str_format(self.eval(i))) + i.next() + builder.append(spacer.join(strs)) + builder.append(']') else: - ret.append(dtype.str_format(self.eval(self.start))) - return ret.build() + builder.append(dtype.str_format(self.eval(self.start))) + return builder.build() def descr_str(self, space): # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. concrete = self.get_concrete() - r = NDimSlice(concrete, self.signature, [], self.shape).to_str(False) - return space.wrap(r) + s = StringBuilder() + r = NDimSlice(concrete, self.signature, 0, self.shards, self.backshards, self.shape) + return space.wrap(r.to_str(False, s)) def _index_of_single_item(self, space, w_idx): # we assume C ordering for now From noreply at buildbot.pypy.org Wed Nov 16 00:31:23 2011 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 16 Nov 2011 00:31:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: split tests, test_repr passes Message-ID: <20111115233123.CF28B82A88@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49455:f508193f73f0 Date: 2011-11-16 01:28 +0200 http://bitbucket.org/pypy/pypy/changeset/f508193f73f0/ Log: split tests, test_repr passes diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -339,33 +339,40 @@ return self.get_concrete().descr_len(space) def descr_repr(self, space): - # Simple implementation so that we can see the array. - # Since what we want is to print a plethora of 2d views, - # use recursive calls to to_str() to do the work. res = StringBuilder() + res.append("array([") concrete = self.get_concrete() - i = concrete.start_iter() + i = concrete.start_iter(offset=0, indices=[0]) start = True dtype = concrete.find_dtype() - while not i.done(): - if start: - start = False + if not concrete.find_size(): + if len(self.shape) > 1: + #This is for numpy compliance: an empty slice reports its shape + res.append("], shape=(") + self_shape = str(self.shape) + res.append_slice(str(self_shape), 1, len(self_shape) - 1) + res.append(')') else: - res.append(", ") - res.append(dtype.str_format(concrete.eval(i))) - i = i.next() - return space.wrap(res.build()) - - res.append("array(") - #This is for numpy compliance: an empty slice reports its shape - if not concrete.find_size(): - res.append("[], shape=(") - self_shape = str(self.shape) - res.append_slice(str(self_shape), 1, len(self_shape)-1) - res.append(')') + res.append(']') else: - concrete.to_str(True, res, indent=' ') - dtype = concrete.find_dtype() + if self.shape[0] > 1000: + for xx in range(3): + if start: + start = False + else: + res.append(", ") + res.append(dtype.str_format(concrete.eval(i))) + i = i.next() + res.append(', ...') + i = concrete.start_iter(offset=self.shape[0] - 3) + while not i.done(): + if start: + start = False + else: + res.append(", ") + res.append(dtype.str_format(concrete.eval(i))) + i = i.next() + res.append(']') if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ not self.find_size(): @@ -376,44 +383,32 @@ def to_str(self, comma, builder, indent=' '): dtype = self.find_dtype() ndims = len(self.shape) - if ndims > 2: + if ndims > 1: builder.append('[') builder.append("xxx") - # for i in range(self.shape[0]): - # smallerview = NDimSlice(self.parent, self.signature, - # [(i, 0, 0, 1)], self.shape[1:]) - # builder.append(smallerview.to_str(comma, indent=indent + ' ')) - # if i + 1 < self.shape[0]: - # builder.append(',\n\n' + indent) - builder.append(']') - elif ndims == 2: - builder.append('[') - for i in range(self.shape[0]): - builder.append('[') - spacer = ',' * comma + ' ' - builder.append(spacer.join(\ - [dtype.str_format(self.eval(i * self.shape[1] + j)) \ - for j in range(self.shape[1])])) - builder.append(']') - if i + 1 < self.shape[0]: - builder.append(',\n' + indent) + i = self.start_iter(offest=0, indices=[0]) + while not i.done(): + i.to_str(comma, builder, indent=indent + ' ') + builder.append('\n') + i = i.next() builder.append(']') elif ndims == 1: builder.append('[') spacer = ',' * comma + ' ' if self.shape[0] > 1000: - firstSlice = NDimSlice(self, self.signature, 0, [3,], [2,], [3,]) + #This is wrong. Use iterator + firstSlice = NDimSlice(self, self.signature, 0, [3, ], [2, ], [3, ]) builder.append(firstSlice.to_str(comma, builder, indent)) builder.append(',' * comma + ' ..., ') lastSlice = NDimSlice(self, self.signature, - self.backshards[0]-2*self.shards[0], [3,], [2,], [3,]) + self.backshards[0] - 2 * self.shards[0], [3, ], [2, ], [3, ]) builder.append(lastSlice.to_str(comma, builder, indent)) else: strs = [] i = self.start_iter() while not i.done(): strs.append(dtype.str_format(self.eval(i))) - i.next() + i = i.next() builder.append(spacer.join(strs)) builder.append(']') else: @@ -421,7 +416,7 @@ return builder.build() def descr_str(self, space): - # Simple implementation so that we can see the array. + # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. concrete = self.get_concrete() @@ -843,8 +838,8 @@ source_iter = source_iter.next() res_iter = res_iter.next() - def start_iter(self): - return ViewIterator(self) + def start_iter(self, offset=0, indices=None): + return ViewIterator(self, offset=offset, indices=indices) def setitem(self, item, value): self.parent.setitem(item, value) @@ -877,7 +872,7 @@ def getitem(self, item): return self.dtype.getitem(self.storage, item) - + def eval(self, iter): assert isinstance(iter, ArrayIterator) return self.dtype.getitem(self.storage, iter.offset) @@ -896,8 +891,8 @@ self.invalidated() self.dtype.setitem(self.storage, item, value) - def start_iter(self): - return ArrayIterator(self.size) + def start_iter(self, offset=0, indices=None): + return ArrayIterator(self.size, offset=offset) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -810,6 +810,9 @@ assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + + def test_repr_multi(self): + from numpy import array, zeros a = zeros((3,4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], From noreply at buildbot.pypy.org Wed Nov 16 00:31:25 2011 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 16 Nov 2011 00:31:25 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: code cleanup Message-ID: <20111115233125.062E1820BE@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49456:989579c2237f Date: 2011-11-16 01:30 +0200 http://bitbucket.org/pypy/pypy/changeset/989579c2237f/ Log: code cleanup diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -173,7 +173,7 @@ #_immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] shards = None - start = 0 + start = 0 def __init__(self, shape): self.invalidates = [] @@ -400,7 +400,7 @@ firstSlice = NDimSlice(self, self.signature, 0, [3, ], [2, ], [3, ]) builder.append(firstSlice.to_str(comma, builder, indent)) builder.append(',' * comma + ' ..., ') - lastSlice = NDimSlice(self, self.signature, + lastSlice = NDimSlice(self, self.signature, self.backshards[0] - 2 * self.shards[0], [3, ], [2, ], [3, ]) builder.append(lastSlice.to_str(comma, builder, indent)) else: @@ -550,7 +550,7 @@ return NDimSlice(self, new_sig, start, shards, backshards, shape) def descr_mean(self, space): - return space.wrap(space.float_w(self.descr_sum(space))/self.find_size()) + return space.wrap(space.float_w(self.descr_sum(space)) / self.find_size()) def descr_nonzero(self, space): try: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -20,7 +20,7 @@ else: args_w.append(arg) return self.space.newtuple(args_w) - + def test_shards(self): a = NDimArray(100, [10, 5, 3], MockDtype()) assert a.shards == [15, 3, 1] From noreply at buildbot.pypy.org Wed Nov 16 01:03:23 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 16 Nov 2011 01:03:23 +0100 (CET) Subject: [pypy-commit] pypy default: Allows presetup.py to execute setup.py scripts which use Message-ID: <20111116000323.8E78E820BE@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49457:dff1cac01f75 Date: 2011-11-16 01:01 +0100 http://bitbucket.org/pypy/pypy/changeset/dff1cac01f75/ Log: Allows presetup.py to execute setup.py scripts which use the "if __name__ == '__main__'" idiom diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) From noreply at buildbot.pypy.org Wed Nov 16 01:03:24 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 16 Nov 2011 01:03:24 +0100 (CET) Subject: [pypy-commit] pypy default: Update PyMODINIT_FUNC for C++ extensions. Message-ID: <20111116000324.B6AF4820BE@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49458:365410e9e95e Date: 2011-11-16 01:02 +0100 http://bitbucket.org/pypy/pypy/changeset/365410e9e95e/ Log: Update PyMODINIT_FUNC for C++ extensions. diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus From noreply at buildbot.pypy.org Wed Nov 16 04:46:11 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 04:46:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: expose more stuff at app level Message-ID: <20111116034611.5CA49820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49459:9e74aaa8665b Date: 2011-11-15 22:45 -0500 http://bitbucket.org/pypy/pypy/changeset/9e74aaa8665b/ Log: expose more stuff at app level diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -16,6 +16,12 @@ 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', + + 'generic': 'interp_boxes.W_GenericBox', + 'number': 'interp_boxes.W_NumberBox', + 'integer': 'interp_boxes.W_IntegerBox', + 'signedinteger': 'interp_boxes.W_SignedIntegerBox', + 'int8': 'interp_boxes.W_Int8Box', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -167,6 +167,10 @@ __module__ = "numpy", ) +W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, + __module__ = "numpy", +) + if LONG_BIT == 32: long_name = "int32" elif LONG_BIT == 64: diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,8 +1,9 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app -from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import types, signature +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) +from pypy.module.micronumpy import types, signature, interp_boxes from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT from pypy.rpython.lltypesystem import lltype, rffi @@ -14,13 +15,14 @@ FLOATINGLTR = "f" class W_Dtype(Wrappable): - def __init__(self, itemtype, num, kind, name, char, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): self.signature = signature.BaseSignature() self.itemtype = itemtype self.num = num self.kind = kind self.name = name self.char = char + self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors def malloc(self, length): @@ -62,6 +64,8 @@ for dtype in cache.builtin_dtypes: if w_dtype in dtype.alternate_constructors: return dtype + if w_dtype is dtype.w_box_type: + return dtype raise OperationError(space.w_TypeError, space.wrap("data type not understood")) def descr_str(self, space): @@ -85,6 +89,7 @@ num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), + type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), shape = GetSetProperty(W_Dtype.descr_get_shape), ) @@ -98,6 +103,7 @@ kind=BOOLLTR, name="bool", char="?", + w_box_type = space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -106,6 +112,7 @@ kind=SIGNEDLTR, name="int8", char="b", + w_box_type = space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -113,6 +120,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", + w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -120,6 +128,7 @@ kind=SIGNEDLTR, name="int16", char="h", + w_box_type = space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -127,6 +136,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", + w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -134,13 +144,15 @@ kind=SIGNEDLTR, name="int32", char="i", - ) + w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + ) self.w_uint32dtype = W_Dtype( types.UInt32(), num=6, kind=UNSIGNEDLTR, name="uint32", char="I", + w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -152,6 +164,7 @@ kind=SIGNEDLTR, name=name, char="l", + w_box_type = space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -160,6 +173,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", + w_box_type = space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -167,6 +181,7 @@ kind=SIGNEDLTR, name="int64", char="q", + w_box_type = space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -175,6 +190,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", + w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -182,6 +198,7 @@ kind=FLOATINGLTR, name="float32", char="f", + w_box_type = space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -189,6 +206,7 @@ kind=FLOATINGLTR, name="float64", char="d", + w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], ) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -165,3 +165,13 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + +class AppTestTypes(BaseNumpyAppTest): + def test_int8(self): + import numpy + + assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] + + a = numpy.array([1, 2, 3], numpy.int8) + assert type(a[1]) is numpy.int8 + assert numpy.dtype("int8").type is numpy.int8 From noreply at buildbot.pypy.org Wed Nov 16 06:23:19 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 06:23:19 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: allowisntantiating numpy boxes from applevel Message-ID: <20111116052319.B896B820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49460:a099f70075f5 Date: 2011-11-16 00:23 -0500 http://bitbucket.org/pypy/pypy/changeset/a099f70075f5/ Log: allowisntantiating numpy boxes from applevel diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,4 +1,5 @@ from pypy.interpreter.baseobjspace import Wrappable +from pypy.interpreter.error import operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.inttype import int_typedef @@ -27,6 +28,17 @@ class W_GenericBox(Wrappable): _attrs_ = () + def descr__new__(space, w_subtype, w_value): + from pypy.module.micronumpy.interp_dtype import get_dtype_cache + # XXX: not correct if w_subtype is a user defined subclass of a builtin + # type, this whole thing feels a little wrong. + for dtype in get_dtype_cache(space).builtin_dtypes: + if w_subtype is dtype.w_box_type: + return dtype.coerce(space, w_value) + raise operationerrfmt(space.w_TypeError, "cannot create '%s' instances", + w_subtype.get_module_type_name() + ) + def descr_repr(self, space): return space.wrap(self.get_dtype(space).itemtype.str_format(self)) @@ -87,7 +99,7 @@ pass class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): - pass + get_dtype = dtype_getter("int8") class W_UInt8Box(W_UnsignedIntgerBox, PrimitiveBox): pass @@ -133,6 +145,7 @@ W_GenericBox.typedef = TypeDef("generic", __module__ = "numpy", + __new__ = interp2app(W_GenericBox.descr__new__.im_func), __repr__ = interp2app(W_GenericBox.descr_repr), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), @@ -171,6 +184,26 @@ __module__ = "numpy", ) +W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntgerBox.typedef, + __module__ = "numpy", +) + +W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, + __module__ = "numpy", +) + +W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntgerBox.typedef, + __module__ = "numpy", +) + +W_Int32Box.typedef = TypeDef("int32", W_SignedIntegerBox.typedef, + __module__ = "numpy", +) + +W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntgerBox.typedef, + __module__ = "numpy", +) + if LONG_BIT == 32: long_name = "int32" elif LONG_BIT == 64: @@ -179,6 +212,18 @@ __module__ = "numpy", ) +W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntgerBox.typedef, + __module__ = "numpy", +) + W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpy", +) + +W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntgerBox.typedef, + __module__ = "numpy", +) + +W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, + __module__ = "numpy", ) \ No newline at end of file diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -167,6 +167,14 @@ raises(TypeError, type, "Foo", (dtype,), {}) class AppTestTypes(BaseNumpyAppTest): + def test_abstract_types(self): + import numpy + raises(TypeError, numpy.generic, 0) + raises(TypeError, numpy.number, 0) + raises(TypeError, numpy.integer, 0) + exc = raises(TypeError, numpy.signedinteger, 0) + assert str(exc.value) == "cannot create 'numpy.signedinteger' instances" + def test_int8(self): import numpy @@ -175,3 +183,9 @@ a = numpy.array([1, 2, 3], numpy.int8) assert type(a[1]) is numpy.int8 assert numpy.dtype("int8").type is numpy.int8 + + x = numpy.int8(128) + assert x == -128 + assert x != 128 + assert type(x) is numpy.int8 + assert repr(x) == "-128" \ No newline at end of file From noreply at buildbot.pypy.org Wed Nov 16 08:16:12 2011 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 16 Nov 2011 08:16:12 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: fix those tests - downside: they're for fortran layout Message-ID: <20111116071612.DEC61820BE@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49461:cc5e404f423b Date: 2011-11-16 09:15 +0200 http://bitbucket.org/pypy/pypy/changeset/cc5e404f423b/ Log: fix those tests - downside: they're for fortran layout diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -42,7 +42,7 @@ assert s.start == 19 assert s.shape == [2, 1] assert s.shards == [45, 3] - assert s.backshards == [45, 3] + assert s.backshards == [45, 0] s = a.create_slice(space, self.newtuple( self.newslice(None, None, None), space.wrap(2))) assert s.start == 6 @@ -64,7 +64,7 @@ self.newslice(None, None, None), space.wrap(2)])) assert s2.shape == [2, 3] assert s2.shards == [45, 1] - assert s2.backshards == [90, 2] + assert s2.backshards == [45, 2] assert s2.start == 1*15 + 2*3 def test_negative_step(self): @@ -73,7 +73,7 @@ s = a.create_slice(space, self.newslice(None, None, -2)) assert s.start == 135 assert s.shards == [-30, 3, 1] - assert s.backshards == [-150, 12, 2] + assert s.backshards == [-120, 12, 2] def test_index_of_single_item(self): a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) From noreply at buildbot.pypy.org Wed Nov 16 08:49:09 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 08:49:09 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: remove some dead code, doesn't fix translation as I hoped Message-ID: <20111116074909.074F4820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49462:c47b62ba14df Date: 2011-11-16 02:48 -0500 http://bitbucket.org/pypy/pypy/changeset/c47b62ba14df/ Log: remove some dead code, doesn't fix translation as I hoped diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -41,7 +41,6 @@ def __init__(self): """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild - self.w_float64dtype = get_dtype_cache(self).w_float64dtype def issequence_w(self, w_obj): return isinstance(w_obj, ListObject) or isinstance(w_obj, SingleDimArray) From noreply at buildbot.pypy.org Wed Nov 16 12:29:03 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 16 Nov 2011 12:29:03 +0100 (CET) Subject: [pypy-commit] pypy default: Added a failing test. Message-ID: <20111116112903.C81D4820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49463:d6424b565434 Date: 2011-11-16 12:25 +0100 http://bitbucket.org/pypy/pypy/changeset/d6424b565434/ Log: Added a failing test. diff --git a/pypy/module/math/test/test_translated.py b/pypy/module/math/test/test_translated.py new file mode 100644 --- /dev/null +++ b/pypy/module/math/test/test_translated.py @@ -0,0 +1,10 @@ +import py +from pypy.translator.c.test.test_genc import compile +from pypy.module.math.interp_math import _gamma + + +def test_gamma_overflow(): + f = compile(_gamma, [float]) + assert f(10.0) == 362880.0 + py.test.raises(OverflowError, f, 1720.0) + py.test.raises(OverflowError, f, 172.0) From noreply at buildbot.pypy.org Wed Nov 16 12:29:05 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 16 Nov 2011 12:29:05 +0100 (CET) Subject: [pypy-commit] pypy default: The hack "y + VERY_LARGE_FLOAT == y" fails to give the correct Message-ID: <20111116112905.018F1820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49464:14495ad804bc Date: 2011-11-16 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/14495ad804bc/ Log: The hack "y + VERY_LARGE_FLOAT == y" fails to give the correct result on gcc, not only on msvc. Revert to the comparison with INFINITY and -INFINITY when not jitted. diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -127,9 +127,12 @@ return y != y def ll_math_isinf(y): - if use_library_isinf_isnan and not jit.we_are_jitted(): + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: return not _lib_finite(y) and not _lib_isnan(y) - return (y + VERY_LARGE_FLOAT) == y + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. From noreply at buildbot.pypy.org Wed Nov 16 13:22:13 2011 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 16 Nov 2011 13:22:13 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111116122213.53EEE820BE@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49465:914c5625808a Date: 2011-11-16 12:37 +0100 http://bitbucket.org/pypy/pypy/changeset/914c5625808a/ Log: Fix. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,12 +862,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if (not isinstance(tp, lltype.Primitive) or - tp in (FLOAT, DOUBLE) or - cast(lltype.SignedLongLong, cast(tp, -1)) < 0): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = True + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -743,8 +743,9 @@ assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] assert size_and_sign(lltype.Char) == (1, True) - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct From noreply at buildbot.pypy.org Wed Nov 16 14:14:55 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:14:55 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: fix tests Message-ID: <20111116131455.2E630820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49466:cee2830d1a69 Date: 2011-11-16 13:54 +0100 http://bitbucket.org/pypy/pypy/changeset/cee2830d1a69/ Log: fix tests diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -29,6 +29,9 @@ cls.cpu = ArmCPU(rtyper=None, stats=FakeStats()) cls.cpu.setup_once() + def teardown_method(self, method): + self.cpu.assembler.teardown() + # for the individual tests see # ====> ../../test/runner_test.py def test_result_is_spilled(self): From noreply at buildbot.pypy.org Wed Nov 16 14:14:56 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:14:56 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: improve freeing of boxes in the backend Message-ID: <20111116131456.608CE820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49467:73ba39e70415 Date: 2011-11-16 13:54 +0100 http://bitbucket.org/pypy/pypy/changeset/73ba39e70415/ Log: improve freeing of boxes in the backend diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -799,12 +799,17 @@ operations[i+1], fcond) fcond = asm_operations_with_guard[opnum](self, op, operations[i+1], arglocs, regalloc, fcond) + guard = operations[i+1] + regalloc.possibly_free_vars_for_op(guard) + regalloc.possibly_free_vars(guard.getfailargs()) elif not we_are_translated() and op.getopnum() == -124: regalloc.prepare_force_spill(op, fcond) else: arglocs = regalloc_operations[opnum](regalloc, op, fcond) if arglocs is not None: fcond = asm_operations[opnum](self, op, arglocs, regalloc, fcond) + if op.is_guard(): + regalloc.possibly_free_vars(op.getfailargs()) if op.result: regalloc.possibly_free_var(op.result) regalloc.possibly_free_vars_for_op(op) From noreply at buildbot.pypy.org Wed Nov 16 14:14:57 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:14:57 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Add a method to allocate a scratch register that is managed by the register manager. The register manager keeps a list of temporary boxes that need to freed before before emitting the next operation Message-ID: <20111116131457.8E181820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49468:66a82d2d9786 Date: 2011-11-16 13:58 +0100 http://bitbucket.org/pypy/pypy/changeset/66a82d2d9786/ Log: Add a method to allocate a scratch register that is managed by the register manager. The register manager keeps a list of temporary boxes that need to freed before before emitting the next operation diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -813,6 +813,7 @@ if op.result: regalloc.possibly_free_var(op.result) regalloc.possibly_free_vars_for_op(op) + regalloc.free_temp_vars() regalloc._check_invariants() # from ../x86/regalloc.py diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -84,6 +84,13 @@ self._check_type(v) r = self.force_allocate_reg(v) return r + def get_scratch_reg(self, type=FLOAT, forbidden_vars=[], selected_reg=None): + assert type == FLOAT # for now + box = TempFloat() + self.temp_boxes.append(box) + return self.force_allocate_reg(box, forbidden_vars=forbidden_vars, selected_reg=selected_reg) + + class ARMv7RegisterMananger(RegisterManager): all_regs = r.all_regs box_types = None # or a list of acceptable types @@ -115,6 +122,12 @@ assert isinstance(c, ConstPtr) return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) + def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT or type == REF + box = TempBox() + self.temp_boxes.append(box) + return self.force_allocate_reg(box, forbidden_vars=forbidden_vars, selected_reg=selected_reg) + class Regalloc(object): def __init__(self, longevity, frame_manager=None, assembler=None): @@ -191,6 +204,16 @@ if var is not None: # xxx kludgy self.possibly_free_var(var) + def get_scratch_reg(self, type, forbidden_vars=[], selected_reg=None): + if type == FLOAT: + return self.vfprm.get_scratch_reg(type, forbidden_vars, selected_reg) + else: + return self.rm.get_scratch_reg(type, forbidden_vars, selected_reg) + + def free_temp_vars(self): + self.rm.free_temp_vars() + self.vfprm.free_temp_vars() + def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): if var.type == FLOAT: @@ -668,11 +691,9 @@ if _check_imm_arg(c_ofs): ofs_loc = imm(ofs) else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box, index_box]) - self.possibly_free_var(ofs_box) - self.possibly_free_vars(args) - self.possibly_free_var(base_box) - self.possibly_free_var(index_box) + ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() result_loc = self.force_allocate_reg(op.result) return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] diff --git a/pypy/jit/backend/llsupport/regalloc.py b/pypy/jit/backend/llsupport/regalloc.py --- a/pypy/jit/backend/llsupport/regalloc.py +++ b/pypy/jit/backend/llsupport/regalloc.py @@ -59,6 +59,7 @@ no_lower_byte_regs = [] save_around_call_regs = [] frame_reg = None + temp_boxes = [] def __init__(self, longevity, frame_manager=None, assembler=None): self.free_regs = self.all_regs[:] @@ -101,6 +102,10 @@ for i in range(op.numargs()): self.possibly_free_var(op.getarg(i)) + def free_temp_vars(self): + self.possibly_free_vars(self.temp_boxes) + self.temp_boxes = [] + def _check_invariants(self): if not we_are_translated(): # make sure no duplicates @@ -111,6 +116,7 @@ assert len(rev_regs) + len(self.free_regs) == len(self.all_regs) else: assert len(self.reg_bindings) + len(self.free_regs) == len(self.all_regs) + assert len(self.temp_boxes) == 0 if self.longevity: for v in self.reg_bindings: assert self.longevity[v][1] > self.position @@ -383,6 +389,10 @@ """ raise NotImplementedError("Abstract") + def get_scratch_reg(self, forbidden_vars=[]): + """ Platform specific - Allocates a temporary register """ + raise NotImplementedError("Abstract") + def compute_vars_longevity(inputargs, operations): # compute a dictionary that maps variables to index in # operations that is a "last-time-seen" From noreply at buildbot.pypy.org Wed Nov 16 14:14:58 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:14:58 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: refactor _ensure_value_is_boxed to use a managed scratch_register and to not return the box for the allocated location anymore Message-ID: <20111116131458.C0856820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49469:a3207081d5fb Date: 2011-11-16 14:03 +0100 http://bitbucket.org/pypy/pypy/changeset/a3207081d5fb/ Log: refactor _ensure_value_is_boxed to use a managed scratch_register and to not return the box for the allocated location anymore diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py --- a/pypy/jit/backend/arm/helper/regalloc.py +++ b/pypy/jit/backend/arm/helper/regalloc.py @@ -24,21 +24,17 @@ imm_a0 = _check_imm_arg(a0, imm_size, allow_zero=allow_zero) imm_a1 = _check_imm_arg(a1, imm_size, allow_zero=allow_zero) if not imm_a0 and imm_a1: - l0, box = self._ensure_value_is_boxed(a0) - boxes.append(box) + l0 = self._ensure_value_is_boxed(a0) l1 = self.make_sure_var_in_reg(a1, boxes) elif commutative and imm_a0 and not imm_a1: l1 = self.make_sure_var_in_reg(a0, boxes) - l0, box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(box) + l0 = self._ensure_value_is_boxed(a1, boxes) else: - l0, box = self._ensure_value_is_boxed(a0, boxes) - boxes.append(box) - l1, box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(box) - self.possibly_free_vars(boxes) + l0 = self._ensure_value_is_boxed(a0, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result, boxes) - self.possibly_free_var(op.result) return [l0, l1, res] if name: f.__name__ = name @@ -48,36 +44,33 @@ if guard: def f(self, op, guard_op, fcond): locs = [] - loc1, box1 = self._ensure_value_is_boxed(op.getarg(0)) + loc1 = self._ensure_value_is_boxed(op.getarg(0)) locs.append(loc1) if base: - loc2, box2 = self._ensure_value_is_boxed(op.getarg(1)) + loc2 = self._ensure_value_is_boxed(op.getarg(1)) locs.append(loc2) - self.possibly_free_var(box2) - self.possibly_free_var(box1) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() if guard_op is None: res = self.force_allocate_reg(op.result) assert float_result == (op.result.type == FLOAT) - self.possibly_free_var(op.result) locs.append(res) return locs else: args = self._prepare_guard(guard_op, locs) - self.possibly_free_vars(guard_op.getfailargs()) return args else: def f(self, op, fcond): locs = [] - loc1, box1 = self._ensure_value_is_boxed(op.getarg(0)) + loc1 = self._ensure_value_is_boxed(op.getarg(0)) locs.append(loc1) if base: - loc2, box2 = self._ensure_value_is_boxed(op.getarg(1)) + loc2 = self._ensure_value_is_boxed(op.getarg(1)) locs.append(loc2) - self.possibly_free_var(box2) - self.possibly_free_var(box1) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) assert float_result == (op.result.type == FLOAT) - self.possibly_free_var(op.result) locs.append(res) return locs if name: @@ -110,21 +103,19 @@ arg0, arg1 = boxes imm_a1 = _check_imm_arg(arg1) - l0, box = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) - boxes.append(box) + l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1: l1 = self.make_sure_var_in_reg(arg1, boxes) else: - l1, box = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) - boxes.append(box) - self.possibly_free_vars(boxes) + l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) + + self.possibly_free_vars_for_op(op) + self.free_temp_vars() if guard_op is None: res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) return [l0, l1, res] else: args = self._prepare_guard(guard_op, [l0, l1]) - self.possibly_free_vars(guard_op.getfailargs()) return args if name: f.__name__ = name @@ -134,14 +125,14 @@ def f(self, op, guard_op, fcond): assert fcond is not None a0 = op.getarg(0) - reg, box = self._ensure_value_is_boxed(a0) + assert isinstance(a0, Box) + reg = self.make_sure_var_in_reg(a0) + self.possibly_free_vars_for_op(op) if guard_op is None: - res = self.force_allocate_reg(op.result, [box]) - self.possibly_free_vars([a0, box, op.result]) + res = self.force_allocate_reg(op.result, [a0]) return [reg, res] else: args = self._prepare_guard(guard_op, [reg]) - self.possibly_free_vars(guard_op.getfailargs()) return args if name: f.__name__ = name diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -336,6 +336,7 @@ # XXX improve this interface # emit_op_call_may_force # XXX improve freeing of stuff here + # XXX add an interface that takes locations instead of boxes def _emit_call(self, force_index, adr, args, regalloc, fcond=c.AL, result=None): n_args = len(args) reg_args = count_reg_args(args) @@ -785,15 +786,15 @@ def _emit_copystrcontent(self, op, regalloc, fcond, is_unicode): # compute the source address - args = list(op.getarglist()) - base_loc, box = regalloc._ensure_value_is_boxed(args[0], args) - args.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[2], args) - args.append(box) + args = op.getarglist() + base_loc = regalloc._ensure_value_is_boxed(args[0], args) + ofs_loc = regalloc._ensure_value_is_boxed(args[2], args) assert args[0] is not args[1] # forbidden case of aliasing regalloc.possibly_free_var(args[0]) + regalloc.free_temp_vars() if args[3] is not args[2] is not args[4]: # MESS MESS MESS: don't free regalloc.possibly_free_var(args[2]) # it if ==args[3] or args[4] + regalloc.free_temp_vars() srcaddr_box = TempPtr() forbidden_vars = [args[1], args[3], args[4], srcaddr_box] srcaddr_loc = regalloc.force_allocate_reg(srcaddr_box, selected_reg=r.r1) @@ -805,27 +806,33 @@ dstaddr_box = TempPtr() dstaddr_loc = regalloc.force_allocate_reg(dstaddr_box, selected_reg=r.r0) forbidden_vars.append(dstaddr_box) - base_loc, box = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) - args.append(box) - forbidden_vars.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) - args.append(box) + base_loc = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) + ofs_loc = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) assert base_loc.is_reg() assert ofs_loc.is_reg() regalloc.possibly_free_var(args[1]) if args[3] is not args[4]: # more of the MESS described above regalloc.possibly_free_var(args[3]) + regalloc.free_temp_vars() self._gen_address_inside_string(base_loc, ofs_loc, dstaddr_loc, is_unicode=is_unicode) # compute the length in bytes forbidden_vars = [srcaddr_box, dstaddr_box] - length_loc, length_box = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) - args.append(length_box) + # XXX basically duplicates regalloc.ensure_value_is_boxed, but we + # need the box here + if isinstance(args[4], Box): + length_box = args[4] + length_loc = regalloc.make_sure_var_in_reg(args[4], forbidden_vars) + else: + length_box = TempInt() + length_loc = regalloc.force_allocate_reg(length_box, + forbidden_vars, selected_reg = r.r2) + imm = regalloc.convert_to_imm(args[4]) + self.load(length_loc, imm) if is_unicode: - forbidden_vars = [srcaddr_box, dstaddr_box] bytes_box = TempPtr() - bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) + bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars, selected_reg=r.r2) scale = self._get_unicode_item_scale() assert length_loc.is_reg() self.mc.MOV_ri(r.ip.value, 1< Author: David Schneider Branch: arm-backend-2 Changeset: r49470:35ea8271c7ee Date: 2011-11-16 14:11 +0100 http://bitbucket.org/pypy/pypy/changeset/35ea8271c7ee/ Log: fix test diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3112,7 +3112,7 @@ assert False, 'should not be called' from pypy.jit.codewriter.effectinfo import EffectInfo - effectinfo = EffectInfo([], [], [], [], EffectInfo.EF_CAN_RAISE, EffectInfo.OS_MATH_SQRT) + effectinfo = EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, EffectInfo.OS_MATH_SQRT) FPTR = self.Ptr(self.FuncType([lltype.Float], lltype.Float)) func_ptr = llhelper(FPTR, math_sqrt) FUNC = deref(FPTR) From noreply at buildbot.pypy.org Wed Nov 16 14:18:11 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 16 Nov 2011 14:18:11 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: (l.diekmann, cfbolz): a branch to play with the idea of type-specializing instances Message-ID: <20111116131811.27F1D820BE@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49471:2ae5b770cb48 Date: 2011-11-14 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/2ae5b770cb48/ Log: (l.diekmann, cfbolz): a branch to play with the idea of type- specializing instances From noreply at buildbot.pypy.org Wed Nov 16 14:18:12 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 16 Nov 2011 14:18:12 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: do not use index anymore to read attributes. in future the attributes manage (un)erasing and (un)wrapping of their values themselves Message-ID: <20111116131812.54A60820BE@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49472:9348749a851e Date: 2011-11-15 14:04 +0100 http://bitbucket.org/pypy/pypy/changeset/9348749a851e/ Log: do not use index anymore to read attributes. in future the attributes manage (un)erasing and (un)wrapping of their values themselves diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -29,42 +29,42 @@ self.terminator = terminator def read(self, obj, selector): - index = self.index(selector) - if index < 0: + attr = self.findmap(selector) # index = self.index(selector) + if attr is None: return self.terminator._read_terminator(obj, selector) - return obj._mapdict_read_storage(index) + return attr.read_attr(obj) #obj._mapdict_read_storage(index) def write(self, obj, selector, w_value): - index = self.index(selector) - if index < 0: + attr = self.findmap(selector) # index = self.index(selector) + if attr is None: return self.terminator._write_terminator(obj, selector, w_value) - obj._mapdict_write_storage(index, w_value) + attr.write_attr(obj, w_value) #obj._mapdict_write_storage(index, w_value) return True def delete(self, obj, selector): return None - def index(self, selector): + def findmap(self, selector): if jit.we_are_jitted(): # hack for the jit: # the _index method is pure too, but its argument is never # constant, because it is always a new tuple - return self._index_jit_pure(selector[0], selector[1]) + return self._findmap_jit_pure(selector[0], selector[1]) else: - return self._index_indirection(selector) + return self._findmap_indirection(selector) @jit.elidable - def _index_jit_pure(self, name, index): - return self._index_indirection((name, index)) + def _findmap_jit_pure(self, name, index): + return self._findmap_indirection((name, index)) @jit.dont_look_inside - def _index_indirection(self, selector): + def _findmap_indirection(self, selector): if (self.space.config.objspace.std.withmethodcache): - return self._index_cache(selector) - return self._index(selector) + return self._findmap_cache(selector) + return self._findmap(selector) @jit.dont_look_inside - def _index_cache(self, selector): + def _findmap_cache(self, selector): space = self.space cache = space.fromcache(IndexCache) SHIFT2 = r_uint.BITS - space.config.objspace.std.methodcachesizeexp @@ -80,26 +80,31 @@ if cached_attr is self: cached_selector = cache.selectors[index_hash] if cached_selector == selector: - index = cache.indices[index_hash] + attr = cache.cachedattrs[index_hash] if space.config.objspace.std.withmethodcachecounter: name = selector[0] cache.hits[name] = cache.hits.get(name, 0) + 1 - return index - index = self._index(selector) + # XXX return the correct Attribute here + return attr + attr = self._findmap(selector) + if attr is None: + index = -1 + else: + index = attr.position cache.attrs[index_hash] = self cache.selectors[index_hash] = selector - cache.indices[index_hash] = index + cache.cachedattrs[index_hash] = attr if space.config.objspace.std.withmethodcachecounter: name = selector[0] cache.misses[name] = cache.misses.get(name, 0) + 1 - return index + return attr - def _index(self, selector): + def _findmap(self, selector): while isinstance(self, PlainAttribute): if selector == self.selector: - return self.position + return self self = self.back - return -1 + return None def copy(self, obj): raise NotImplementedError("abstract base class") @@ -273,6 +278,14 @@ w_value = self.read(obj, self.selector) new_obj._get_mapdict_map().add_attr(new_obj, self.selector, w_value) + def read_attr(self, obj): + # XXX do the unerasing (and wrapping) here + return obj._mapdict_read_storage(self.position) + + def write_attr(self, obj, w_value): + # XXX do the unerasing (and unwrapping) here + obj._mapdict_write_storage(self.position, w_value) + def delete(self, obj, selector): if selector == self.selector: # ok, attribute is deleted @@ -330,7 +343,7 @@ self.attrs = [None] * SIZE self._empty_selector = (None, INVALID) self.selectors = [self._empty_selector] * SIZE - self.indices = [0] * SIZE + self.cachedattrs = [None] * SIZE if space.config.objspace.std.withmethodcachecounter: self.hits = {} self.misses = {} @@ -819,12 +832,12 @@ selector = (name, DICT) # if selector[1] != INVALID: - index = map.index(selector) - if index >= 0: + attr = map.findmap(selector) + if attr is not None: # Note that if map.terminator is a DevolvedDictTerminator, # map.index() will always return -1 if selector[1]==DICT. - _fill_cache(pycode, nameindex, map, version_tag, index) - return w_obj._mapdict_read_storage(index) + _fill_cache(pycode, nameindex, map, version_tag, attr.position) + return attr.read_attr(w_obj) #w_obj._mapdict_read_storage(index) if space.config.objspace.std.withmethodcachecounter: INVALID_CACHE_ENTRY.failure_counter += 1 return space.getattr(w_obj, w_name) diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -55,7 +55,8 @@ current = Terminator(space, "cls") for i in range(20000): current = PlainAttribute((str(i), DICT), current) - assert current.index(("0", DICT)) == 0 + attr = current.findmap(("0", DICT)) + assert attr.position == 0 def test_search(): From noreply at buildbot.pypy.org Wed Nov 16 14:18:13 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 16 Nov 2011 14:18:13 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: read attributes only through Attribute class. fixed tests Message-ID: <20111116131813.81F0A820BE@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49473:c905b06f965f Date: 2011-11-16 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/c905b06f965f/ Log: read attributes only through Attribute class. fixed tests diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -84,7 +84,6 @@ if space.config.objspace.std.withmethodcachecounter: name = selector[0] cache.hits[name] = cache.hits.get(name, 0) + 1 - # XXX return the correct Attribute here return attr attr = self._findmap(selector) if attr is None: @@ -160,7 +159,7 @@ # the order is important here: first change the map, then the storage, # for the benefit of the special subclasses obj._set_mapdict_map(attr) - obj._mapdict_write_storage(attr.position, w_value) + attr.write_attr(obj, w_value) #obj._mapdict_write_storage(attr.position, w_value) def materialize_r_dict(self, space, obj, dict_w): raise NotImplementedError("abstract base class") @@ -280,11 +279,14 @@ def read_attr(self, obj): # XXX do the unerasing (and wrapping) here - return obj._mapdict_read_storage(self.position) + erased = obj._mapdict_read_storage(self.position) + w_value = unerase_item(erased) + return w_value def write_attr(self, obj, w_value): # XXX do the unerasing (and unwrapping) here - obj._mapdict_write_storage(self.position, w_value) + erased = erase_item(w_value) + obj._mapdict_write_storage(self.position, erased) def delete(self, obj, selector): if selector == self.selector: @@ -317,7 +319,7 @@ new_obj = self.back.materialize_r_dict(space, obj, dict_w) if self.selector[1] == DICT: w_attr = space.wrap(self.selector[0]) - dict_w[w_attr] = obj._mapdict_read_storage(self.position) + dict_w[w_attr] = self.read_attr(obj) else: self._copy_attr(obj, new_obj) return new_obj @@ -550,20 +552,19 @@ for i in rangenmin1: if index == i: erased = getattr(self, "_value%s" % i) - return unerase_item(erased) + return erased if self._has_storage_list(): return self._mapdict_get_storage_list()[index - nmin1] erased = getattr(self, "_value%s" % nmin1) - return unerase_item(erased) + return erased - def _mapdict_write_storage(self, index, value): - erased = erase_item(value) + def _mapdict_write_storage(self, index, erased): for i in rangenmin1: if index == i: setattr(self, "_value%s" % i, erased) return if self._has_storage_list(): - self._mapdict_get_storage_list()[index - nmin1] = value + self._mapdict_get_storage_list()[index - nmin1] = erased return setattr(self, "_value%s" % nmin1, erased) @@ -577,21 +578,23 @@ len_storage = len(storage) for i in rangenmin1: if i < len_storage: - erased = erase_item(storage[i]) + erased = storage[i] else: + # XXX later: use correct erase method from attribute erased = erase_item(None) setattr(self, "_value%s" % i, erased) has_storage_list = self._has_storage_list() if len_storage < n: assert not has_storage_list + # XXX later: use correct erase method from attribute erased = erase_item(None) elif len_storage == n: assert not has_storage_list - erased = erase_item(storage[nmin1]) + erased = storage[nmin1] elif not has_storage_list: # storage is longer than self.map.length() only due to # overallocation - erased = erase_item(storage[nmin1]) + erased = storage[nmin1] # in theory, we should be ultra-paranoid and check all entries, # but checking just one should catch most problems anyway: assert storage[n] is None @@ -771,14 +774,14 @@ pycode._mapdict_caches = [INVALID_CACHE_ENTRY] * num_entries @jit.dont_look_inside -def _fill_cache(pycode, nameindex, map, version_tag, index, w_method=None): +def _fill_cache(pycode, nameindex, map, version_tag, attr, w_method=None): entry = pycode._mapdict_caches[nameindex] if entry is INVALID_CACHE_ENTRY: entry = CacheEntry() pycode._mapdict_caches[nameindex] = entry entry.map_wref = weakref.ref(map) entry.version_tag = version_tag - entry.index = index + entry.attr = attr entry.w_method = w_method if pycode.space.config.objspace.std.withmethodcachecounter: entry.failure_counter += 1 @@ -790,7 +793,7 @@ map = w_obj._get_mapdict_map() if entry.is_valid_for_map(map) and entry.w_method is None: # everything matches, it's incredibly fast - return w_obj._mapdict_read_storage(entry.index) + return entry.attr.read_attr(w_obj) #._mapdict_read_storage(entry.index) return LOAD_ATTR_slowpath(pycode, w_obj, nameindex, map) LOAD_ATTR_caching._always_inline_ = True @@ -836,7 +839,7 @@ if attr is not None: # Note that if map.terminator is a DevolvedDictTerminator, # map.index() will always return -1 if selector[1]==DICT. - _fill_cache(pycode, nameindex, map, version_tag, attr.position) + _fill_cache(pycode, nameindex, map, version_tag, attr) return attr.read_attr(w_obj) #w_obj._mapdict_read_storage(index) if space.config.objspace.std.withmethodcachecounter: INVALID_CACHE_ENTRY.failure_counter += 1 diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -23,7 +23,14 @@ class typedef: hasdict = False +def erase_storage_items(items): + return [erase_item(item) for item in items] + +def unerase_storage_items(storage): + return [unerase_item(item) for item in storage] + def test_plain_attribute(): + w_cls = "class" aa = PlainAttribute(("b", DICT), PlainAttribute(("a", DICT), @@ -33,18 +40,18 @@ assert aa.get_terminator() is aa.terminator obj = Object() - obj.map, obj.storage = aa, [10, 20] + obj.map, obj.storage = aa, erase_storage_items([10, 20]) assert obj.getdictvalue(space, "a") == 10 assert obj.getdictvalue(space, "b") == 20 assert obj.getdictvalue(space, "c") is None obj = Object() - obj.map, obj.storage = aa, [30, 40] + obj.map, obj.storage = aa, erase_storage_items([30, 40]) obj.setdictvalue(space, "a", 50) - assert obj.storage == [50, 40] + assert unerase_storage_items(obj.storage) == [50, 40] assert obj.getdictvalue(space, "a") == 50 obj.setdictvalue(space, "b", 60) - assert obj.storage == [50, 60] + assert unerase_storage_items(obj.storage) == [50, 60] assert obj.getdictvalue(space, "b") == 60 assert aa.length() == 2 @@ -73,7 +80,7 @@ cls = Class() obj = cls.instantiate() obj.setdictvalue(space, "a", 10) - assert obj.storage == [10] + assert unerase_storage_items(obj.storage) == [10] assert obj.getdictvalue(space, "a") == 10 assert obj.getdictvalue(space, "b") is None assert obj.getdictvalue(space, "c") is None @@ -83,7 +90,7 @@ assert obj.getdictvalue(space, "c") is None obj.setdictvalue(space, "b", 30) - assert obj.storage == [20, 30] + assert unerase_storage_items(obj.storage) == [20, 30] assert obj.getdictvalue(space, "a") == 20 assert obj.getdictvalue(space, "b") == 30 assert obj.getdictvalue(space, "c") is None @@ -106,12 +113,12 @@ obj.setdictvalue(space, "a", 50) obj.setdictvalue(space, "b", 60) obj.setdictvalue(space, "c", 70) - assert obj.storage == [50, 60, 70] + assert unerase_storage_items(obj.storage) == [50, 60, 70] res = obj.deldictvalue(space, dattr) assert res s = [50, 60, 70] del s[i] - assert obj.storage == s + assert unerase_storage_items(obj.storage) == s obj = c.instantiate() obj.setdictvalue(space, "a", 50) @@ -134,7 +141,7 @@ c2 = Class() obj.setclass(space, c2) assert obj.getclass(space) is c2 - assert obj.storage == [50, 60, 70] + assert unerase_storage_items(obj.storage) == [50, 60, 70] def test_special(): from pypy.module._weakref.interp__weakref import WeakrefLifeline @@ -150,7 +157,7 @@ assert obj.getdictvalue(space, "a") == 50 assert obj.getdictvalue(space, "b") == 60 assert obj.getdictvalue(space, "c") == 70 - assert obj.storage == [50, 60, 70, lifeline1] + assert unerase_storage_items(obj.storage) == [50, 60, 70, lifeline1] assert obj.getweakref() is lifeline1 obj2 = c.instantiate() @@ -158,7 +165,7 @@ obj2.setdictvalue(space, "b", 160) obj2.setdictvalue(space, "c", 170) obj2.setweakref(space, lifeline2) - assert obj2.storage == [150, 160, 170, lifeline2] + assert unerase_storage_items(obj2.storage) == [150, 160, 170, lifeline2] assert obj2.getweakref() is lifeline2 assert obj2.map is obj.map @@ -188,7 +195,7 @@ assert obj.getslotvalue(a) == 50 assert obj.getslotvalue(b) == 60 assert obj.getslotvalue(c) == 70 - assert obj.storage == [50, 60, 70] + assert unerase_storage_items(obj.storage) == [50, 60, 70] obj.setdictvalue(space, "a", 5) obj.setdictvalue(space, "b", 6) @@ -199,7 +206,7 @@ assert obj.getslotvalue(a) == 50 assert obj.getslotvalue(b) == 60 assert obj.getslotvalue(c) == 70 - assert obj.storage == [50, 60, 70, 5, 6, 7] + assert unerase_storage_items(obj.storage) == [50, 60, 70, 5, 6, 7] obj2 = cls.instantiate() obj2.setslotvalue(a, 501) @@ -208,13 +215,13 @@ obj2.setdictvalue(space, "a", 51) obj2.setdictvalue(space, "b", 61) obj2.setdictvalue(space, "c", 71) - assert obj2.storage == [501, 601, 701, 51, 61, 71] + assert unerase_storage_items(obj2.storage) == [501, 601, 701, 51, 61, 71] assert obj.map is obj2.map assert obj2.getslotvalue(b) == 601 assert obj2.delslotvalue(b) assert obj2.getslotvalue(b) is None - assert obj2.storage == [501, 701, 51, 61, 71] + assert unerase_storage_items(obj2.storage) == [501, 701, 51, 61, 71] assert not obj2.delslotvalue(b) @@ -228,7 +235,7 @@ obj.setslotvalue(b, 60) assert obj.getslotvalue(a) == 50 assert obj.getslotvalue(b) == 60 - assert obj.storage == [50, 60] + assert unerase_storage_items(obj.storage) == [50, 60] assert not obj.setdictvalue(space, "a", 70) def test_getdict(): @@ -253,7 +260,7 @@ obj.setdictvalue(space, "a", 5) obj.setdictvalue(space, "b", 6) obj.setdictvalue(space, "c", 7) - assert obj.storage == [50, 60, 70, 5, 6, 7] + assert unerase_storage_items(obj.storage) == [50, 60, 70, 5, 6, 7] class FakeDict(W_DictMultiObject): def __init__(self, d): @@ -270,7 +277,7 @@ assert flag materialize_r_dict(space, obj, d) assert d == {"a": 5, "b": 6, "c": 7} - assert obj.storage == [50, 60, 70, w_d] + assert unerase_storage_items(obj.storage) == [50, 60, 70, w_d] def test_size_prediction(): @@ -442,6 +449,17 @@ def setup_class(cls): cls.space = gettestobjspace(**{"objspace.std.withmapdict": True}) + def test_reading_twice(self): + class A(object): + pass + a = A() + a.x = 42 + + assert a.x == 42 + print "read once" + assert a.x == 42 + print "read twice" + def test_simple(self): class A(object): pass @@ -664,6 +682,7 @@ INVALID_CACHE_ENTRY.failure_counter = 0 # w_res = space.call_function(w_func) + print w_res assert space.eq_w(w_res, space.wrap(42)) # entry = w_code._mapdict_caches[nameindex] @@ -677,6 +696,15 @@ check.unwrap_spec = [gateway.ObjSpace, gateway.W_Root, str] cls.w_check = cls.space.wrap(gateway.interp2app(check)) + def test_do_not_change_while_counting(self): + class A(object): + pass + a = A() + a.x = 42 + + assert a.x == 42 + assert a.x == 42 + def test_simple(self): class A(object): pass @@ -685,7 +713,14 @@ def f(): return a.x # + print "1" + assert a.x == 42 + print "2" + assert a.x == 42 + print "3" + print "first check" res = self.check(f, 'x') + print "second check" assert res == (1, 0, 0) res = self.check(f, 'x') assert res == (0, 1, 0) From noreply at buildbot.pypy.org Wed Nov 16 14:56:35 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:56:35 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20111116135635.69E04820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49474:5786677fd91d Date: 2011-11-16 14:33 +0100 http://bitbucket.org/pypy/pypy/changeset/5786677fd91d/ Log: merge default diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -325,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +826,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -838,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1403,6 +1424,14 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1508,14 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) + +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,22 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) + + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -240,6 +241,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -188,38 +188,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm @@ -1619,6 +1621,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1017,6 +1017,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1108,6 +1110,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1380,8 +1384,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -479,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/module/math/test/test_translated.py b/pypy/module/math/test/test_translated.py new file mode 100644 --- /dev/null +++ b/pypy/module/math/test/test_translated.py @@ -0,0 +1,10 @@ +import py +from pypy.translator.c.test.test_genc import compile +from pypy.module.math.interp_math import _gamma + + +def test_gamma_overflow(): + f = compile(_gamma, [float]) + assert f(10.0) == 362880.0 + py.test.raises(OverflowError, f, 1720.0) + py.test.raises(OverflowError, f, 172.0) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -17,7 +17,7 @@ """ class W_AbstractIntObject(W_Object): - pass + __slots__ = () class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -5,7 +5,7 @@ class W_AbstractIterObject(W_Object): - pass + __slots__ = () class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -12,7 +12,7 @@ from pypy.interpreter.argument import Signature class W_AbstractListObject(W_Object): - pass + __slots__ = () class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rbigint import rbigint, SHIFT class W_AbstractLongObject(W_Object): - pass + __slots__ = () class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.formatting import mod_format class W_AbstractStringObject(W_Object): - pass + __slots__ = () class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -10,7 +10,7 @@ from pypy.rlib.debug import make_sure_not_resized class W_AbstractTupleObject(W_Object): - pass + __slots__ = () class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -20,7 +20,7 @@ from pypy.objspace.std.stringtype import stringstartswith, stringendswith class W_AbstractUnicodeObject(W_Object): - pass + __slots__ = () class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,9 +30,6 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" -_LITTLE_ENDIAN = sys.byteorder == 'little' -_BIG_ENDIAN = sys.byteorder == 'big' - if _WIN32: from pypy.rlib import rwin32 @@ -213,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) @@ -341,38 +360,15 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # This is for primitive types. Note that the exact type of 'arg' may be - # different from the expected 'c_size'. To cope with that, we fall back - # to a byte-by-byte copy. + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - TP_size = rffi.sizeof(TP) - c_size = intmask(ffitp.c_size) - # if both types have the same size, we can directly write the - # value to the buffer - if c_size == TP_size: - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg - else: - # needs byte-by-byte copying. Make sure 'arg' is an integer type. - # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. - assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE - if TP_size <= rffi.sizeof(lltype.Signed): - arg = rffi.cast(lltype.Unsigned, arg) - else: - arg = rffi.cast(lltype.UnsignedLongLong, arg) - if _LITTLE_ENDIAN: - for i in range(c_size): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - elif _BIG_ENDIAN: - for i in range(c_size-1, -1, -1): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - else: - raise AssertionError + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -249,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -340,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -127,9 +127,12 @@ return y != y def ll_math_isinf(y): - if use_library_isinf_isnan and not jit.we_are_jitted(): + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: return not _lib_finite(y) and not _lib_isnan(y) - return (y + VERY_LARGE_FLOAT) == y + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,12 +862,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if (not isinstance(tp, lltype.Primitive) or - tp in (FLOAT, DOUBLE) or - cast(lltype.SignedLongLong, cast(tp, -1)) < 0): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = True + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -743,8 +743,9 @@ assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] assert size_and_sign(lltype.Char) == (1, True) - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,19 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash + def f(i): + if i: + t = (None, "abc") + else: + t = ("abc", None) + return compute_hash(t) + + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,7 +42,7 @@ so_prefixes = ('',) - extra_libs = [] + extra_libs = () def __init__(self, cc): if self.__class__ is Platform: @@ -183,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries + self.extra_libs) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -7,7 +7,7 @@ name = "linux" link_flags = ('-pthread',) - extra_libs = ['-lrt'] + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) From noreply at buildbot.pypy.org Wed Nov 16 14:56:36 2011 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 16 Nov 2011 14:56:36 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Bah, fix. Message-ID: <20111116135636.971D8820BE@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r49475:0fc247cc5f82 Date: 2011-11-16 14:56 +0100 http://bitbucket.org/pypy/pypy/changeset/0fc247cc5f82/ Log: Bah, fix. diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -141,7 +141,7 @@ loc = None if isinstance(thing, Const): if isinstance(thing, ConstPtr): - tp = PTR + tp = REF else: tp = INT loc = self.get_scratch_reg(tp, forbidden_vars=self.temp_boxes + forbidden_vars) From noreply at buildbot.pypy.org Wed Nov 16 17:55:09 2011 From: noreply at buildbot.pypy.org (hager) Date: Wed, 16 Nov 2011 17:55:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implemented NEW_WITH_VTABLE Message-ID: <20111116165509.4046C820BE@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49476:7ad615ae7faf Date: 2011-11-16 17:54 +0100 http://bitbucket.org/pypy/pypy/changeset/7ad615ae7faf/ Log: Implemented NEW_WITH_VTABLE diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -610,6 +610,19 @@ # XXX do exception handling here! pass + def emit_new_with_vtable(self, op, arglocs, regalloc): + classint = arglocs[0].value + self.set_vtable(op.result, classint) + + def set_vtable(self, box, vtable): + if self.cpu.vtable_offset is not None: + adr = rffi.cast(lltype.Signed, vtable) + self.mc.load_imm(r.r0, adr) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.r3.value, self.cpu.vtable_offset) + else: + self.mc.std(r.r0.value, r.r3.value, self.cpu.vtable_offset) + def emit_new_array(self, op, arglocs, regalloc): # XXX handle memory errors if len(arglocs) > 0: diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -639,6 +639,18 @@ self.possibly_free_var(op.result) return [] + def prepare_new_with_vtable(self, op): + classint = op.getarg(0).getint() + descrsize = heaptracker.vtable2descr(self.cpu, classint) + # XXX add fastpath for allocation + callargs = self._prepare_args_for_new_op(descrsize) + force_index = self.assembler.write_new_force_index() + self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, + callargs, self, result=op.result) + self.possibly_free_vars(callargs) + self.possibly_free_var(op.result) + return [imm(classint)] + def prepare_new_array(self, op): gc_ll_descr = self.cpu.gc_ll_descr if gc_ll_descr.get_funcptr_for_newarray is not None: From noreply at buildbot.pypy.org Wed Nov 16 20:22:15 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 16 Nov 2011 20:22:15 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: test_target_loop_kept_alive_or_not relies on send_bridge_to_backend() NOT increasing the generation of its original preamble (its current action on default is to increas the generation of the peeled loop, which is a bit of a no-op). Is this realy the desiered behaviour? Message-ID: <20111116192215.96157820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49477:b07f3eacf8ae Date: 2011-11-16 20:21 +0100 http://bitbucket.org/pypy/pypy/changeset/b07f3eacf8ae/ Log: test_target_loop_kept_alive_or_not relies on send_bridge_to_backend() NOT increasing the generation of its original preamble (its current action on default is to increas the generation of the peeled loop, which is a bit of a no-op). Is this realy the desiered behaviour? diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -294,9 +294,9 @@ # metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # - if metainterp_sd.warmrunnerdesc is not None: # for tests - metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive( - original_loop_token) + #if metainterp_sd.warmrunnerdesc is not None: # for tests + # metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive( + # original_loop_token) # ____________________________________________________________ diff --git a/pypy/jit/metainterp/test/test_memmgr.py b/pypy/jit/metainterp/test/test_memmgr.py --- a/pypy/jit/metainterp/test/test_memmgr.py +++ b/pypy/jit/metainterp/test/test_memmgr.py @@ -114,6 +114,8 @@ # Depending on loop_longevity, either: # A. create the loop and the entry bridge for 'g(5)' # B. create 8 loops (and throw them away at each iteration) + # Actually, it's 4 loops and 4 exit bridges thrown away + # every second iteration for i in range(8): g(5) # create another loop and another entry bridge for 'g(7)', diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -173,6 +173,9 @@ wref_procedure_token = None def get_procedure_token(self): + if not we_are_translated(): + from pypy.rlib import rgc + rgc.collect(); rgc.collect(); rgc.collect() if self.wref_procedure_token is not None: return self.wref_procedure_token() return None From noreply at buildbot.pypy.org Wed Nov 16 20:42:32 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 20:42:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default Message-ID: <20111116194232.BF268820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49478:98e8d0b934b3 Date: 2011-11-16 11:53 -0500 http://bitbucket.org/pypy/pypy/changeset/98e8d0b934b3/ Log: merged default diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) diff --git a/pypy/module/math/test/test_translated.py b/pypy/module/math/test/test_translated.py new file mode 100644 --- /dev/null +++ b/pypy/module/math/test/test_translated.py @@ -0,0 +1,10 @@ +import py +from pypy.translator.c.test.test_genc import compile +from pypy.module.math.interp_math import _gamma + + +def test_gamma_overflow(): + f = compile(_gamma, [float]) + assert f(10.0) == 362880.0 + py.test.raises(OverflowError, f, 1720.0) + py.test.raises(OverflowError, f, 172.0) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -127,9 +127,12 @@ return y != y def ll_math_isinf(y): - if use_library_isinf_isnan and not jit.we_are_jitted(): + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: return not _lib_finite(y) and not _lib_isnan(y) - return (y + VERY_LARGE_FLOAT) == y + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -862,12 +862,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if (not isinstance(tp, lltype.Primitive) or - tp in (FLOAT, DOUBLE) or - cast(lltype.SignedLongLong, cast(tp, -1)) < 0): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = True + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -743,8 +743,9 @@ assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] assert size_and_sign(lltype.Char) == (1, True) - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct From noreply at buildbot.pypy.org Wed Nov 16 20:42:34 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 20:42:34 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: translation fixes Message-ID: <20111116194234.09993820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49479:f0a1bcf419fb Date: 2011-11-16 13:43 -0500 http://bitbucket.org/pypy/pypy/changeset/f0a1bcf419fb/ Log: translation fixes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -3,6 +3,7 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.inttype import int_typedef +from pypy.objspace.std.typeobject import W_TypeObject from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -35,6 +36,7 @@ for dtype in get_dtype_cache(space).builtin_dtypes: if w_subtype is dtype.w_box_type: return dtype.coerce(space, w_value) + assert isinstance(w_subtype, W_TypeObject) raise operationerrfmt(space.w_TypeError, "cannot create '%s' instances", w_subtype.get_module_type_name() ) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -29,6 +29,14 @@ ) return dispatcher +def raw_binary_op(func): + def dispatcher(self, v1, v2): + return func(self, + self.for_computation(self.unbox(v1)), + self.for_computation(self.unbox(v2)) + ) + return dispatcher + class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError @@ -100,23 +108,29 @@ def abs(self, v): return abs(v) + @raw_binary_op def eq(self, v1, v2): - return self.unbox(v1) == self.unbox(v2) + return v1 == v2 + @raw_binary_op def ne(self, v1, v2): - return self.unbox(v1) != self.unbox(v2) + return v1 != v2 + @raw_binary_op def lt(self, v1, v2): - return self.unbox(v1) < self.unbox(v2) + return v1 < v2 + @raw_binary_op def le(self, v1, v2): - return self.unbox(v1) <= self.unbox(v2) + return v1 <= v2 + @raw_binary_op def gt(self, v1, v2): - return self.unbox(v1) > self.unbox(v2) + return v1 > v2 + @raw_binary_op def ge(self, v1, v2): - return self.unbox(v1) >= self.unbox(v2) + return v1 >= v2 def bool(self, v): return bool(self.for_computation(self.unbox(v))) @@ -235,7 +249,7 @@ def str_format(self, box): value = self.unbox(box) - return float2string(value, "g", rfloat.DTSF_STR_PRECISION) + return float2string(self.for_computation(value), "g", rfloat.DTSF_STR_PRECISION) def for_computation(self, v): return float(v) From noreply at buildbot.pypy.org Wed Nov 16 20:42:35 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 20:42:35 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: translation fix Message-ID: <20111116194235.391F0820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49480:d30c5c155528 Date: 2011-11-16 14:42 -0500 http://bitbucket.org/pypy/pypy/changeset/d30c5c155528/ Log: translation fix diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -176,7 +176,7 @@ def str_format(self, box): value = self.unbox(box) - return str(value) + return str(self.for_computation(value)) def for_computation(self, v): return widen(v) From noreply at buildbot.pypy.org Wed Nov 16 21:36:39 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 16 Nov 2011 21:36:39 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: more consisten naming, fixed test Message-ID: <20111116203639.84178820BE@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49481:ccbfe0737e7d Date: 2011-11-16 21:36 +0100 http://bitbucket.org/pypy/pypy/changeset/ccbfe0737e7d/ Log: more consisten naming, fixed test diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -751,10 +751,11 @@ def __init__(self): # For memory management of assembled loops - self._keepalive_target_looktokens = {} # set of other LoopTokens + self._keepalive_jitcell_tokens = {} # set of other JitCellToken - def record_jump_to(self, target_loop_token): - self._keepalive_target_looktokens[target_loop_token] = None + def record_jump_to(self, jitcell_token): + assert isinstance(jitcell_token, JitCellToken) + self._keepalive_jitcell_tokens[jitcell_token] = None def __repr__(self): return '' % (self.number, self.generation) diff --git a/pypy/jit/metainterp/test/test_memmgr.py b/pypy/jit/metainterp/test/test_memmgr.py --- a/pypy/jit/metainterp/test/test_memmgr.py +++ b/pypy/jit/metainterp/test/test_memmgr.py @@ -14,7 +14,7 @@ from pypy.jit.metainterp.memmgr import MemoryManager from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib.jit import JitDriver, dont_look_inside - +from pypy.jit.metainterp.warmspot import get_stats class FakeLoopToken: generation = 0 @@ -155,9 +155,9 @@ return 21 def f(): for i in range(10): - g(1) # g(1) gets a loop and an entry bridge, stays alive - g(2) # (and an exit bridge, which does not count in - g(1) # check_target_token_count) + g(1) # g(1) gets a loop with an entry bridge + g(2) # and an exit bridge, stays alive + g(1) g(3) g(1) g(4) # g(2), g(3), g(4), g(5) are thrown away every iteration @@ -167,7 +167,7 @@ res = self.meta_interp(f, [], loop_longevity=3) assert res == 42 - self.check_enter_count(2 + 10*4*2) + self.check_enter_count(2 + 10*4) def test_call_assembler_keep_alive(self): myjitdriver1 = JitDriver(greens=['m'], reds=['n']) @@ -190,7 +190,7 @@ return 21 def f(u): for i in range(8): - h(u, 32) # make a loop and an entry bridge for h(u) + h(u, 32) # make a loop and an exit bridge for h(u) g(u, 8) # make a loop for g(u) with a call_assembler g(u, 0); g(u+1, 0) # \ g(u, 0); g(u+2, 0) # \ make more loops for g(u+1) to g(u+4), @@ -201,7 +201,12 @@ res = self.meta_interp(f, [1], loop_longevity=4, inline=True) assert res == 42 - self.check_enter_count(12) + self.check_jitcell_token_count(6) + tokens = [t() for t in get_stats().jitcell_token_wrefs] + # Some loops have been freed + assert None in tokens + # Loop with number 0, h(), has not been freed + assert 0 in [t.number for t in tokens if t] # ____________________________________________________________ diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -175,7 +175,7 @@ def get_procedure_token(self): if not we_are_translated(): from pypy.rlib import rgc - rgc.collect(); rgc.collect(); rgc.collect() + rgc.collect(); if self.wref_procedure_token is not None: return self.wref_procedure_token() return None From noreply at buildbot.pypy.org Wed Nov 16 23:06:30 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 16 Nov 2011 23:06:30 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: expose some more stuff at app-levle Message-ID: <20111116220630.6081A820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49482:64e5eaa5e5ba Date: 2011-11-16 17:06 -0500 http://bitbucket.org/pypy/pypy/changeset/64e5eaa5e5ba/ Log: expose some more stuff at app-levle diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -21,7 +21,11 @@ 'number': 'interp_boxes.W_NumberBox', 'integer': 'interp_boxes.W_IntegerBox', 'signedinteger': 'interp_boxes.W_SignedIntegerBox', + 'bool_': 'interp_boxes.W_BoolBox', 'int8': 'interp_boxes.W_Int8Box', + 'inexact': 'interp_boxes.W_InexactBox', + 'floating': 'interp_boxes.W_FloatingBox', + 'float64': 'interp_boxes.W_Float64Box', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -2,6 +2,7 @@ from pypy.interpreter.error import operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef +from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef from pypy.objspace.std.typeobject import W_TypeObject from pypy.rlib.rarithmetic import LONG_BIT @@ -41,6 +42,9 @@ w_subtype.get_module_type_name() ) + def descr_str(self, space): + return self.descr_repr(space) + def descr_repr(self, space): return space.wrap(self.get_dtype(space).itemtype.str_format(self)) @@ -54,6 +58,10 @@ assert isinstance(box, W_Float64Box) return space.wrap(box.value) + def descr_nonzero(self, space): + dtype = self.get_dtype(space) + return space.wrap(dtype.itemtype.bool(self)) + def _binop_impl(ufunc_name): def impl(self, space, w_other): from pypy.module.micronumpy import interp_ufuncs @@ -77,7 +85,11 @@ descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") descr_eq = _binop_impl("equal") + descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") + descr_le = _binop_impl("less_equal") + descr_gt = _binop_impl("greater") + descr_ge = _binop_impl("greater_equal") descr_rmul = _binop_right_impl("multiply") @@ -86,7 +98,7 @@ class W_BoolBox(W_GenericBox, PrimitiveBox): - pass + get_dtype = dtype_getter("bool") class W_NumberBox(W_GenericBox): _attrs_ = () @@ -148,9 +160,11 @@ __module__ = "numpy", __new__ = interp2app(W_GenericBox.descr__new__.im_func), + __str__ = interp2app(W_GenericBox.descr_str), __repr__ = interp2app(W_GenericBox.descr_repr), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), + __nonzero__ = interp2app(W_GenericBox.descr_nonzero), __add__ = interp2app(W_GenericBox.descr_add), __sub__ = interp2app(W_GenericBox.descr_sub), @@ -160,7 +174,11 @@ __rmul__ = interp2app(W_GenericBox.descr_rmul), __eq__ = interp2app(W_GenericBox.descr_eq), + __ne__ = interp2app(W_GenericBox.descr_ne), __lt__ = interp2app(W_GenericBox.descr_lt), + __le__ = interp2app(W_GenericBox.descr_le), + __gt__ = interp2app(W_GenericBox.descr_gt), + __ge__ = interp2app(W_GenericBox.descr_ge), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), @@ -228,4 +246,16 @@ W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, __module__ = "numpy", +) + +W_FloatingBox.typedef = TypeDef("floating", W_InexactBox.typedef, + __module__ = "numpy", +) + +W_Float32Box.typedef = TypeDef("float32", W_FloatingBox.typedef, + __module__ = "numpy", +) + +W_Float64Box.typedef = TypeDef("float64", (W_FloatingBox.typedef, float_typedef), + __module__ = "numpy", ) \ No newline at end of file diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -175,6 +175,17 @@ exc = raises(TypeError, numpy.signedinteger, 0) assert str(exc.value) == "cannot create 'numpy.signedinteger' instances" + raises(TypeError, numpy.floating, 0) + raises(TypeError, numpy.inexact, 0) + + def test_bool(self): + import numpy + + assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] + assert numpy.bool_(3) is numpy.True_ + assert numpy.bool_("") is numpy.False_ + assert type(numpy.True_) is type(numpy.False_) is numpy.bool_ + def test_int8(self): import numpy @@ -188,4 +199,15 @@ assert x == -128 assert x != 128 assert type(x) is numpy.int8 - assert repr(x) == "-128" \ No newline at end of file + assert repr(x) == "-128" + + def test_float64(self): + import numpy + + assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] + + a = numpy.array([1, 2, 3], numpy.float64) + assert type(a[1]) is numpy.float64 + assert numpy.dtype(float).type is numpy.float64 + + assert numpy.float64(2.0) == 2.0 From noreply at buildbot.pypy.org Thu Nov 17 00:07:59 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 17 Nov 2011 00:07:59 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: update the meta data on these funcs Message-ID: <20111116230759.8EB4D820BE@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49483:94b5cfa8e30a Date: 2011-11-16 18:07 -0500 http://bitbucket.org/pypy/pypy/changeset/94b5cfa8e30a/ Log: update the meta data on these funcs diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -1,3 +1,4 @@ +import functools import math from pypy.module.micronumpy import interp_boxes @@ -9,6 +10,7 @@ def simple_unary_op(func): + @functools.wraps(func) def dispatcher(self, v): return self.box( func( @@ -19,6 +21,7 @@ return dispatcher def simple_binary_op(func): + @functools.wraps(func) def dispatcher(self, v1, v2): return self.box( func( @@ -30,6 +33,7 @@ return dispatcher def raw_binary_op(func): + @functools.wraps(func) def dispatcher(self, v1, v2): return func(self, self.for_computation(self.unbox(v1)), From noreply at buildbot.pypy.org Thu Nov 17 08:02:23 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 17 Nov 2011 08:02:23 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: dont use invalidated loops Message-ID: <20111117070223.ECC2882A9D@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49484:1b27d145ac64 Date: 2011-11-17 07:52 +0100 http://bitbucket.org/pypy/pypy/changeset/1b27d145ac64/ Log: dont use invalidated loops diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -294,7 +294,8 @@ return total res = self.meta_interp(main, []) - self.check_tree_loop_count(6) + self.check_trace_count(6) + self.check_jitcell_token_count(3) assert res == main() def test_change_during_running(self): diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -177,7 +177,9 @@ from pypy.rlib import rgc rgc.collect(); if self.wref_procedure_token is not None: - return self.wref_procedure_token() + token = self.wref_procedure_token() + if token and not token.invalidated: + return token return None def set_procedure_token(self, token): From noreply at buildbot.pypy.org Thu Nov 17 08:02:25 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 17 Nov 2011 08:02:25 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: fix tests Message-ID: <20111117070225.25E0182A9E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49485:f77a779d9a4e Date: 2011-11-17 08:00 +0100 http://bitbucket.org/pypy/pypy/changeset/f77a779d9a4e/ Log: fix tests diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -306,7 +306,7 @@ self.a = a @dont_look_inside def residual_call(foo, x): - if x == 5: + if x == 10: foo.a += 1 def f(a, x): foo = Foo(a) @@ -320,9 +320,9 @@ x -= 1 return total # - assert f(100, 15) == 3009 - res = self.meta_interp(f, [100, 15]) - assert res == 3009 + assert f(100, 30) == 6019 + res = self.meta_interp(f, [100, 30]) + assert res == 6019 self.check_resops(guard_not_invalidated=8, guard_not_forced=0, call_may_force=0, getfield_gc=0) @@ -435,7 +435,7 @@ self.lst = lst @dont_look_inside def residual_call(foo, x): - if x == 5: + if x == 10: lst2 = [0, 0] lst2[1] = foo.lst[1] + 1 foo.lst = lst2 @@ -453,9 +453,9 @@ x -= 1 return total # - assert f(100, 15) == 3009 - res = self.meta_interp(f, [100, 15]) - assert res == 3009 + assert f(100, 30) == 6019 + res = self.meta_interp(f, [100, 30]) + assert res == 6019 self.check_resops(call_may_force=0, getfield_gc=0, getarrayitem_gc_pure=0, guard_not_forced=0, getarrayitem_gc=0, guard_not_invalidated=8) @@ -478,7 +478,7 @@ return foo.step res = self.meta_interp(f, [60]) assert res == 1 - self.check_tree_loop_count(4) # at least not 2 like before + self.check_jitcell_token_count(2) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): From noreply at buildbot.pypy.org Thu Nov 17 08:26:06 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 17 Nov 2011 08:26:06 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: Implement shards correctly. Allow fortran and C order. Temporarily replace repr Message-ID: <20111117072606.E8AFE82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49486:09ed665d55c6 Date: 2011-11-17 09:25 +0200 http://bitbucket.org/pypy/pypy/changeset/09ed665d55c6/ Log: Implement shards correctly. Allow fortran and C order. Temporarily replace repr and str with something that works. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.error import OperationError, operationerrfmt +from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature from pypy.rlib import jit @@ -39,7 +39,8 @@ shape.append(size) batch = new_batch -def descr_new_array(space, w_subtype, w_item_or_iterable, w_dtype=None): +def descr_new_array(space, w_subtype, w_item_or_iterable, w_dtype=None, + w_order=NoneNotWrapped): # find scalar if not space.issequence_w(w_item_or_iterable): w_dtype = interp_ufuncs.find_dtype_for_scalar(space, @@ -48,7 +49,15 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)) return scalar_w(space, dtype, w_item_or_iterable) + if w_order is None: + order = 'C' + else: + order = space.str_w(w_order) + if order != 'C': # or order != 'F': + raise operationerrfmt(space.w_ValueError, "Unknown order: %s", + order) shape, elems_w = _find_shape_and_elems(space, w_item_or_iterable) + # they come back in C order size = len(elems_w) if space.is_w(w_dtype, space.w_None): w_dtype = None @@ -62,11 +71,12 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - arr = NDimArray(size, shape[:], dtype=dtype) - i = 0 + arr = NDimArray(size, shape[:], dtype=dtype, order=order) + arr_iter = arr.start_iter() for i in range(len(elems_w)): w_elem = elems_w[i] - dtype.setitem_w(space, arr.storage, i, w_elem) + dtype.setitem_w(space, arr.storage, arr_iter.offset, w_elem) + arr_iter = arr_iter.next() return arr class BaseIterator(object): @@ -109,7 +119,7 @@ indices = self.indices[:] done = False offset = self.offset - for i in range(len(self.indices)): + for i in range(len(self.indices) -1, -1, -1): if indices[i] < self.arr.shape[i] - 1: indices[i] += 1 offset += self.arr.shards[i] @@ -168,28 +178,31 @@ class BaseArray(Wrappable): _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", - "start"] + "start", 'order'] #_immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] shards = None start = 0 - def __init__(self, shape): + def __init__(self, shape, order): self.invalidates = [] self.shape = shape + self.order = order if self.shards is None: self.shards = [] self.backshards = [] s = 1 shape_rev = shape[:] - shape_rev.reverse() + if order == 'C': + shape_rev.reverse() for sh in shape_rev: self.shards.append(s) self.backshards.append(s * (sh - 1)) s *= sh - self.shards.reverse() - self.backshards.reverse() + if order == 'C': + self.shards.reverse() + self.backshards.reverse() def invalidated(self): if self.invalidates: @@ -340,9 +353,23 @@ def descr_repr(self, space): res = StringBuilder() + concrete = self.get_concrete() + i = concrete.start_iter() + start = True + dtype = self.find_dtype() + while not i.done(): + if start: + start = False + else: + res.append(", ") + res.append(dtype.str_format(concrete.getitem(i.offset))) + i = i.next() + return space.wrap(res.build()) + + res = StringBuilder() res.append("array([") concrete = self.get_concrete() - i = concrete.start_iter(offset=0, indices=[0]) + i = concrete.start_iter()#offset=0, indices=[0]) start = True dtype = concrete.find_dtype() if not concrete.find_size(): @@ -386,7 +413,7 @@ if ndims > 1: builder.append('[') builder.append("xxx") - i = self.start_iter(offest=0, indices=[0]) + i = self.start_iter() while not i.done(): i.to_str(comma, builder, indent=indent + ' ') builder.append('\n') @@ -416,6 +443,7 @@ return builder.build() def descr_str(self, space): + return self.descr_repr(space) # Simple implementation so that we can see the array. # Since what we want is to print a plethora of 2d views, let # a slice do the work for us. @@ -425,7 +453,6 @@ return space.wrap(r.to_str(False, s)) def _index_of_single_item(self, space, w_idx): - # we assume C ordering for now if space.isinstance_w(w_idx, space.w_int): idx = space.int_w(w_idx) if not self.shape: @@ -605,7 +632,7 @@ _attrs_ = ["dtype", "value", "shape"] def __init__(self, dtype, value): - BaseArray.__init__(self, []) + BaseArray.__init__(self, [], 'C') self.dtype = dtype self.value = value @@ -631,8 +658,8 @@ """ Class for representing virtual arrays, such as binary ops or ufuncs """ - def __init__(self, signature, shape, res_dtype): - BaseArray.__init__(self, shape) + def __init__(self, signature, shape, res_dtype, order): + BaseArray.__init__(self, shape, order) self.forced_result = None self.signature = signature self.res_dtype = res_dtype @@ -688,8 +715,9 @@ class Call1(VirtualArray): - def __init__(self, signature, shape, res_dtype, values): - VirtualArray.__init__(self, signature, shape, res_dtype) + def __init__(self, signature, shape, res_dtype, values, order): + VirtualArray.__init__(self, signature, shape, res_dtype, + values.order) self.values = values def _del_sources(self): @@ -720,7 +748,8 @@ Intermediate class for performing binary operations. """ def __init__(self, signature, shape, calc_dtype, res_dtype, left, right): - VirtualArray.__init__(self, signature, shape, res_dtype) + # XXX do something if left.order != right.order + VirtualArray.__init__(self, signature, shape, res_dtype, left.order) self.left = left self.right = right self.calc_dtype = calc_dtype @@ -759,7 +788,7 @@ def __init__(self, parent, signature, shards, backshards, shape): self.shards = shards self.backshards = backshards - BaseArray.__init__(self, shape) + BaseArray.__init__(self, shape, parent.order) self.signature = signature self.parent = parent self.invalidates = parent.invalidates @@ -851,8 +880,8 @@ """ A class representing contiguous array. We know that each iteration by say ufunc will increase the data index by one """ - def __init__(self, size, shape, dtype): - BaseArray.__init__(self, shape) + def __init__(self, size, shape, dtype, order='C'): + BaseArray.__init__(self, shape, order) self.size = size self.dtype = dtype self.storage = dtype.malloc(size) @@ -892,7 +921,9 @@ self.dtype.setitem(self.storage, item, value) def start_iter(self, offset=0, indices=None): - return ArrayIterator(self.size, offset=offset) + if self.order == 'C': + return ArrayIterator(self.size, offset=offset) + raise NotImplementedError # use ViewIterator simply, test it def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -104,7 +104,7 @@ return self.func(res_dtype, w_obj.value.convert_to(res_dtype)).wrap(space) new_sig = signature.Signature.find_sig([self.signature, w_obj.signature]) - w_res = Call1(new_sig, w_obj.shape, res_dtype, w_obj) + w_res = Call1(new_sig, w_obj.shape, res_dtype, w_obj, w_obj.order) w_obj.add_invalidates(w_res) return w_res diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -21,14 +21,41 @@ args_w.append(arg) return self.space.newtuple(args_w) - def test_shards(self): - a = NDimArray(100, [10, 5, 3], MockDtype()) + def test_shards_f(self): + a = NDimArray(100, [10, 5, 3], MockDtype(), 'F') + assert a.shards == [1, 10, 50] + assert a.backshards == [9, 40, 100] + + def test_shards_c(self): + a = NDimArray(100, [10, 5, 3], MockDtype(), 'C') assert a.shards == [15, 3, 1] assert a.backshards == [135, 12, 2] - def test_create_slice(self): + def test_create_slice_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + s = a.create_slice(space, space.wrap(3)) + assert s.start == 3 + assert s.shards == [10, 50] + assert s.backshards == [40, 100] + s = a.create_slice(space, self.newslice(1, 9, 2)) + assert s.start == 1 + assert s.shards == [2, 10, 50] + assert s.backshards == [6, 40, 100] + s = a.create_slice(space, space.newtuple([ + self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) + assert s.start == 61 + assert s.shape == [2, 1] + assert s.shards == [3, 10] + assert s.backshards == [3, 0] + s = a.create_slice(space, self.newtuple( + self.newslice(None, None, None), space.wrap(2))) + assert s.start == 20 + assert s.shape == [10, 3] + + def test_create_slice_c(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') s = a.create_slice(space, space.wrap(3)) assert s.start == 45 assert s.shards == [3, 1] @@ -48,9 +75,28 @@ assert s.start == 6 assert s.shape == [10, 3] - def test_slice_of_slice(self): + def test_slice_of_slice_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + s = a.create_slice(space, space.wrap(5)) + assert s.start == 5 + s2 = s.create_slice(space, space.wrap(3)) + assert s2.shape == [3] + assert s2.shards == [50] + assert s2.parent is a + assert s2.backshards == [100] + assert s2.start == 35 + s = a.create_slice(space, self.newslice(1, 5, 3)) + s2 = s.create_slice(space, space.newtuple([ + self.newslice(None, None, None), space.wrap(2)])) + assert s2.shape == [2, 3] + assert s2.shards == [3, 50] + assert s2.backshards == [3, 100] + assert s2.start == 1*15 + 2*3 + + def test_slice_of_slice_c(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') s = a.create_slice(space, space.wrap(5)) assert s.start == 15*5 s2 = s.create_slice(space, space.wrap(3)) @@ -67,16 +113,35 @@ assert s2.backshards == [45, 2] assert s2.start == 1*15 + 2*3 - def test_negative_step(self): + def test_negative_step_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + s = a.create_slice(space, self.newslice(None, None, -2)) + assert s.start == 9 + assert s.shards == [-2, 10, 50] + assert s.backshards == [-8, 40, 100] + + def test_negative_step_c(self): + space = self.space + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') s = a.create_slice(space, self.newslice(None, None, -2)) assert s.start == 135 assert s.shards == [-30, 3, 1] assert s.backshards == [-120, 12, 2] - def test_index_of_single_item(self): - a = NDimArray(10*5*3, [10, 5, 3], MockDtype()) + def test_index_of_single_item_f(self): + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) + assert r == 1 + 2 * 10 + 2 * 50 + s = a.create_slice(self.space, self.newtuple( + self.newslice(None, None, None), 2)) + r = s._index_of_single_item(self.space, self.newtuple(1, 0)) + assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) + r = s._index_of_single_item(self.space, self.newtuple(1, 1)) + assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 1)) + + def test_index_of_single_item_c(self): + a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 s = a.create_slice(self.space, self.newtuple( @@ -675,6 +740,7 @@ assert numpy.zeros(1).shape == (1,) assert numpy.zeros((2, 2)).shape == (2,2) assert numpy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert numpy.array([[1], [2], [3]]).shape == (3, 1) assert len(numpy.zeros((3, 1, 2))) == 3 raises(TypeError, len, numpy.zeros(())) @@ -770,6 +836,8 @@ from numpy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] + print a + print b assert (b == [[1, 2], [5, 6], [9, 10], [13, 14]]).all() c = b + b assert c[1][1] == 12 From noreply at buildbot.pypy.org Thu Nov 17 08:39:22 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 17 Nov 2011 08:39:22 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: fix compile tests Message-ID: <20111117073922.03BF482A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49487:1bc7731d90fd Date: 2011-11-17 09:38 +0200 http://bitbucket.org/pypy/pypy/changeset/1bc7731d90fd/ Log: fix compile tests diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -286,7 +286,8 @@ w_list = interp.space.newlist( [interp.space.wrap(float(i)) for i in range(self.v)]) dtype = interp.space.fromcache(W_Float64Dtype) - return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype, + w_order=None) def __repr__(self): return 'Range(%s)' % self.v @@ -308,7 +309,8 @@ def execute(self, interp): w_list = self.wrap(interp.space) dtype = interp.space.fromcache(W_Float64Dtype) - return descr_new_array(interp.space, None, w_list, w_dtype=dtype) + return descr_new_array(interp.space, None, w_list, w_dtype=dtype, + w_order=None) def __repr__(self): return "[" + ", ".join([repr(item) for item in self.items]) + "]" From noreply at buildbot.pypy.org Thu Nov 17 09:31:23 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 17 Nov 2011 09:31:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: Fight a bit with test_zjit. I'm still not 100% happy, but getting there Message-ID: <20111117083123.39EE482A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49488:7ed6605cac6c Date: 2011-11-17 10:30 +0200 http://bitbucket.org/pypy/pypy/changeset/7ed6605cac6c/ Log: Fight a bit with test_zjit. I'm still not 100% happy, but getting there diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -116,10 +116,12 @@ @jit.unroll_safe def next(self): - indices = self.indices[:] + indices = [0] * len(self.arr.shape) + for i in range(len(self.arr.shape)): + indices[i] = self.indices[i] done = False offset = self.offset - for i in range(len(self.indices) -1, -1, -1): + for i in range(len(self.arr.shape) -1, -1, -1): if indices[i] < self.arr.shape[i] - 1: indices[i] += 1 offset += self.arr.shards[i] @@ -180,7 +182,8 @@ _attrs_ = ["invalidates", "signature", "shape", "shards", "backshards", "start", 'order'] - #_immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start'] + _immutable_fields_ = ['shape[*]', "shards[*]", "backshards[*]", 'start', + "order"] shards = None start = 0 @@ -190,19 +193,21 @@ self.shape = shape self.order = order if self.shards is None: - self.shards = [] - self.backshards = [] + shards = [] + backshards = [] s = 1 shape_rev = shape[:] if order == 'C': shape_rev.reverse() for sh in shape_rev: - self.shards.append(s) - self.backshards.append(s * (sh - 1)) + shards.append(s) + backshards.append(s * (sh - 1)) s *= sh if order == 'C': - self.shards.reverse() - self.backshards.reverse() + shards.reverse() + backshards.reverse() + self.shards = shards[:] + self.backshards = backshards[:] def invalidated(self): if self.invalidates: @@ -574,7 +579,8 @@ shape += self.shape[s:] shards += self.shards[s:] backshards += self.backshards[s:] - return NDimSlice(self, new_sig, start, shards, backshards, shape) + return NDimSlice(self, new_sig, start, shards[:], backshards[:], + shape[:]) def descr_mean(self, space): return space.wrap(space.float_w(self.descr_sum(space)) / self.find_size()) @@ -826,8 +832,6 @@ class NDimSlice(ViewArray): signature = signature.BaseSignature() - #_immutable_fields_ = ['shape[*]', 'shards[*]', 'backshards[*]', 'start'] - def __init__(self, parent, signature, start, shards, backshards, shape): if isinstance(parent, NDimSlice): @@ -942,7 +946,7 @@ item = space.int_w(w_item) size *= item shape.append(item) - return space.wrap(NDimArray(size, shape, dtype=dtype)) + return space.wrap(NDimArray(size, shape[:], dtype=dtype)) @unwrap_spec(size=int) def ones(space, size, w_dtype=None): From noreply at buildbot.pypy.org Thu Nov 17 10:08:46 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 17 Nov 2011 10:08:46 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: make create_slice have a better interface Message-ID: <20111117090846.8506D82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49489:9b0de2f82bac Date: 2011-11-17 11:08 +0200 http://bitbucket.org/pypy/pypy/changeset/9b0de2f82bac/ Log: make create_slice have a better interface diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -370,7 +370,6 @@ res.append(dtype.str_format(concrete.getitem(i.offset))) i = i.next() return space.wrap(res.build()) - res = StringBuilder() res.append("array([") concrete = self.get_concrete() @@ -412,6 +411,7 @@ res.append(")") return space.wrap(res.build()) + def to_str(self, comma, builder, indent=' '): dtype = self.find_dtype() ndims = len(self.shape) @@ -480,7 +480,7 @@ v += self.shape[i] if v < 0 or v >= self.shape[i]: raise OperationError(space.w_IndexError, - space.wrap("index (%d) out of range (0<=index<%d" % (index[i], self.shape[i]))) + space.wrap("index (%d) out of range (0<=index<%d" % (i, self.shape[i]))) item += v * self.shards[i] return item @@ -516,12 +516,21 @@ return False return True + def _prepare_slice_args(self, space, w_idx): + if (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + return [space.decode_index4(w_idx, self.shape[0])] + return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + enumerate(space.fixedview(w_idx))] + + def descr_getitem(self, space, w_idx): if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item).wrap(space) - return space.wrap(self.create_slice(space, w_idx)) + chunks = self._prepare_slice_args(space, w_idx) + return space.wrap(self.create_slice(space, chunks)) def descr_setitem(self, space, w_idx, w_value): self.invalidated() @@ -539,16 +548,16 @@ assert isinstance(w_value, BaseArray) else: w_value = convert_to_array(space, w_value) - view = self.create_slice(space, w_idx) + chunks = self._prepare_slice_args(space, w_idx) + view = self.create_slice(space, chunks) view.setslice(space, w_value) - def create_slice(self, space, w_idx): + def create_slice(self, space, chunks): new_sig = signature.Signature.find_sig([ NDimSlice.signature, self.signature ]) - if (space.isinstance_w(w_idx, space.w_int) or - space.isinstance_w(w_idx, space.w_slice)): - start, stop, step, lgt = space.decode_index4(w_idx, self.shape[0]) + if len(chunks) == 1: + start, stop, step, lgt = chunks[0] if step == 0: shape = self.shape[1:] shards = self.shards[1:] @@ -565,9 +574,7 @@ backshards = [] start = self.start i = -1 - for i, w_item in enumerate(space.fixedview(w_idx)): - start_, stop, step, lgt = space.decode_index4(w_item, - self.shape[i]) + for i, (start_, stop, step, lgt) in enumerate(chunks): if step != 0: shape.append(lgt) shards.append(self.shards[i] * step) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -34,61 +34,55 @@ def test_create_slice_f(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice(space, space.wrap(3)) + s = a.create_slice(space, [(3, 0, 0, 1)]) assert s.start == 3 assert s.shards == [10, 50] assert s.backshards == [40, 100] - s = a.create_slice(space, self.newslice(1, 9, 2)) + s = a.create_slice(space, [(1, 9, 2, 4)]) assert s.start == 1 assert s.shards == [2, 10, 50] assert s.backshards == [6, 40, 100] - s = a.create_slice(space, space.newtuple([ - self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) - assert s.start == 61 + s = a.create_slice(space, [(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.shards == [3, 10] assert s.backshards == [3, 0] - s = a.create_slice(space, self.newtuple( - self.newslice(None, None, None), space.wrap(2))) + s = a.create_slice(space, [(0, 10, 1, 10), (2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice(space, space.wrap(3)) + s = a.create_slice(space, [(3, 0, 0, 1)]) assert s.start == 45 assert s.shards == [3, 1] assert s.backshards == [12, 2] - s = a.create_slice(space, self.newslice(1, 9, 2)) + s = a.create_slice(space, [(1, 9, 2, 4)]) assert s.start == 15 assert s.shards == [30, 3, 1] assert s.backshards == [90, 12, 2] - s = a.create_slice(space, space.newtuple([ - self.newslice(1, 5, 3), self.newslice(1, 2, 1), space.wrap(1)])) + s = a.create_slice(space, [(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.shards == [45, 3] assert s.backshards == [45, 0] - s = a.create_slice(space, self.newtuple( - self.newslice(None, None, None), space.wrap(2))) + s = a.create_slice(space, [(0, 10, 1, 10), (2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice(space, space.wrap(5)) + s = a.create_slice(space, [(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice(space, space.wrap(3)) + s2 = s.create_slice(space, [(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.shards == [50] assert s2.parent is a assert s2.backshards == [100] assert s2.start == 35 - s = a.create_slice(space, self.newslice(1, 5, 3)) - s2 = s.create_slice(space, space.newtuple([ - self.newslice(None, None, None), space.wrap(2)])) + s = a.create_slice(space, [(1, 5, 3, 2)]) + s2 = s.create_slice(space, [(0, 2, 1, 2), (2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.shards == [3, 50] assert s2.backshards == [3, 100] @@ -97,17 +91,16 @@ def test_slice_of_slice_c(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice(space, space.wrap(5)) + s = a.create_slice(space, [(5, 0, 0, 1)]) assert s.start == 15*5 - s2 = s.create_slice(space, space.wrap(3)) + s2 = s.create_slice(space, [(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.shards == [1] assert s2.parent is a assert s2.backshards == [2] assert s2.start == 5*15 + 3*3 - s = a.create_slice(space, self.newslice(1, 5, 3)) - s2 = s.create_slice(space, space.newtuple([ - self.newslice(None, None, None), space.wrap(2)])) + s = a.create_slice(space, [(1, 5, 3, 2)]) + s2 = s.create_slice(space, [(0, 2, 1, 2), (2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.shards == [45, 1] assert s2.backshards == [45, 2] @@ -116,7 +109,7 @@ def test_negative_step_f(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice(space, self.newslice(None, None, -2)) + s = a.create_slice(space, [(9, -1, -2, 5)]) assert s.start == 9 assert s.shards == [-2, 10, 50] assert s.backshards == [-8, 40, 100] @@ -124,7 +117,7 @@ def test_negative_step_c(self): space = self.space a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice(space, self.newslice(None, None, -2)) + s = a.create_slice(space, [(9, -1, -2, 5)]) assert s.start == 135 assert s.shards == [-30, 3, 1] assert s.backshards == [-120, 12, 2] @@ -133,8 +126,7 @@ a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice(self.space, self.newtuple( - self.newslice(None, None, None), 2)) + s = a.create_slice(self.space, [(0, 10, 1, 10), (2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -144,8 +136,7 @@ a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice(self.space, self.newtuple( - self.newslice(None, None, None), 2)) + s = a.create_slice(self.space, [(0, 10, 1, 10), (2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) From noreply at buildbot.pypy.org Thu Nov 17 10:50:28 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 10:50:28 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for test_zrpy_releasegil. Message-ID: <20111117095028.E885F82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49490:afe73da501bc Date: 2011-11-17 09:50 +0000 http://bitbucket.org/pypy/pypy/changeset/afe73da501bc/ Log: Fix for test_zrpy_releasegil. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -864,7 +864,7 @@ except AttributeError: if not isinstance(tp, lltype.Primitive): unsigned = False - elif tp in (lltype.Signed, FLOAT, DOUBLE): + elif tp in (lltype.Signed, FLOAT, DOUBLE, llmemory.Address): unsigned = False elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): unsigned = True From noreply at buildbot.pypy.org Thu Nov 17 10:53:35 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 10:53:35 +0100 (CET) Subject: [pypy-commit] pypy default: Argh. See comment. Message-ID: <20111117095335.B204082A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49491:8477b89ad7b4 Date: 2011-11-17 09:53 +0000 http://bitbucket.org/pypy/pypy/changeset/8477b89ad7b4/ Log: Argh. See comment. diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -412,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) From noreply at buildbot.pypy.org Thu Nov 17 11:30:50 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 11:30:50 +0100 (CET) Subject: [pypy-commit] pypy default: Tweaks. Unsure why but it seems that test_zll_random ends up with Message-ID: <20111117103050.96EC782A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49492:5d63ae101d32 Date: 2011-11-17 10:29 +0000 http://bitbucket.org/pypy/pypy/changeset/5d63ae101d32/ Log: Tweaks. Unsure why but it seems that test_zll_random ends up with two differently-typed arrays at the same address --- it might be because of casts. The _subarray caching fails in this case, by confusing the types of the subarrays. diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1723,7 +1723,7 @@ class _subarray(_parentable): # only for direct_fieldptr() # and direct_arrayitems() _kind = "subarray" - _cache = weakref.WeakKeyDictionary() # parentarray -> {subarrays} + _cache = {} # TYPE -> weak{ parentarray -> {subarrays} } def __init__(self, TYPE, parent, baseoffset_or_fieldname): _parentable.__init__(self, TYPE) @@ -1781,10 +1781,14 @@ def _makeptr(parent, baseoffset_or_fieldname, solid=False): try: - cache = _subarray._cache.setdefault(parent, {}) + d = _subarray._cache[parent._TYPE] + except KeyError: + d = _subarray._cache[parent._TYPE] = weakref.WeakKeyDictionary() + try: + cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = _subarray._cache.setdefault(parent, {}) # try again + cache = d.setdefault(parent, {}) # try again try: subarray = cache[baseoffset_or_fieldname] except KeyError: @@ -1805,14 +1809,18 @@ raise NotImplementedError('_subarray._getid()') def _cleanup_cache(): - newcache = weakref.WeakKeyDictionary() - for key, value in _subarray._cache.items(): - try: - if not key._was_freed(): - newcache[key] = value - except RuntimeError: - pass # ignore "accessing subxxx, but already gc-ed parent" - _subarray._cache = newcache + for T, d in _subarray._cache.items(): + newcache = weakref.WeakKeyDictionary() + for key, value in d.items(): + try: + if not key._was_freed(): + newcache[key] = value + except RuntimeError: + pass # ignore "accessing subxxx, but already gc-ed parent" + if newcache: + _subarray._cache[T] = newcache + else: + del _subarray._cache[T] _cleanup_cache = staticmethod(_cleanup_cache) From noreply at buildbot.pypy.org Thu Nov 17 14:04:34 2011 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 17 Nov 2011 14:04:34 +0100 (CET) Subject: [pypy-commit] pypy numpy NDimArray: close abandoned branch, work happen on numpy-multidim one Message-ID: <20111117130434.47B9782A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy NDimArray Changeset: r49493:ee74794f5464 Date: 2011-11-17 15:04 +0200 http://bitbucket.org/pypy/pypy/changeset/ee74794f5464/ Log: close abandoned branch, work happen on numpy-multidim one From noreply at buildbot.pypy.org Thu Nov 17 14:27:40 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 14:27:40 +0100 (CET) Subject: [pypy-commit] pypy default: Fix syslog.py by following the CPython source, as per Message-ID: <20111117132740.69C6082A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49494:b6585727903b Date: 2011-11-17 14:27 +0100 http://bitbucket.org/pypy/pypy/changeset/b6585727903b/ Log: Fix syslog.py by following the CPython source, as per https://bugs.pypy.org/issue928. Not really tested -- there seems to be no tests for syslog? diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): From noreply at buildbot.pypy.org Thu Nov 17 15:14:22 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 15:14:22 +0100 (CET) Subject: [pypy-commit] pypy default: Workaround. See comment. Message-ID: <20111117141422.A353182A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49495:017adf8b32ba Date: 2011-11-17 15:13 +0100 http://bitbucket.org/pypy/pypy/changeset/017adf8b32ba/ Log: Workaround. See comment. diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -567,6 +567,11 @@ import time import thread + # XXX workaround for now: to prevent deadlocks, call + # sys._current_frames() once before starting threads. + # This is an issue in non-translated versions only. + sys._current_frames() + thread_id = thread.get_ident() def other_thread(): print "thread started" From noreply at buildbot.pypy.org Thu Nov 17 16:30:23 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 16:30:23 +0100 (CET) Subject: [pypy-commit] pypy generator-in-rpython: Generators in the flow space: keep the structure of the graph, Message-ID: <20111117153023.B7ED082A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: generator-in-rpython Changeset: r49496:a084a39f04b9 Date: 2011-11-14 16:41 +0100 http://bitbucket.org/pypy/pypy/changeset/a084a39f04b9/ Log: Generators in the flow space: keep the structure of the graph, but insert 'generator_entry' as the first operation and emit 'yield' operations. diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -185,7 +185,7 @@ class FlowExecutionContext(ExecutionContext): def __init__(self, space, code, globals, constargs={}, outer_func=None, - name=None): + name=None, is_generator=False): ExecutionContext.__init__(self, space) self.code = code @@ -208,6 +208,7 @@ initialblock = SpamBlock(FrameState(frame).copy()) self.pendingblocks = collections.deque([initialblock]) self.graph = FunctionGraph(name or code.co_name, initialblock) + self.is_generator = is_generator make_link = Link # overridable for transition tracking @@ -247,6 +248,8 @@ return outcome, w_exc_cls, w_exc_value def build_flow(self): + if self.is_generator: + self.produce_generator_entry() while self.pendingblocks: block = self.pendingblocks.popleft() frame = self.create_frame() @@ -259,9 +262,15 @@ self.topframeref = jit.non_virtual_ref(frame) self.crnt_frame = frame try: - w_result = frame.dispatch(frame.pycode, - frame.last_instr, - self) + frame.frame_finished_execution = False + while True: + w_result = frame.dispatch(frame.pycode, + frame.last_instr, + self) + if frame.frame_finished_execution: + break + else: + self.generate_yield(frame, w_result) finally: self.crnt_frame = None self.topframeref = old_frameref @@ -307,6 +316,19 @@ del self.recorder self.fixeggblocks() + def produce_generator_entry(self): + [initialblock] = self.pendingblocks + initialblock.operations.append( + SpaceOperation('generator_entry', list(initialblock.inputargs), + Variable())) + + def generate_yield(self, frame, w_result): + assert self.is_generator + self.recorder.crnt_block.operations.append( + SpaceOperation('yield', [w_result], Variable())) + frame.pushvalue(None) + frame.last_instr += 1 + def fixeggblocks(self): # EggBlocks reuse the variables of their previous block, # which is deemed not acceptable for simplicity of the operations diff --git a/pypy/objspace/flow/objspace.py b/pypy/objspace/flow/objspace.py --- a/pypy/objspace/flow/objspace.py +++ b/pypy/objspace/flow/objspace.py @@ -8,6 +8,7 @@ from pypy.interpreter.pycode import PyCode, cpython_code_signature from pypy.interpreter.module import Module from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler.consts import CO_GENERATOR from pypy.interpreter import pyframe, argument from pypy.objspace.flow.model import * from pypy.objspace.flow import flowcontext, operation, specialcase @@ -247,9 +248,7 @@ if func.func_doc and func.func_doc.lstrip().startswith('NOT_RPYTHON'): raise Exception, "%r is tagged as NOT_RPYTHON" % (func,) code = func.func_code - if code.co_flags & 32: - # generator - raise TypeError("%r is a generator" % (func,)) + is_generator = bool(code.co_flags & CO_GENERATOR) code = PyCode._from_code(self, code) if func.func_closure is None: cl = None @@ -265,7 +264,8 @@ class outerfunc: # hack closure = cl ec = flowcontext.FlowExecutionContext(self, code, func.func_globals, - constargs, outerfunc, name) + constargs, outerfunc, name, + is_generator) graph = ec.graph graph.func = func # attach a signature and defaults to the graph diff --git a/pypy/objspace/flow/test/test_generator.py b/pypy/objspace/flow/test/test_generator.py new file mode 100644 --- /dev/null +++ b/pypy/objspace/flow/test/test_generator.py @@ -0,0 +1,18 @@ +from pypy.objspace.flow.test.test_objspace import Base + + +class TestGenerator(Base): + + def test_simple_generator(self): + def f(n): + i = 0 + while i < n: + yield i + yield i + i += 1 + graph = self.codetest(f) + ops = self.all_operations(graph) + assert ops == {'generator_entry': 1, + 'lt': 1, 'is_true': 1, + 'yield': 2, + 'inplace_add': 1} diff --git a/pypy/objspace/flow/test/test_objspace.py b/pypy/objspace/flow/test/test_objspace.py --- a/pypy/objspace/flow/test/test_objspace.py +++ b/pypy/objspace/flow/test/test_objspace.py @@ -882,12 +882,6 @@ num = bytecode_spec.opmap[name] flow_meth_names[num] = locals()['old_' + name] - def test_generator(self): - def f(): - yield 3 - - py.test.raises(TypeError, "self.codetest(f)") - def test_dont_capture_RuntimeError(self): class Foo: def __hash__(self): From noreply at buildbot.pypy.org Thu Nov 17 17:33:01 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 17 Nov 2011 17:33:01 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Refactored code in opassembler.py, divided the code in class OpAssembler Message-ID: <20111117163301.34D4082A9D@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49497:de832accb4e6 Date: 2011-11-17 17:32 +0100 http://bitbucket.org/pypy/pypy/changeset/de832accb4e6/ Log: Refactored code in opassembler.py, divided the code in class OpAssembler in several classes. diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -27,8 +27,11 @@ self.save_exc = save_exc self.fcond=fcond -class OpAssembler(object): +#class OpAssembler(object): +class IntOpAssembler(object): + _mixin_ = True + # ******************************************************** # * I N T O P E R A T I O N S * # ******************************************************** @@ -167,6 +170,11 @@ l0, res = arglocs self.mc.not_(res.value, l0.value) + +class GuardOpAssembler(object): + + _mixin_ = True + def _emit_guard(self, op, arglocs, fcond, save_exc=False, is_guard_not_invalidated=False): descr = op.getdescr() @@ -277,6 +285,11 @@ raise NotImplementedError self._cmp_guard_class(op, arglocs, regalloc) + +class MiscOpAssembler(object): + + _mixin_ = True + def emit_finish(self, op, arglocs, regalloc): self.gen_exit_stub(op.getdescr(), op.getarglist(), arglocs) @@ -293,6 +306,139 @@ descr._ppc_frame_manager_depth) regalloc.frame_manager.frame_depth = new_fd + def emit_same_as(self, op, arglocs, regalloc): + argloc, resloc = arglocs + self.regalloc_mov(argloc, resloc) + + emit_cast_ptr_to_int = emit_same_as + emit_cast_int_to_ptr = emit_same_as + + def emit_call(self, op, args, regalloc, force_index=-1): + adr = args[0].value + arglist = op.getarglist()[1:] + if force_index == -1: + force_index = self.write_new_force_index() + self._emit_call(force_index, adr, arglist, regalloc, op.result) + descr = op.getdescr() + #XXX Hack, Hack, Hack + if op.result and not we_are_translated() and not isinstance(descr, + LoopToken): + #XXX check result type + loc = regalloc.rm.call_result_location(op.result) + size = descr.get_result_size(False) + signed = descr.is_result_signed() + self._ensure_result_bit_extension(loc, size, signed) + + def _emit_call(self, force_index, adr, args, regalloc, result=None): + n_args = len(args) + reg_args = count_reg_args(args) + + n = 0 # used to count the number of words pushed on the stack, so we + # can later modify the SP back to its original value + stack_args = [] + if n_args > reg_args: + # first we need to prepare the list so it stays aligned + count = 0 + for i in range(reg_args, n_args): + arg = args[i] + if arg.type == FLOAT: + assert 0, "not implemented yet" + else: + count += 1 + n += WORD + stack_args.append(arg) + if count % 2 != 0: + n += WORD + stack_args.append(None) + + # adjust SP and compute size of parameter save area + if IS_PPC_32: + stack_space = BACKCHAIN_SIZE + len(stack_args) * WORD + while stack_space % (4 * WORD) != 0: + stack_space += 1 + self.mc.stwu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) + else: + # ABI fixed frame + 8 GPRs + arguments + stack_space = (6 + MAX_REG_PARAMS + len(stack_args)) * WORD + while stack_space % (2 * WORD) != 0: + stack_space += 1 + self.mc.stdu(r.SP.value, r.SP.value, -stack_space) + self.mc.mflr(r.r0.value) + self.mc.std(r.r0.value, r.SP.value, stack_space + 2 * WORD) + + # then we push everything on the stack + for i, arg in enumerate(stack_args): + if IS_PPC_32: + abi = 2 + else: + abi = 14 + offset = (abi + i) * WORD + if arg is not None: + self.mc.load_imm(r.r0, arg.value) + if IS_PPC_32: + self.mc.stw(r.r0.value, r.SP.value, offset) + else: + self.mc.std(r.r0.value, r.SP.value, offset) + + # collect variables that need to go in registers + # and the registers they will be stored in + num = 0 + count = 0 + non_float_locs = [] + non_float_regs = [] + for i in range(reg_args): + arg = args[i] + if arg.type == FLOAT and count % 2 != 0: + assert 0, "not implemented yet" + reg = r.PARAM_REGS[num] + + if arg.type == FLOAT: + assert 0, "not implemented yet" + else: + non_float_locs.append(regalloc.loc(arg)) + non_float_regs.append(reg) + + if arg.type == FLOAT: + assert 0, "not implemented yet" + else: + num += 1 + count += 1 + + # spill variables that need to be saved around calls + regalloc.before_call(save_all_regs=2) + + # remap values stored in core registers + remap_frame_layout(self, non_float_locs, non_float_regs, r.r0) + + #the actual call + if IS_PPC_32: + self.mc.bl_abs(adr) + self.mc.lwz(r.r0.value, r.SP.value, stack_space + WORD) + else: + self.mc.std(r.r2.value, r.SP.value, 3 * WORD) + self.mc.load_from_addr(r.r0, adr) + self.mc.load_from_addr(r.r2, adr + WORD) + self.mc.load_from_addr(r.r11, adr + 2 * WORD) + self.mc.mtctr(r.r0.value) + self.mc.bctrl() + self.mc.ld(r.r2.value, r.SP.value, 3 * WORD) + self.mc.ld(r.r0.value, r.SP.value, stack_space + 2 * WORD) + self.mc.mtlr(r.r0.value) + self.mc.addi(r.SP.value, r.SP.value, stack_space) + + self.mark_gc_roots(force_index) + regalloc.possibly_free_vars(args) + + # restore the arguments stored on the stack + if result is not None: + resloc = regalloc.after_call(result) + +class FieldOpAssembler(object): + + _mixin_ = True + def emit_setfield_gc(self, op, arglocs, regalloc): value_loc, base_loc, ofs, size = arglocs if size.value == 8: @@ -356,6 +502,11 @@ emit_getfield_raw_pure = emit_getfield_gc emit_getfield_gc_pure = emit_getfield_gc + +class ArrayOpAssembler(object): + + _mixin_ = True + def emit_arraylen_gc(self, op, arglocs, regalloc): res, base_loc, ofs = arglocs if IS_PPC_32: @@ -431,6 +582,11 @@ emit_getarrayitem_raw = emit_getarrayitem_gc emit_getarrayitem_gc_pure = emit_getarrayitem_gc + +class StrOpAssembler(object): + + _mixin_ = True + def emit_strlen(self, op, arglocs, regalloc): l0, l1, res = arglocs if l1.is_imm(): @@ -570,7 +726,12 @@ else: raise AssertionError("bad unicode item size") - emit_unicodelen = emit_strlen + +class UnicodeOpAssembler(object): + + _mixin_ = True + + emit_unicodelen = StrOpAssembler.emit_strlen # XXX 64 bit adjustment def emit_unicodegetitem(self, op, arglocs, regalloc): @@ -606,6 +767,29 @@ else: assert 0, itemsize.value + +class AllocOpAssembler(object): + + _mixin_ = True + + # from: ../x86/regalloc.py:750 + # called from regalloc + # XXX kill this function at some point + def _regalloc_malloc_varsize(self, size, size_box, vloc, vbox, + ofs_items_loc, regalloc, result): + if IS_PPC_32: + self.mc.mullw(size.value, size.value, vloc.value) + else: + self.mc.mulld(size.value, size.value, vloc.value) + if ofs_items_loc.is_imm(): + self.mc.addi(size.value, size.value, ofs_items_loc.value) + else: + self.mc.add(size.value, size.value, ofs_items_loc.value) + force_index = self.write_new_force_index() + regalloc.force_spill_var(vbox) + self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, + result=result) + def emit_new(self, op, arglocs, regalloc): # XXX do exception handling here! pass @@ -635,145 +819,6 @@ emit_newstr = emit_new_array emit_newunicode = emit_new_array - def emit_same_as(self, op, arglocs, regalloc): - argloc, resloc = arglocs - self.regalloc_mov(argloc, resloc) - - emit_cast_ptr_to_int = emit_same_as - emit_cast_int_to_ptr = emit_same_as - - def emit_call(self, op, args, regalloc, force_index=-1): - adr = args[0].value - arglist = op.getarglist()[1:] - if force_index == -1: - force_index = self.write_new_force_index() - self._emit_call(force_index, adr, arglist, regalloc, op.result) - descr = op.getdescr() - #XXX Hack, Hack, Hack - if op.result and not we_are_translated() and not isinstance(descr, - LoopToken): - #XXX check result type - loc = regalloc.rm.call_result_location(op.result) - size = descr.get_result_size(False) - signed = descr.is_result_signed() - self._ensure_result_bit_extension(loc, size, signed) - - # XXX 64 bit adjustment - def _emit_call(self, force_index, adr, args, regalloc, result=None): - n_args = len(args) - reg_args = count_reg_args(args) - - n = 0 # used to count the number of words pushed on the stack, so we - # can later modify the SP back to its original value - stack_args = [] - if n_args > reg_args: - # first we need to prepare the list so it stays aligned - count = 0 - for i in range(reg_args, n_args): - arg = args[i] - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - count += 1 - n += WORD - stack_args.append(arg) - if count % 2 != 0: - n += WORD - stack_args.append(None) - - # adjust SP and compute size of parameter save area - if IS_PPC_32: - stack_space = BACKCHAIN_SIZE + len(stack_args) * WORD - while stack_space % (4 * WORD) != 0: - stack_space += 1 - self.mc.stwu(r.SP.value, r.SP.value, -stack_space) - self.mc.mflr(r.r0.value) - self.mc.stw(r.r0.value, r.SP.value, stack_space + WORD) - else: - # ABI fixed frame + 8 GPRs + arguments - stack_space = (6 + MAX_REG_PARAMS + len(stack_args)) * WORD - while stack_space % (2 * WORD) != 0: - stack_space += 1 - self.mc.stdu(r.SP.value, r.SP.value, -stack_space) - self.mc.mflr(r.r0.value) - self.mc.std(r.r0.value, r.SP.value, stack_space + 2 * WORD) - - # then we push everything on the stack - for i, arg in enumerate(stack_args): - if IS_PPC_32: - abi = 2 - else: - abi = 14 - offset = (abi + i) * WORD - if arg is not None: - self.mc.load_imm(r.r0, arg.value) - if IS_PPC_32: - self.mc.stw(r.r0.value, r.SP.value, offset) - else: - self.mc.std(r.r0.value, r.SP.value, offset) - - # collect variables that need to go in registers - # and the registers they will be stored in - num = 0 - count = 0 - non_float_locs = [] - non_float_regs = [] - for i in range(reg_args): - arg = args[i] - if arg.type == FLOAT and count % 2 != 0: - assert 0, "not implemented yet" - reg = r.PARAM_REGS[num] - - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - non_float_locs.append(regalloc.loc(arg)) - non_float_regs.append(reg) - - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - num += 1 - count += 1 - - # spill variables that need to be saved around calls - regalloc.before_call(save_all_regs=2) - - # remap values stored in core registers - remap_frame_layout(self, non_float_locs, non_float_regs, r.r0) - - #the actual call - if IS_PPC_32: - self.mc.bl_abs(adr) - self.mc.lwz(r.r0.value, r.SP.value, stack_space + WORD) - else: - self.mc.std(r.r2.value, r.SP.value, 3 * WORD) - self.mc.load_from_addr(r.r0, adr) - self.mc.load_from_addr(r.r2, adr + WORD) - self.mc.load_from_addr(r.r11, adr + 2 * WORD) - self.mc.mtctr(r.r0.value) - self.mc.bctrl() - self.mc.ld(r.r2.value, r.SP.value, 3 * WORD) - self.mc.ld(r.r0.value, r.SP.value, stack_space + 2 * WORD) - self.mc.mtlr(r.r0.value) - self.mc.addi(r.SP.value, r.SP.value, stack_space) - - self.mark_gc_roots(force_index) - regalloc.possibly_free_vars(args) - - # restore the arguments stored on the stack - if result is not None: - resloc = regalloc.after_call(result) - - def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): - self.mc.mr(r.r0.value, r.SP.value) - if IS_PPC_32: - self.mc.cmpwi(r.r0.value, 0) - else: - self.mc.cmpdi(r.r0.value, 0) - self._emit_guard(guard_op, arglocs, c.EQ) - - emit_guard_call_release_gil = emit_guard_call_may_force def write_new_force_index(self): # for shadowstack only: get a new, unused force_index number and @@ -794,22 +839,27 @@ emit_jit_debug = emit_debug_merge_point + +class ForceOpAssembler(object): + + _mixin_ = True + + def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): + self.mc.mr(r.r0.value, r.SP.value) + if IS_PPC_32: + self.mc.cmpwi(r.r0.value, 0) + else: + self.mc.cmpdi(r.r0.value, 0) + self._emit_guard(guard_op, arglocs, c.EQ) + + emit_guard_call_release_gil = emit_guard_call_may_force + + +class OpAssembler(IntOpAssembler, GuardOpAssembler, + MiscOpAssembler, FieldOpAssembler, + ArrayOpAssembler, StrOpAssembler, + UnicodeOpAssembler, ForceOpAssembler, + AllocOpAssembler): + def nop(self): self.mc.ori(0, 0, 0) - - # from: ../x86/regalloc.py:750 - # called from regalloc - # XXX kill this function at some point - def _regalloc_malloc_varsize(self, size, size_box, vloc, vbox, ofs_items_loc, regalloc, result): - if IS_PPC_32: - self.mc.mullw(size.value, size.value, vloc.value) - else: - self.mc.mulld(size.value, size.value, vloc.value) - if ofs_items_loc.is_imm(): - self.mc.addi(size.value, size.value, ofs_items_loc.value) - else: - self.mc.add(size.value, size.value, ofs_items_loc.value) - force_index = self.write_new_force_index() - regalloc.force_spill_var(vbox) - self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, - result=result) From noreply at buildbot.pypy.org Thu Nov 17 17:40:59 2011 From: noreply at buildbot.pypy.org (hager) Date: Thu, 17 Nov 2011 17:40:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove some comments and blank lines Message-ID: <20111117164059.2E2AB82A9D@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r49498:13d56fd8042b Date: 2011-11-17 17:40 +0100 http://bitbucket.org/pypy/pypy/changeset/13d56fd8042b/ Log: Remove some comments and blank lines diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -27,15 +27,10 @@ self.save_exc = save_exc self.fcond=fcond -#class OpAssembler(object): class IntOpAssembler(object): _mixin_ = True - # ******************************************************** - # * I N T O P E R A T I O N S * - # ******************************************************** - def emit_int_add(self, op, arglocs, regalloc): l0, l1, res = arglocs if l0.is_imm(): @@ -435,6 +430,7 @@ if result is not None: resloc = regalloc.after_call(result) + class FieldOpAssembler(object): _mixin_ = True From noreply at buildbot.pypy.org Thu Nov 17 18:23:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 Nov 2011 18:23:45 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111117172345.8B1AE82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49499:fe6032125f68 Date: 2011-11-17 18:23 +0100 http://bitbucket.org/pypy/pypy/changeset/fe6032125f68/ Log: Fix. diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1788,7 +1788,8 @@ cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = d.setdefault(parent, {}) # try again + # try again + return _subarray._makeptr(parent, baseoffset_or_fieldname, solid) try: subarray = cache[baseoffset_or_fieldname] except KeyError: From noreply at buildbot.pypy.org Thu Nov 17 21:00:29 2011 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 17 Nov 2011 21:00:29 +0100 (CET) Subject: [pypy-commit] pypy default: issue887: cpyext: add support for mp_ass_subscript Message-ID: <20111117200029.EC5D582A9D@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r49500:f46e309f89bd Date: 2011-11-17 20:58 +0100 http://bitbucket.org/pypy/pypy/changeset/f46e309f89bd/ Log: issue887: cpyext: add support for mp_ass_subscript diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -9,7 +9,8 @@ unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, - cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, readbufferproc) + cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, + readbufferproc) from pypy.module.cpyext.pyobject import from_ref from pypy.module.cpyext.pyerrors import PyErr_Occurred from pypy.module.cpyext.state import State @@ -175,6 +176,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_objobjargproc(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 2) + w_key, w_value = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, w_value) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.wrap(res) + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -397,3 +397,31 @@ def __str__(self): return "text" assert module.tp_str(C()) == "text" + + def test_mp_ass_subscript(self): + module = self.import_extension('foo', [ + ("new_obj", "METH_NOARGS", + ''' + PyObject *obj; + Foo_Type.tp_as_mapping = &tp_as_mapping; + tp_as_mapping.mp_ass_subscript = mp_ass_subscript; + if (PyType_Ready(&Foo_Type) < 0) return NULL; + obj = PyObject_New(PyObject, &Foo_Type); + return obj; + ''' + )], + ''' + static int + mp_ass_subscript(PyObject *self, PyObject *key, PyObject *value) + { + PyErr_SetNone(PyExc_ZeroDivisionError); + return -1; + } + PyMappingMethods tp_as_mapping; + static PyTypeObject Foo_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "foo.foo", + }; + ''') + obj = module.new_obj() + raises(ZeroDivisionError, obj.__setitem__, 5, None) From noreply at buildbot.pypy.org Thu Nov 17 22:53:37 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 17 Nov 2011 22:53:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default Message-ID: <20111117215337.0F8D082A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49501:9733ec985293 Date: 2011-11-17 14:39 -0500 http://bitbucket.org/pypy/pypy/changeset/9733ec985293/ Log: merged default diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -412,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -567,6 +567,11 @@ import time import thread + # XXX workaround for now: to prevent deadlocks, call + # sys._current_frames() once before starting threads. + # This is an issue in non-translated versions only. + sys._current_frames() + thread_id = thread.get_ident() def other_thread(): print "thread started" diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1723,7 +1723,7 @@ class _subarray(_parentable): # only for direct_fieldptr() # and direct_arrayitems() _kind = "subarray" - _cache = weakref.WeakKeyDictionary() # parentarray -> {subarrays} + _cache = {} # TYPE -> weak{ parentarray -> {subarrays} } def __init__(self, TYPE, parent, baseoffset_or_fieldname): _parentable.__init__(self, TYPE) @@ -1781,10 +1781,15 @@ def _makeptr(parent, baseoffset_or_fieldname, solid=False): try: - cache = _subarray._cache.setdefault(parent, {}) + d = _subarray._cache[parent._TYPE] + except KeyError: + d = _subarray._cache[parent._TYPE] = weakref.WeakKeyDictionary() + try: + cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = _subarray._cache.setdefault(parent, {}) # try again + # try again + return _subarray._makeptr(parent, baseoffset_or_fieldname, solid) try: subarray = cache[baseoffset_or_fieldname] except KeyError: @@ -1805,14 +1810,18 @@ raise NotImplementedError('_subarray._getid()') def _cleanup_cache(): - newcache = weakref.WeakKeyDictionary() - for key, value in _subarray._cache.items(): - try: - if not key._was_freed(): - newcache[key] = value - except RuntimeError: - pass # ignore "accessing subxxx, but already gc-ed parent" - _subarray._cache = newcache + for T, d in _subarray._cache.items(): + newcache = weakref.WeakKeyDictionary() + for key, value in d.items(): + try: + if not key._was_freed(): + newcache[key] = value + except RuntimeError: + pass # ignore "accessing subxxx, but already gc-ed parent" + if newcache: + _subarray._cache[T] = newcache + else: + del _subarray._cache[T] _cleanup_cache = staticmethod(_cleanup_cache) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -864,7 +864,7 @@ except AttributeError: if not isinstance(tp, lltype.Primitive): unsigned = False - elif tp in (lltype.Signed, FLOAT, DOUBLE): + elif tp in (lltype.Signed, FLOAT, DOUBLE, llmemory.Address): unsigned = False elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): unsigned = True From noreply at buildbot.pypy.org Thu Nov 17 22:53:38 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 17 Nov 2011 22:53:38 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: subclassing types, failing tests for bools (why are they so special :() Message-ID: <20111117215338.3E69E82A9E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49502:057601b01a65 Date: 2011-11-17 16:53 -0500 http://bitbucket.org/pypy/pypy/changeset/057601b01a65/ Log: subclassing types, failing tests for bools (why are they so special :() diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -9,7 +9,7 @@ from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, descr_new_array, scalar_w, SingleDimArray) from pypy.module.micronumpy import interp_ufuncs -from pypy.rlib.objectmodel import specialize +from pypy.rlib.objectmodel import specialize, instantiate class BogusBytecode(Exception): @@ -112,6 +112,9 @@ assert isinstance(what, tp) return what + def allocate_instance(self, klass, w_subtype): + return instantiate(klass) + class FloatObject(W_Root): tp = FakeSpace.w_float def __init__(self, floatval): diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -11,12 +11,14 @@ MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () -def dtype_getter(name): - @staticmethod +def new_dtype_getter(name): def get_dtype(space): from pypy.module.micronumpy.interp_dtype import get_dtype_cache return getattr(get_dtype_cache(space), "w_%sdtype" % name) - return get_dtype + def new(space, w_subtype, w_value): + dtype = get_dtype(space) + return dtype.itemtype.coerce_subtype(space, w_subtype, w_value) + return new, staticmethod(get_dtype) class PrimitiveBox(object): _mixin_ = True @@ -30,13 +32,7 @@ class W_GenericBox(Wrappable): _attrs_ = () - def descr__new__(space, w_subtype, w_value): - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - # XXX: not correct if w_subtype is a user defined subclass of a builtin - # type, this whole thing feels a little wrong. - for dtype in get_dtype_cache(space).builtin_dtypes: - if w_subtype is dtype.w_box_type: - return dtype.coerce(space, w_value) + def descr__new__(space, w_subtype, __args__): assert isinstance(w_subtype, W_TypeObject) raise operationerrfmt(space.w_TypeError, "cannot create '%s' instances", w_subtype.get_module_type_name() @@ -98,7 +94,7 @@ class W_BoolBox(W_GenericBox, PrimitiveBox): - get_dtype = dtype_getter("bool") + descr__new__, get_dtype = new_dtype_getter("bool") class W_NumberBox(W_GenericBox): _attrs_ = () @@ -113,7 +109,7 @@ pass class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): - get_dtype = dtype_getter("int8") + descr__new__, get_dtype = new_dtype_getter("int8") class W_UInt8Box(W_UnsignedIntgerBox, PrimitiveBox): pass @@ -131,13 +127,13 @@ pass class W_LongBox(W_SignedIntegerBox, PrimitiveBox): - get_dtype = dtype_getter("long") + descr__new__, get_dtype = new_dtype_getter("long") class W_ULongBox(W_UnsignedIntgerBox, PrimitiveBox): pass class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): - get_dtype = dtype_getter("int64") + descr__new__, get_dtype = new_dtype_getter("int64") class W_UInt64Box(W_UnsignedIntgerBox, PrimitiveBox): pass @@ -149,10 +145,10 @@ _attrs_ = () class W_Float32Box(W_FloatingBox, PrimitiveBox): - get_dtype = dtype_getter("float32") + descr__new__, get_dtype = new_dtype_getter("float32") class W_Float64Box(W_FloatingBox, PrimitiveBox): - get_dtype = dtype_getter("float64") + descr__new__, get_dtype = new_dtype_getter("float64") @@ -160,6 +156,7 @@ __module__ = "numpy", __new__ = interp2app(W_GenericBox.descr__new__.im_func), + __str__ = interp2app(W_GenericBox.descr_str), __repr__ = interp2app(W_GenericBox.descr_repr), __int__ = interp2app(W_GenericBox.descr_int), @@ -186,6 +183,7 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpy", + __new__ = interp2app(W_BoolBox.descr__new__.im_func), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -202,6 +200,7 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpy", + __new__ = interp2app(W_Int8Box.descr__new__.im_func), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntgerBox.typedef, @@ -258,4 +257,6 @@ W_Float64Box.typedef = TypeDef("float64", (W_FloatingBox.typedef, float_typedef), __module__ = "numpy", + + __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) \ No newline at end of file diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -186,6 +186,12 @@ assert numpy.bool_("") is numpy.False_ assert type(numpy.True_) is type(numpy.False_) is numpy.bool_ + class X(numpy.bool_): + pass + + assert type(X(True)) is numpy.bool_ + assert X(True) is numpy.True_ + def test_int8(self): import numpy @@ -206,8 +212,19 @@ assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] - a = numpy.array([1, 2, 3], numpy.float64) + a = numpy.array([1, 2, 3], numpy.float64) assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 assert numpy.float64(2.0) == 2.0 + + def test_subclass_type(self): + import numpy + + class X(numpy.float64): + def m(self): + return self + 2 + + b = X(10) + assert type(b) is X + assert b.m() == 12 diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -65,7 +65,14 @@ def coerce(self, space, w_item): if isinstance(w_item, self.BoxType): return w_item - return self._coerce(space, w_item) + return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) + + def coerce_subtype(self, space, w_subtype, w_item): + # XXX: ugly + w_obj = space.allocate_instance(self.BoxType, w_subtype) + assert isinstance(w_obj, self.BoxType) + w_obj.__init__(self._coerce(space, w_item).value) + return w_obj def _coerce(self, space, w_item): raise NotImplementedError From noreply at buildbot.pypy.org Thu Nov 17 23:06:12 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 17 Nov 2011 23:06:12 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: fix the bool tests Message-ID: <20111117220612.4C0E182A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49503:926f651facb4 Date: 2011-11-17 17:05 -0500 http://bitbucket.org/pypy/pypy/changeset/926f651facb4/ Log: fix the bool tests diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -169,6 +169,10 @@ else: return self.False + def coerce_subtype(self, space, w_subtype, w_item): + # Doesn't return subclasses so it can return the constants. + return self._coerce(space, w_item) + def _coerce(self, space, w_item): return self.box(space.is_true(w_item)) From noreply at buildbot.pypy.org Thu Nov 17 23:50:23 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 17 Nov 2011 23:50:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: bit of cleanup Message-ID: <20111117225023.B716082A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49504:d779e7a20055 Date: 2011-11-17 17:50 -0500 http://bitbucket.org/pypy/pypy/changeset/d779e7a20055/ Log: bit of cleanup diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -609,10 +609,10 @@ assert c[i] == func(b[i], 3) -class AppTestSupport(object): +class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct - cls.space = gettestobjspace(usemodules=('micronumpy',)) + BaseNumpyAppTest.setup_class.im_func(cls) cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): From noreply at buildbot.pypy.org Fri Nov 18 02:06:02 2011 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 18 Nov 2011 02:06:02 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: repr and str pass tests Message-ID: <20111118010602.4925E82A9D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49505:1be076ca0053 Date: 2011-11-18 03:04 +0200 http://bitbucket.org/pypy/pypy/changeset/1be076ca0053/ Log: repr and str pass tests diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -53,7 +53,7 @@ order = 'C' else: order = space.str_w(w_order) - if order != 'C': # or order != 'F': + if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) shape, elems_w = _find_shape_and_elems(space, w_item_or_iterable) @@ -358,52 +358,20 @@ def descr_repr(self, space): res = StringBuilder() + res.append("array(") concrete = self.get_concrete() - i = concrete.start_iter() - start = True - dtype = self.find_dtype() - while not i.done(): - if start: - start = False - else: - res.append(", ") - res.append(dtype.str_format(concrete.getitem(i.offset))) - i = i.next() - return space.wrap(res.build()) - res = StringBuilder() - res.append("array([") - concrete = self.get_concrete() - i = concrete.start_iter()#offset=0, indices=[0]) start = True dtype = concrete.find_dtype() if not concrete.find_size(): + res.append('[]') if len(self.shape) > 1: #This is for numpy compliance: an empty slice reports its shape - res.append("], shape=(") + res.append(", shape=(") self_shape = str(self.shape) res.append_slice(str(self_shape), 1, len(self_shape) - 1) res.append(')') - else: - res.append(']') else: - if self.shape[0] > 1000: - for xx in range(3): - if start: - start = False - else: - res.append(", ") - res.append(dtype.str_format(concrete.eval(i))) - i = i.next() - res.append(', ...') - i = concrete.start_iter(offset=self.shape[0] - 3) - while not i.done(): - if start: - start = False - else: - res.append(", ") - res.append(dtype.str_format(concrete.eval(i))) - i = i.next() - res.append(']') + self.to_str(space, 1, res, indent=' ') if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ not self.find_size(): @@ -411,51 +379,85 @@ res.append(")") return space.wrap(res.build()) - - def to_str(self, comma, builder, indent=' '): + def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): + '''Modifies builder with a representation of the array/slice + The items will be seperated by a comma if comma is 1 + Multidimensional arrays/slices will span a number of lines, + each line will begin with indent. + ''' + if self.size < 1: + builder.append('[]') + return + if self.size > 1000: + #Once this goes True it does not go back to False for recursive calls + use_ellipsis = True dtype = self.find_dtype() ndims = len(self.shape) + i = 0 + start = True + builder.append('[') if ndims > 1: + if use_ellipsis: + for i in range(3): + if start: + start = False + else: + builder.append(',' * comma + '\n') + if ndims == 3: + builder.append('\n' + indent) + else: + builder.append(indent) + #create_slice requires len(chunks)>1 in order to reduce shape + view = self.create_slice(space, [(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]) + view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + builder.append('\n' + indent + '..., ') + i = self.shape[0] - 3 + while i < self.shape[0]: + if start: + start = False + else: + builder.append(',' * comma + '\n') + if ndims == 3: + builder.append('\n' + indent) + else: + builder.append(indent) + #create_slice requires len(chunks)>1 in order to reduce shape + view = self.create_slice(space, [(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]) + view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + i += 1 + elif ndims == 1: + #This should not directly access the start,shards: what happens if order changes? + spacer = ',' * comma + ' ' + item = self.start + i = 0 + if use_ellipsis: + for i in range(3): + if start: + start = False + else: + builder.append(spacer) + builder.append(dtype.str_format(self.getitem(item))) + item += self.shards[0] + #Add a comma only if comma is False - this prevents adding two commas + builder.append(spacer + '...' + ',' * (1 - comma)) + item = self.start + self.backshards[0] - 2 * self.shards[0] + i = self.shape[0] - 3 + while i < self.shape[0]: + if start: + start = False + else: + builder.append(spacer) + builder.append(dtype.str_format(self.getitem(item))) + item += self.shards[0] + i += 1 + else: builder.append('[') - builder.append("xxx") - i = self.start_iter() - while not i.done(): - i.to_str(comma, builder, indent=indent + ' ') - builder.append('\n') - i = i.next() - builder.append(']') - elif ndims == 1: - builder.append('[') - spacer = ',' * comma + ' ' - if self.shape[0] > 1000: - #This is wrong. Use iterator - firstSlice = NDimSlice(self, self.signature, 0, [3, ], [2, ], [3, ]) - builder.append(firstSlice.to_str(comma, builder, indent)) - builder.append(',' * comma + ' ..., ') - lastSlice = NDimSlice(self, self.signature, - self.backshards[0] - 2 * self.shards[0], [3, ], [2, ], [3, ]) - builder.append(lastSlice.to_str(comma, builder, indent)) - else: - strs = [] - i = self.start_iter() - while not i.done(): - strs.append(dtype.str_format(self.eval(i))) - i = i.next() - builder.append(spacer.join(strs)) - builder.append(']') - else: - builder.append(dtype.str_format(self.eval(self.start))) - return builder.build() + builder.append(']') def descr_str(self, space): - return self.descr_repr(space) - # Simple implementation so that we can see the array. - # Since what we want is to print a plethora of 2d views, let - # a slice do the work for us. - concrete = self.get_concrete() - s = StringBuilder() - r = NDimSlice(concrete, self.signature, 0, self.shards, self.backshards, self.shape) - return space.wrap(r.to_str(False, s)) + ret = StringBuilder() + self.to_str(space, 0, ret, ' ') + return space.wrap(ret.build()) def _index_of_single_item(self, space, w_idx): if space.isinstance_w(w_idx, space.w_int): @@ -523,7 +525,6 @@ return [space.decode_index4(w_item, self.shape[i]) for i, w_item in enumerate(space.fixedview(w_idx))] - def descr_getitem(self, space, w_idx): if self._single_item_result(space, w_idx): concrete = self.get_concrete() @@ -667,6 +668,9 @@ def start_iter(self): return ConstantIterator() + def to_str(self, space, comma, builder, indent=' '): + builder.append(self.dtype.str_format(self.value)) + class VirtualArray(BaseArray): """ Class for representing virtual arrays, such as binary ops or ufuncs @@ -934,7 +938,7 @@ def start_iter(self, offset=0, indices=None): if self.order == 'C': return ArrayIterator(self.size, offset=offset) - raise NotImplementedError # use ViewIterator simply, test it + raise NotImplementedError # use ViewIterator simply, test it def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -33,7 +33,7 @@ def test_create_slice_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') s = a.create_slice(space, [(3, 0, 0, 1)]) assert s.start == 3 assert s.shards == [10, 50] @@ -52,7 +52,7 @@ def test_create_slice_c(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') s = a.create_slice(space, [(3, 0, 0, 1)]) assert s.start == 45 assert s.shards == [3, 1] @@ -72,7 +72,7 @@ def test_slice_of_slice_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') s = a.create_slice(space, [(5, 0, 0, 1)]) assert s.start == 5 s2 = s.create_slice(space, [(3, 0, 0, 1)]) @@ -86,29 +86,29 @@ assert s2.shape == [2, 3] assert s2.shards == [3, 50] assert s2.backshards == [3, 100] - assert s2.start == 1*15 + 2*3 + assert s2.start == 1 * 15 + 2 * 3 def test_slice_of_slice_c(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') s = a.create_slice(space, [(5, 0, 0, 1)]) - assert s.start == 15*5 + assert s.start == 15 * 5 s2 = s.create_slice(space, [(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.shards == [1] assert s2.parent is a assert s2.backshards == [2] - assert s2.start == 5*15 + 3*3 + assert s2.start == 5 * 15 + 3 * 3 s = a.create_slice(space, [(1, 5, 3, 2)]) s2 = s.create_slice(space, [(0, 2, 1, 2), (2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.shards == [45, 1] assert s2.backshards == [45, 2] - assert s2.start == 1*15 + 2*3 + assert s2.start == 1 * 15 + 2 * 3 def test_negative_step_f(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') s = a.create_slice(space, [(9, -1, -2, 5)]) assert s.start == 9 assert s.shards == [-2, 10, 50] @@ -116,16 +116,16 @@ def test_negative_step_c(self): space = self.space - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), order='C') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') s = a.create_slice(space, [(9, -1, -2, 5)]) assert s.start == 135 assert s.shards == [-30, 3, 1] assert s.backshards == [-120, 12, 2] def test_index_of_single_item_f(self): - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'F') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) - assert r == 1 + 2 * 10 + 2 * 50 + assert r == 1 + 2 * 10 + 2 * 50 s = a.create_slice(self.space, [(0, 10, 1, 10), (2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) @@ -133,7 +133,7 @@ assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 1)) def test_index_of_single_item_c(self): - a = NDimArray(10*5*3, [10, 5, 3], MockDtype(), 'C') + a = NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 s = a.create_slice(self.space, [(0, 10, 1, 10), (2, 0, 0, 1)]) @@ -225,8 +225,8 @@ a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): - a[(i,)] = i+1 - assert a[i] == i+1 + a[(i,)] = i + 1 + assert a[i] == i + 1 a[()] = range(5) for i in xrange(5): assert a[i] == i @@ -256,7 +256,7 @@ assert a[3] == 1. assert a[4] == 11. a = zeros(10) - a[::2][::-1][::2] = array(range(1,4)) + a[::2][::-1][::2] = array(range(1, 4)) assert a[8] == 1. assert a[4] == 2. assert a[0] == 3. @@ -275,11 +275,11 @@ a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. - + def test_scalar(self): from numpy import array a = array(3) - assert a[0] == 3 + assert a[0] == 3 def test_len(self): from numpy import array @@ -435,8 +435,8 @@ a = array(range(5), float) b = a ** a for i in range(5): - print b[i], i**i - assert b[i] == i**i + print b[i], i ** i + assert b[i] == i ** i def test_pow_other(self): from numpy import array @@ -455,7 +455,7 @@ def test_mod(self): from numpy import array - a = array(range(1,6)) + a = array(range(1, 6)) b = a % a for i in range(5): assert b[i] == 0 @@ -483,7 +483,7 @@ def test_pos(self): from numpy import array - a = array([1.,-2.,3.,-4.,-5.]) + a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): assert b[i] == a[i] @@ -494,7 +494,7 @@ def test_neg(self): from numpy import array - a = array([1.,-2.,3.,-4.,-5.]) + a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): assert b[i] == -a[i] @@ -505,7 +505,7 @@ def test_abs(self): from numpy import array - a = array([1.,-2.,3.,-4.,-5.]) + a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): assert b[i] == abs(a[i]) @@ -534,7 +534,7 @@ s = a[1:5] assert len(s) == 4 for i in range(4): - assert s[i] == a[i+1] + assert s[i] == a[i + 1] s = (a + a)[1:2] assert len(s) == 1 @@ -548,7 +548,7 @@ s = a[1:9:2] assert len(s) == 4 for i in range(4): - assert s[i] == a[2*i+1] + assert s[i] == a[2 * i + 1] def test_slice_update(self): from numpy import array @@ -559,13 +559,12 @@ a[2] = 20 assert s[2] == 20 - def test_slice_invaidate(self): # check that slice shares invalidation list with from numpy import array a = array(range(5)) s = a[0:2] - b = array([10,11]) + b = array([10, 11]) c = s + b a[0] = 100 assert c[0] == 10 @@ -592,7 +591,7 @@ def test_prod(self): from numpy import array - a = array(range(1,6)) + a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 @@ -606,7 +605,7 @@ def test_max_add(self): from numpy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) - assert (a+a).max() == 11.4 + assert (a + a).max() == 11.4 def test_min(self): from numpy import array @@ -729,7 +728,7 @@ def test_shape(self): import numpy assert numpy.zeros(1).shape == (1,) - assert numpy.zeros((2, 2)).shape == (2,2) + assert numpy.zeros((2, 2)).shape == (2, 2) assert numpy.zeros((3, 1, 2)).shape == (3, 1, 2) assert numpy.array([[1], [2], [3]]).shape == (3, 1) assert len(numpy.zeros((3, 1, 2))) == 3 @@ -752,30 +751,30 @@ raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) - a[0,1,1] = 13 - a[1,2,1] = 15 + a[0, 1, 1] = 13 + a[1, 2, 1] = 15 b = a[0] assert len(b) == 3 assert b.shape == (3, 2) - assert b[1,1] == 13 + assert b[1, 1] == 13 b = a[1] assert b.shape == (3, 2) - assert b[2,1] == 15 - b = a[:,1] + assert b[2, 1] == 15 + b = a[:, 1] assert b.shape == (4, 2) - assert b[0,1] == 13 - b = a[:,1,:] + assert b[0, 1] == 13 + b = a[:, 1, :] assert b.shape == (4, 2) - assert b[0,1] == 13 + assert b[0, 1] == 13 b = a[1, 2] assert b[1] == 15 b = a[:] assert b.shape == (4, 3, 2) - assert b[1,2,1] == 15 - assert b[0,1,1] == 13 - b = a[:][:,1][:] - assert b[2,1] == 0.0 - assert b[0,1] == 13 + assert b[1, 2, 1] == 15 + assert b[0, 1, 1] == 13 + b = a[:][:, 1][:] + assert b[2, 1] == 0.0 + assert b[0, 1] == 13 raises(IndexError, b.__getitem__, (4, 1)) assert a[0][1][1] == 13 assert a[1][2][1] == 15 @@ -802,15 +801,15 @@ assert (a == [[1, 2], [3, 4]]).all() a[1] = numpy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:,1] = numpy.array([8, 10]) + a[:, 1] = numpy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0,::-1] = numpy.array([11, 12]) + a[0, :: -1] = numpy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): from numpy import array a = array([[1, 2], [3, 4], [5, 6]]) - assert ((a + a) == array([[1+1, 2+2], [3+3, 4+4], [5+5, 6+6]])).all() + assert ((a + a) == array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): from numpy import array @@ -838,8 +837,8 @@ import numpy a = numpy.zeros((100, 100)) b = numpy.ones(100) - a[:,:] = b - assert a[13,15] == 1 + a[:, :] = b + assert a[13, 15] == 1 class AppTestSupport(object): def setup_class(cls): @@ -872,11 +871,11 @@ def test_repr_multi(self): from numpy import array, zeros - a = zeros((3,4)) + a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]])''' - a = zeros((2,3,4)) + a = zeros((2, 3, 4)) assert repr(a) == '''array([[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]], @@ -893,18 +892,18 @@ a = zeros(2002) b = a[::2] assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array((range(5),range(5,10)), dtype="int16") - b=a[1,2:] + a = array((range(5), range(5, 10)), dtype="int16") + b = a[1, 2:] assert repr(b) == "array([7, 8, 9], dtype=int16)" #This is the way cpython numpy does it - an empty slice prints its shape - b=a[2:1,] + b = a[2:1, ] assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): from numpy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" - assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" + assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" a = zeros(1001) assert str(a) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" @@ -919,12 +918,14 @@ a = array(range(5), dtype="int16") assert str(a) == "[0 1 2 3 4]" - a = array((range(5),range(5,10)), dtype="int16") - assert str(a) == "[[0 1 2 3 4],\n [5 6 7 8 9]]" + a = array((range(5), range(5, 10)), dtype="int16") + assert str(a) == "[[0 1 2 3 4]\n [5 6 7 8 9]]" - a = array(3,dtype=int) + a = array(3, dtype=int) assert str(a) == "3" + a = zeros((400, 400), dtype=int) + assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" def test_str_slice(self): from numpy import array, zeros a = array(range(5), float) @@ -933,8 +934,8 @@ a = zeros(2002) b = a[::2] assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - a = array((range(5),range(5,10)), dtype="int16") - b=a[1,2:] + a = array((range(5), range(5, 10)), dtype="int16") + b = a[1, 2:] assert str(b) == "[7 8 9]" - b=a[2:1,] + b = a[2:1, ] assert str(b) == "[]" From noreply at buildbot.pypy.org Fri Nov 18 10:16:40 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 10:16:40 +0100 (CET) Subject: [pypy-commit] pypy jitdriver-setparam-all: Change set_param API. Now we run set_param(driver-or-None, 'name', value) Message-ID: <20111118091640.ACF5582A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: jitdriver-setparam-all Changeset: r49506:f121b8ac7363 Date: 2011-11-18 11:15 +0200 http://bitbucket.org/pypy/pypy/changeset/f121b8ac7363/ Log: Change set_param API. Now we run set_param(driver-or-None, 'name', value) instead of driver.set_param(...). This makes it possible to pass None, which changes the param on all drivers. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,14 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + From noreply at buildbot.pypy.org Fri Nov 18 10:17:28 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:28 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: A branch to simplify the backend interface: instead of Message-ID: <20111118091728.CBD6182A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49507:62ea37257eb5 Date: 2011-11-18 10:13 +0100 http://bitbucket.org/pypy/pypy/changeset/62ea37257eb5/ Log: A branch to simplify the backend interface: instead of the various new_xxx resoperations, there is only one malloc_gc operation. Can also be used to do mallocs "in groups", when mallocing several small structures. From noreply at buildbot.pypy.org Fri Nov 18 10:17:30 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:30 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Attempt to collapse several NEW-like operations into a single Message-ID: <20111118091730.124EF82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49508:a85153701bb6 Date: 2011-11-10 21:32 +0100 http://bitbucket.org/pypy/pypy/changeset/a85153701bb6/ Log: Attempt to collapse several NEW-like operations into a single simpler MALLOC_GC operation: starting... diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -672,6 +672,7 @@ self.WB_ARRAY_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, lltype.Signed, llmemory.Address], lltype.Void)) self.write_barrier_descr = WriteBarrierDescr(self) + self.fielddescr_tid = self.write_barrier_descr.fielddescr_tid # def malloc_array(itemsize, tid, num_elem): type_id = llop.extract_ushort(llgroup.HALFWORD, tid) @@ -809,17 +810,52 @@ newops = [] known_lengths = {} # we can only remember one malloc since the next malloc can possibly - # collect - last_malloc = None + # collect; but we can try to collapse several known-size mallocs into + # one, both for performance and to reduce the number of write + # barriers. We do this on each "basic block" of operations, which in + # this case means between CALLs or unknown-size mallocs. + op_malloc_gc = None + v_last_malloc = None + previous_size = -1 + current_mallocs = {} + # for op in operations: if op.getopnum() == rop.DEBUG_MERGE_POINT: continue # ---------- record the ConstPtrs ---------- self.record_constptrs(op, gcrefs_output_list) + # ---------- fold the NEWxxx operations into MALLOC_GC ---------- if op.is_malloc(): - last_malloc = op.result + if op.getopnum() == rop.NEW: + descr = op.getdescr() + assert isinstance(descr, BaseSizeDescr) + if op_malloc_gc is None: + # it is the first we see: emit MALLOC_GC + op = ResOperation(rop.MALLOC_GC, + [ConstInt(descr.size)], + op.result) + op_malloc_gc = op + else: + # already a MALLOC_GC: increment its total size + total_size = op_malloc_gc.getarg(0).getint() + total_size += descr.size + op_malloc_gc.setarg(0, ConstInt(total_size)) + op = ResOperation(rop.INT_ADD, + [v_last_malloc, + ConstInt(previous_size)], + op.result) + previous_size = descr.size + v_last_malloc = op.result + newops.append(op) + # NEW: add a SETFIELD to initialize the GC header + op = ResOperation(rop.SETFIELD_GC, + [op.result, ConstInt(descr.tid)], + None, descr=self.fielddescr_tid) + newops.append(op) + continue + op_last_malloc = op elif op.can_malloc(): - last_malloc = None + op_last_malloc = None # ---------- write barrier for SETFIELD_GC ---------- if op.getopnum() == rop.SETFIELD_GC: val = op.getarg(0) diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -570,6 +570,68 @@ assert operations[1].getarg(2) == v_value assert operations[1].getdescr() == array_descr + def test_rewrite_assembler_new_to_malloc(self): + self.gc_ll_descr.translate_support_code = False + try: + S = lltype.GcStruct('S', ('x', lltype.Signed)) + sdescr = get_size_descr(self.gc_ll_descr, S) + sdescr.tid = 1234 + finally: + self.gc_ll_descr.translate_support_code = True + tiddescr = self.gc_ll_descr.fielddescr_tid + ops = parse(""" + [p1] + p0 = new(descr=sdescr) + jump() + """, namespace=locals()) + expected = parse(""" + [p1] + p0 = malloc_gc(%d) + setfield_gc(p0, 1234, descr=tiddescr) + jump() + """ % (sdescr.size,), namespace=locals()) + operations = get_deep_immutable_oplist(ops.operations) + operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, + operations, []) + equaloplists(operations, expected.operations) + + def test_rewrite_assembler_new3_to_malloc(self): + self.gc_ll_descr.translate_support_code = False + try: + S = lltype.GcStruct('S', ('x', lltype.Signed)) + sdescr = get_size_descr(self.gc_ll_descr, S) + sdescr.tid = 1234 + T = lltype.GcStruct('T', ('y', lltype.Signed), + ('z', lltype.Signed)) + tdescr = get_size_descr(self.gc_ll_descr, T) + tdescr.tid = 5678 + finally: + self.gc_ll_descr.translate_support_code = True + tiddescr = self.gc_ll_descr.fielddescr_tid + ops = parse(""" + [] + p0 = new(descr=sdescr) + p1 = new(descr=tdescr) + p2 = new(descr=sdescr) + jump() + """, namespace=locals()) + expected = parse(""" + [] + p0 = malloc_gc(%d) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = int_add(p0, %d) + setfield_gc(p1, 5678, descr=tiddescr) + p2 = int_add(p1, %d) + setfield_gc(p2, 1234, descr=tiddescr) + jump() + """ % (sdescr.size + tdescr.size + sdescr.size, + sdescr.size, + tdescr.size), namespace=locals()) + operations = get_deep_immutable_oplist(ops.operations) + operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, + operations, []) + equaloplists(operations, expected.operations) + def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -344,6 +344,7 @@ rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, + rop.MALLOC_GC, ): # list of opcodes never executed by pyjitpl continue raise AssertionError("missing %r" % (key,)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -470,6 +470,7 @@ 'NEW_ARRAY/1d', 'NEWSTR/1', 'NEWUNICODE/1', + 'MALLOC_GC/1', # added by llsupport/gc: GC malloc of ConstInt bytes '_MALLOC_LAST', 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend From noreply at buildbot.pypy.org Fri Nov 18 10:17:31 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:31 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: In-progress. Message-ID: <20111118091731.4D82B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49509:40d91227555d Date: 2011-11-17 17:30 +0100 http://bitbucket.org/pypy/pypy/changeset/40d91227555d/ Log: In-progress. diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -19,6 +19,7 @@ self._cache_size = {} self._cache_field = {} self._cache_array = {} + self._cache_arraylen = {} self._cache_call = {} self._cache_interiorfield = {} @@ -150,6 +151,18 @@ cachedict[fieldname] = fielddescr return fielddescr +def get_field_arraylen_descr(gccache, ARRAY): + cache = gccache._cache_arraylen + try: + return cache[ARRAY] + except KeyError: + tsc = gccache.translate_support_code + (_, _, ofs) = symbolic.get_array_token(ARRAY, tsc) + SignedFieldDescr = getFieldDescrClass(lltype.Signed) + result = SignedFieldDescr("len", ofs) + cache[ARRAY] = result + return result + # ____________________________________________________________ # ArrayDescrs @@ -270,6 +283,8 @@ else: assert isinstance(ARRAY, lltype.GcArray) arraydescr = getArrayDescrClass(ARRAY)() + arraydescr.field_arraylen_descr = get_field_arraylen_descr( + gccache, ARRAY) # verify basic assumption that all arrays' basesize and ofslength # are equal basesize, itemsize, ofslength = symbolic.get_array_token(ARRAY, False) diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -17,6 +17,7 @@ from pypy.jit.backend.llsupport.descr import GcCache, get_field_descr from pypy.jit.backend.llsupport.descr import GcPtrFieldDescr from pypy.jit.backend.llsupport.descr import get_call_descr +from pypy.jit.backend.llsupport.descr import get_field_arraylen_descr from pypy.rpython.memory.gctransform import asmgcroot # ____________________________________________________________ @@ -34,8 +35,6 @@ pass def do_write_barrier(self, gcref_struct, gcref_newptr): pass - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - return operations def can_inline_malloc(self, descr): return False def can_inline_malloc_varsize(self, descr, num_elem): @@ -61,6 +60,13 @@ rgc._make_sure_does_not_move(p) gcrefs_output_list.append(p) + def rewrite_assembler(self, cpu, operations, gcrefs_output_list): + # record all GCREFs, because the GC (or Boehm) cannot see them and + # keep them alive if they end up as constants in the assembler + for op in operations: + self.record_constptrs(op, gcrefs_output_list) + return operations + # ____________________________________________________________ class GcLLDescr_boehm(GcLLDescription): @@ -178,15 +184,6 @@ def get_funcptr_for_new(self): return self.funcptr_for_new - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # record all GCREFs too, because Boehm cannot see them and keep them - # alive if they end up as constants in the assembler - for op in operations: - self.record_constptrs(op, gcrefs_output_list) - return GcLLDescription.rewrite_assembler(self, cpu, operations, - gcrefs_output_list) - - # ____________________________________________________________ # All code below is for the hybrid or minimark GC @@ -800,114 +797,10 @@ llmemory.cast_ptr_to_adr(gcref_newptr)) def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # Perform two kinds of rewrites in parallel: - # - # - Add COND_CALLs to the write barrier before SETFIELD_GC and - # SETARRAYITEM_GC operations. - # - # - Record the ConstPtrs from the assembler. - # - newops = [] - known_lengths = {} - # we can only remember one malloc since the next malloc can possibly - # collect; but we can try to collapse several known-size mallocs into - # one, both for performance and to reduce the number of write - # barriers. We do this on each "basic block" of operations, which in - # this case means between CALLs or unknown-size mallocs. - op_malloc_gc = None - v_last_malloc = None - previous_size = -1 - current_mallocs = {} - # - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - continue - # ---------- record the ConstPtrs ---------- - self.record_constptrs(op, gcrefs_output_list) - # ---------- fold the NEWxxx operations into MALLOC_GC ---------- - if op.is_malloc(): - if op.getopnum() == rop.NEW: - descr = op.getdescr() - assert isinstance(descr, BaseSizeDescr) - if op_malloc_gc is None: - # it is the first we see: emit MALLOC_GC - op = ResOperation(rop.MALLOC_GC, - [ConstInt(descr.size)], - op.result) - op_malloc_gc = op - else: - # already a MALLOC_GC: increment its total size - total_size = op_malloc_gc.getarg(0).getint() - total_size += descr.size - op_malloc_gc.setarg(0, ConstInt(total_size)) - op = ResOperation(rop.INT_ADD, - [v_last_malloc, - ConstInt(previous_size)], - op.result) - previous_size = descr.size - v_last_malloc = op.result - newops.append(op) - # NEW: add a SETFIELD to initialize the GC header - op = ResOperation(rop.SETFIELD_GC, - [op.result, ConstInt(descr.tid)], - None, descr=self.fielddescr_tid) - newops.append(op) - continue - op_last_malloc = op - elif op.can_malloc(): - op_last_malloc = None - # ---------- write barrier for SETFIELD_GC ---------- - if op.getopnum() == rop.SETFIELD_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(1) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETFIELD_RAW) - # ---------- write barrier for SETARRAYITEM_GC ---------- - if op.getopnum() == rop.SETARRAYITEM_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier_array(newops, op.getarg(0), - op.getarg(1), v, - cpu, known_lengths) - op = op.copy_and_change(rop.SETARRAYITEM_RAW) - elif op.getopnum() == rop.NEW_ARRAY: - v_length = op.getarg(0) - if isinstance(v_length, ConstInt): - known_lengths[op.result] = v_length.getint() - # ---------- - newops.append(op) - return newops - - def _gen_write_barrier(self, newops, v_base, v_value): - args = [v_base, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB, args, None, - descr=self.write_barrier_descr)) - - def _gen_write_barrier_array(self, newops, v_base, v_index, v_value, - cpu, known_lengths): - if self.write_barrier_descr.get_write_barrier_from_array_fn(cpu) != 0: - # If we know statically the length of 'v', and it is not too - # big, then produce a regular write_barrier. If it's unknown or - # too big, produce instead a write_barrier_from_array. - LARGE = 130 - length = known_lengths.get(v_base, LARGE) - if length >= LARGE: - # unknown or too big: produce a write_barrier_from_array - args = [v_base, v_index, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, - None, - descr=self.write_barrier_descr)) - return - # fall-back case: produce a write_barrier - self._gen_write_barrier(newops, v_base, v_value) + rewriter = GcRewriterAssembler(self, cpu) + newops = rewriter.rewrite(operations) + return GcLLDescription.rewrite_assembler(self, cpu, newops, + gcrefs_output_list) def can_inline_malloc(self, descr): assert isinstance(descr, BaseSizeDescr) @@ -934,6 +827,146 @@ def freeing_block(self, start, stop): self.gcrootmap.freeing_block(start, stop) + +class GcRewriterAssembler(object): + # This class performs the following rewrites on the list of operations: + # + # - Remove the DEBUG_MERGE_POINTs. + # + # - Turn all NEW_xxx to MALLOC_GC operations, possibly followed by + # SETFIELDs in order set their GC fields. + # + # - Add COND_CALLs to the write barrier before SETFIELD_GC and + # SETARRAYITEM_GC operations. + + def __init__(self, gc_ll_descr, cpu): + self.gc_ll_descr = gc_ll_descr + self.cpu = cpu + self.tsc = self.gc_ll_descr.translate_support_code + + def rewrite(self, operations): + self.newops = [] + self.known_lengths = {} + # we can only remember one malloc since the next malloc can possibly + # collect; but we can try to collapse several known-size mallocs into + # one, both for performance and to reduce the number of write + # barriers. We do this on each "basic block" of operations, which in + # this case means between CALLs or unknown-size mallocs. + self.op_malloc_gc = None + self.v_last_malloc = None + self.previous_size = -1 + # + for op in operations: + if op.getopnum() == rop.DEBUG_MERGE_POINT: + continue + # ---------- fold the NEWxxx operations into MALLOC_GC ---------- + if op.is_malloc(): + if op.getopnum() == rop.NEW: + descr = op.getdescr() + assert isinstance(descr, BaseSizeDescr) + self.gen_malloc_const(descr.size, op.result) + self.gen_initialize_tid(op.result, descr.tid) + continue + if op.getopnum() == rop.NEW_ARRAY: + v_newlength = op.getarg(0) + if isinstance(v_newlength, ConstInt): + newlength = v_newlength.getint() + self.known_lengths[op.result] = newlength + descr = op.getdescr() + assert isinstance(descr, BaseArrayDescr) + basesize = descr.get_base_size(self.tsc) + itemsize = descr.get_item_size(self.tsc) + fullsize = basesize + newlength * itemsize + self.gen_malloc_const(fullsize, op.result) + self.gen_initialize_tid(op.result, descr.tid) + self.gen_initialize_len(op.result, v_newlength, descr) + continue + yyyyy + xxxx + elif op.can_malloc(): + self.op_malloc_gc = None + # ---------- write barrier for SETFIELD_GC ---------- + if op.getopnum() == rop.SETFIELD_GC: + val = op.getarg(0) + # no need for a write barrier in the case of previous malloc + if val is not last_malloc: + v = op.getarg(1) + if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and + bool(v.value)): # store a non-NULL + self.gen_write_barrier(op.getarg(0), v) + op = op.copy_and_change(rop.SETFIELD_RAW) + # ---------- write barrier for SETARRAYITEM_GC ---------- + if op.getopnum() == rop.SETARRAYITEM_GC: + val = op.getarg(0) + # no need for a write barrier in the case of previous malloc + if val is not last_malloc: + v = op.getarg(2) + if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and + bool(v.value)): # store a non-NULL + self.gen_write_barrier_array(op.getarg(0), + op.getarg(1), v) + op = op.copy_and_change(rop.SETARRAYITEM_RAW) + # ---------- + self.newops.append(op) + return self.newops + + def gen_malloc_const(self, size, v_result): + if self.op_malloc_gc is None: + # it is the first we see: emit MALLOC_GC + op = ResOperation(rop.MALLOC_GC, + [ConstInt(size)], + v_result) + self.op_malloc_gc = op + else: + # already a MALLOC_GC: increment its total size + total_size = self.op_malloc_gc.getarg(0).getint() + total_size += size + self.op_malloc_gc.setarg(0, ConstInt(total_size)) + op = ResOperation(rop.INT_ADD, + [self.v_last_malloc, + ConstInt(self.previous_size)], + v_result) + self.previous_size = size + self.v_last_malloc = v_result + self.newops.append(op) + + def gen_initialize_tid(self, v_newgcobj, tid): + # produce a SETFIELD to initialize the GC header + op = ResOperation(rop.SETFIELD_GC, + [v_newgcobj, ConstInt(tid)], None, + descr=self.gc_ll_descr.fielddescr_tid) + self.newops.append(op) + + def gen_initialize_len(self, v_newgcobj, v_length, arraydescr): + # produce a SETFIELD to initialize the array length + op = ResOperation(rop.SETFIELD_GC, + [v_newgcobj, v_length], None, + descr=arraydescr.field_arraylen_descr) + self.newops.append(op) + + def gen_write_barrier(self, v_base, v_value): + args = [v_base, v_value] + self.newops.append(ResOperation(rop.COND_CALL_GC_WB, args, None, + descr=self.write_barrier_descr)) + + def gen_write_barrier_array(self, v_base, v_index, v_value): + write_barrier_descr = self.gc_ll_descr.write_barrier_descr + if write_barrier_descr.get_write_barrier_from_array_fn(self.cpu) != 0: + # If we know statically the length of 'v', and it is not too + # big, then produce a regular write_barrier. If it's unknown or + # too big, produce instead a write_barrier_from_array. + LARGE = 130 + length = self.known_lengths.get(v_base, LARGE) + if length >= LARGE: + # unknown or too big: produce a write_barrier_from_array + args = [v_base, v_index, v_value] + self.newops.append( + ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, None, + descr=write_barrier_descr)) + return + # fall-back case: produce a write_barrier + self.gen_write_barrier(v_base, v_value) + # ____________________________________________________________ def get_ll_description(gcdescr, translator=None, rtyper=None): diff --git a/pypy/jit/backend/llsupport/test/test_descr.py b/pypy/jit/backend/llsupport/test/test_descr.py --- a/pypy/jit/backend/llsupport/test/test_descr.py +++ b/pypy/jit/backend/llsupport/test/test_descr.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.jit.backend.llsupport.descr import * from pypy.jit.backend.llsupport import symbolic from pypy.rlib.objectmodel import Symbolic @@ -448,3 +448,19 @@ res = descr2.call_stub(rffi.cast(lltype.Signed, fnptr), [a, b, c], [], []) assert float(uint2singlefloat(rffi.r_uint(res))) == -11.5 + +def test_field_arraylen_descr(): + c0 = GcCache(True) + A1 = lltype.GcArray(lltype.Signed) + fielddescr = get_field_arraylen_descr(c0, A1) + assert isinstance(fielddescr, BaseFieldDescr) + ofs = fielddescr.offset + assert repr(ofs) == '< ArrayLengthOffset >' + # + fielddescr = get_field_arraylen_descr(c0, rstr.STR) + ofs = fielddescr.offset + assert repr(ofs) == ("< " + " 'chars'> + < ArrayLengthOffset" + " > >") + # caching: + assert fielddescr is get_field_arraylen_descr(c0, rstr.STR) diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -632,6 +632,34 @@ operations, []) equaloplists(operations, expected.operations) + def test_rewrite_assembler_new_array_fixed_to_malloc(self): + self.gc_ll_descr.translate_support_code = False + try: + A = lltype.GcArray(lltype.Signed) + adescr = get_array_descr(self.gc_ll_descr, A) + adescr.tid = 1234 + lengthdescr = get_field_arraylen_descr(self.gc_ll_descr, A) + finally: + self.gc_ll_descr.translate_support_code = True + tiddescr = self.gc_ll_descr.fielddescr_tid + ops = parse(""" + [] + p0 = new_array(10, descr=adescr) + jump() + """, namespace=locals()) + expected = parse(""" + [] + p0 = malloc_gc(%d) + setfield_gc(p0, 1234, descr=tiddescr) + setfield_gc(p0, 10, descr=lengthdescr) + jump() + """ % (adescr.get_base_size(False) + 10 * adescr.get_item_size(False),), + namespace=locals()) + operations = get_deep_immutable_oplist(ops.operations) + operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, + operations, []) + equaloplists(operations, expected.operations) + def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) From noreply at buildbot.pypy.org Fri Nov 18 10:17:32 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:32 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Fix some tests. Message-ID: <20111118091732.7FA2582A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49510:8b49822025c6 Date: 2011-11-17 17:40 +0100 http://bitbucket.org/pypy/pypy/changeset/8b49822025c6/ Log: Fix some tests. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -839,22 +839,24 @@ # - Add COND_CALLs to the write barrier before SETFIELD_GC and # SETARRAYITEM_GC operations. + _v_last_malloc = None + _previous_size = -1 + def __init__(self, gc_ll_descr, cpu): self.gc_ll_descr = gc_ll_descr self.cpu = cpu self.tsc = self.gc_ll_descr.translate_support_code + self.newops = [] + self.known_lengths = {} + self.op_malloc_gc = None + self.current_mallocs = {} # set of variables def rewrite(self, operations): - self.newops = [] - self.known_lengths = {} # we can only remember one malloc since the next malloc can possibly # collect; but we can try to collapse several known-size mallocs into # one, both for performance and to reduce the number of write # barriers. We do this on each "basic block" of operations, which in # this case means between CALLs or unknown-size mallocs. - self.op_malloc_gc = None - self.v_last_malloc = None - self.previous_size = -1 # for op in operations: if op.getopnum() == rop.DEBUG_MERGE_POINT: @@ -885,11 +887,12 @@ xxxx elif op.can_malloc(): self.op_malloc_gc = None + self.current_mallocs.clear() # ---------- write barrier for SETFIELD_GC ---------- if op.getopnum() == rop.SETFIELD_GC: val = op.getarg(0) # no need for a write barrier in the case of previous malloc - if val is not last_malloc: + if val not in self.current_mallocs: v = op.getarg(1) if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and bool(v.value)): # store a non-NULL @@ -899,7 +902,7 @@ if op.getopnum() == rop.SETARRAYITEM_GC: val = op.getarg(0) # no need for a write barrier in the case of previous malloc - if val is not last_malloc: + if val not in self.current_mallocs: v = op.getarg(2) if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and bool(v.value)): # store a non-NULL @@ -923,11 +926,12 @@ total_size += size self.op_malloc_gc.setarg(0, ConstInt(total_size)) op = ResOperation(rop.INT_ADD, - [self.v_last_malloc, - ConstInt(self.previous_size)], + [self._v_last_malloc, + ConstInt(self._previous_size)], v_result) - self.previous_size = size - self.v_last_malloc = v_result + self._previous_size = size + self._v_last_malloc = v_result + self.current_mallocs[v_result] = None self.newops.append(op) def gen_initialize_tid(self, v_newgcobj, tid): @@ -945,9 +949,10 @@ self.newops.append(op) def gen_write_barrier(self, v_base, v_value): + write_barrier_descr = self.gc_ll_descr.write_barrier_descr args = [v_base, v_value] self.newops.append(ResOperation(rop.COND_CALL_GC_WB, args, None, - descr=self.write_barrier_descr)) + descr=write_barrier_descr)) def gen_write_barrier_array(self, v_base, v_index, v_value): write_barrier_descr = self.gc_ll_descr.write_barrier_descr diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -404,10 +404,11 @@ gc_ll_descr = self.gc_ll_descr llop1 = self.llop1 # - newops = [] + rewriter = GcRewriterAssembler(gc_ll_descr, None) + newops = rewriter.newops v_base = BoxPtr() v_value = BoxPtr() - gc_ll_descr._gen_write_barrier(newops, v_base, v_value) + rewriter.gen_write_barrier(v_base, v_value) assert llop1.record == [] assert len(newops) == 1 assert newops[0].getopnum() == rop.COND_CALL_GC_WB @@ -482,7 +483,7 @@ def test_rewrite_assembler_3(self): # check write barriers before SETARRAYITEM_GC - for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()): + for new_length in (-1, 5, 5000): v_base = BoxPtr() v_index = BoxInt() v_value = BoxPtr() @@ -491,23 +492,11 @@ ResOperation(rop.SETARRAYITEM_GC, [v_base, v_index, v_value], None, descr=array_descr), ] - if v_new_length is not None: - operations.insert(0, ResOperation(rop.NEW_ARRAY, - [v_new_length], v_base, - descr=array_descr)) - # we need to insert another, unrelated NEW_ARRAY here - # to prevent the initialization_store optimization - operations.insert(1, ResOperation(rop.NEW_ARRAY, - [ConstInt(12)], BoxPtr(), - descr=array_descr)) - gc_ll_descr = self.gc_ll_descr + rewriter = GcRewriterAssembler(self.gc_ll_descr, self.fake_cpu) + if new_length >= 0: + rewriter.known_lengths[v_base] = new_length operations = get_deep_immutable_oplist(operations) - operations = gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - if v_new_length is not None: - assert operations[0].getopnum() == rop.NEW_ARRAY - assert operations[1].getopnum() == rop.NEW_ARRAY - del operations[:2] + operations = rewriter.rewrite(operations) assert len(operations) == 2 # assert operations[0].getopnum() == rop.COND_CALL_GC_WB @@ -525,7 +514,7 @@ # check write barriers before SETARRAYITEM_GC, # if we have actually a write_barrier_from_array. self.llop1._have_wb_from_array = True - for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()): + for new_length in (-1, 5, 5000): v_base = BoxPtr() v_index = BoxInt() v_value = BoxPtr() @@ -534,26 +523,14 @@ ResOperation(rop.SETARRAYITEM_GC, [v_base, v_index, v_value], None, descr=array_descr), ] - if v_new_length is not None: - operations.insert(0, ResOperation(rop.NEW_ARRAY, - [v_new_length], v_base, - descr=array_descr)) - # we need to insert another, unrelated NEW_ARRAY here - # to prevent the initialization_store optimization - operations.insert(1, ResOperation(rop.NEW_ARRAY, - [ConstInt(12)], BoxPtr(), - descr=array_descr)) - gc_ll_descr = self.gc_ll_descr + rewriter = GcRewriterAssembler(self.gc_ll_descr, self.fake_cpu) + if new_length >= 0: + rewriter.known_lengths[v_base] = new_length operations = get_deep_immutable_oplist(operations) - operations = gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - if v_new_length is not None: - assert operations[0].getopnum() == rop.NEW_ARRAY - assert operations[1].getopnum() == rop.NEW_ARRAY - del operations[:2] + operations = rewriter.rewrite(operations) assert len(operations) == 2 # - if isinstance(v_new_length, ConstInt) and v_new_length.value < 130: + if 0 <= new_length < 130: assert operations[0].getopnum() == rop.COND_CALL_GC_WB assert operations[0].getarg(0) == v_base assert operations[0].getarg(1) == v_value From noreply at buildbot.pypy.org Fri Nov 18 10:17:33 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:33 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Clean up the tests. Message-ID: <20111118091733.AFA1482A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49511:b5fdf3f1574e Date: 2011-11-17 17:59 +0100 http://bitbucket.org/pypy/pypy/changeset/b5fdf3f1574e/ Log: Clean up the tests. diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -547,95 +547,96 @@ assert operations[1].getarg(2) == v_value assert operations[1].getdescr() == array_descr - def test_rewrite_assembler_new_to_malloc(self): + def check_rewrite(self, frm_operations, to_operations): self.gc_ll_descr.translate_support_code = False try: S = lltype.GcStruct('S', ('x', lltype.Signed)) sdescr = get_size_descr(self.gc_ll_descr, S) sdescr.tid = 1234 - finally: - self.gc_ll_descr.translate_support_code = True - tiddescr = self.gc_ll_descr.fielddescr_tid - ops = parse(""" - [p1] - p0 = new(descr=sdescr) - jump() - """, namespace=locals()) - expected = parse(""" - [p1] - p0 = malloc_gc(%d) - setfield_gc(p0, 1234, descr=tiddescr) - jump() - """ % (sdescr.size,), namespace=locals()) - operations = get_deep_immutable_oplist(ops.operations) - operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - equaloplists(operations, expected.operations) - - def test_rewrite_assembler_new3_to_malloc(self): - self.gc_ll_descr.translate_support_code = False - try: - S = lltype.GcStruct('S', ('x', lltype.Signed)) - sdescr = get_size_descr(self.gc_ll_descr, S) - sdescr.tid = 1234 + # T = lltype.GcStruct('T', ('y', lltype.Signed), ('z', lltype.Signed)) tdescr = get_size_descr(self.gc_ll_descr, T) tdescr.tid = 5678 + # + A = lltype.GcArray(lltype.Signed) + adescr = get_array_descr(self.gc_ll_descr, A) + adescr.tid = 4321 + alendescr = get_field_arraylen_descr(self.gc_ll_descr, A) + # + tiddescr = self.gc_ll_descr.fielddescr_tid + # + ops = parse(frm_operations, namespace=locals()) + expected = parse(to_operations % Evaluator(locals()), + namespace=locals()) + operations = get_deep_immutable_oplist(ops.operations) + operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, + operations, []) finally: self.gc_ll_descr.translate_support_code = True - tiddescr = self.gc_ll_descr.fielddescr_tid - ops = parse(""" - [] - p0 = new(descr=sdescr) - p1 = new(descr=tdescr) - p2 = new(descr=sdescr) - jump() - """, namespace=locals()) - expected = parse(""" - [] - p0 = malloc_gc(%d) - setfield_gc(p0, 1234, descr=tiddescr) - p1 = int_add(p0, %d) - setfield_gc(p1, 5678, descr=tiddescr) - p2 = int_add(p1, %d) - setfield_gc(p2, 1234, descr=tiddescr) - jump() - """ % (sdescr.size + tdescr.size + sdescr.size, - sdescr.size, - tdescr.size), namespace=locals()) - operations = get_deep_immutable_oplist(ops.operations) - operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) equaloplists(operations, expected.operations) + def test_rewrite_assembler_new_to_malloc(self): + self.check_rewrite(""" + [p1] + p0 = new(descr=sdescr) + jump() + """, """ + [p1] + p0 = malloc_gc(%(sdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + jump() + """) + + def test_rewrite_assembler_new3_to_malloc(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + p1 = new(descr=tdescr) + p2 = new(descr=sdescr) + jump() + """, """ + [] + p0 = malloc_gc(%(sdescr.size + tdescr.size + sdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = int_add(p0, %(sdescr.size)d) + setfield_gc(p1, 5678, descr=tiddescr) + p2 = int_add(p1, %(tdescr.size)d) + setfield_gc(p2, 1234, descr=tiddescr) + jump() + """) + def test_rewrite_assembler_new_array_fixed_to_malloc(self): - self.gc_ll_descr.translate_support_code = False - try: - A = lltype.GcArray(lltype.Signed) - adescr = get_array_descr(self.gc_ll_descr, A) - adescr.tid = 1234 - lengthdescr = get_field_arraylen_descr(self.gc_ll_descr, A) - finally: - self.gc_ll_descr.translate_support_code = True - tiddescr = self.gc_ll_descr.fielddescr_tid - ops = parse(""" - [] - p0 = new_array(10, descr=adescr) - jump() - """, namespace=locals()) - expected = parse(""" - [] - p0 = malloc_gc(%d) - setfield_gc(p0, 1234, descr=tiddescr) - setfield_gc(p0, 10, descr=lengthdescr) - jump() - """ % (adescr.get_base_size(False) + 10 * adescr.get_item_size(False),), - namespace=locals()) - operations = get_deep_immutable_oplist(ops.operations) - operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - equaloplists(operations, expected.operations) + self.check_rewrite(""" + [] + p0 = new_array(10, descr=adescr) + jump() + """, """ + [] + p0 = malloc_gc(%(adescr.get_base_size(False) + \ + 10 * adescr.get_item_size(False))d) + setfield_gc(p0, 4321, descr=tiddescr) + setfield_gc(p0, 10, descr=alendescr) + jump() + """) + + def test_rewrite_assembler_new_and_new_array_fixed_to_malloc(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + p1 = new_array(10, descr=adescr) + jump() + """, """ + [] + p0 = malloc_gc(%(sdescr.size + \ + adescr.get_base_size(False) + \ + 10 * adescr.get_item_size(False))d) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = int_add(p0, %(sdescr.size)d) + setfield_gc(p1, 4321, descr=tiddescr) + setfield_gc(p1, 10, descr=alendescr) + jump() + """) def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), @@ -706,5 +707,11 @@ operations, []) equaloplists(operations, expected.operations) +class Evaluator(object): + def __init__(self, scope): + self.scope = scope + def __getitem__(self, key): + return eval(key, self.scope) + class TestFrameworkMiniMark(TestFramework): gc = 'minimark' From noreply at buildbot.pypy.org Fri Nov 18 10:17:34 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:34 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Missing rounding up: tests and fix. Message-ID: <20111118091734.E1B4F82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49512:c4bcb5c2b12f Date: 2011-11-17 18:11 +0100 http://bitbucket.org/pypy/pypy/changeset/c4bcb5c2b12f/ Log: Missing rounding up: tests and fix. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -4,7 +4,7 @@ from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr -from pypy.rpython.lltypesystem import llgroup +from pypy.rpython.lltypesystem import llgroup, llarena from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -879,6 +879,7 @@ basesize = descr.get_base_size(self.tsc) itemsize = descr.get_item_size(self.tsc) fullsize = basesize + newlength * itemsize + fullsize = self.round_up_for_allocation(fullsize) self.gen_malloc_const(fullsize, op.result) self.gen_initialize_tid(op.result, descr.tid) self.gen_initialize_len(op.result, v_newlength, descr) @@ -972,6 +973,16 @@ # fall-back case: produce a write_barrier self.gen_write_barrier(v_base, v_value) + def round_up_for_allocation(self, size): + if self.tsc: + return llarena.round_up_for_allocation( + size, self.gc_ll_descr.minimal_size_in_nursery) + else: + # non-translated: do it manually + # assume that "self.gc_ll_descr.minimal_size_in_nursery" is 2 WORDs + size = max(size, 2 * WORD) + return (size + WORD-1) & ~(WORD-1) # round up + # ____________________________________________________________ def get_ll_description(gcdescr, translator=None, rtyper=None): diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -564,6 +564,11 @@ adescr.tid = 4321 alendescr = get_field_arraylen_descr(self.gc_ll_descr, A) # + B = lltype.GcArray(lltype.Char) + bdescr = get_array_descr(self.gc_ll_descr, B) + bdescr.tid = 8765 + blendescr = get_field_arraylen_descr(self.gc_ll_descr, B) + # tiddescr = self.gc_ll_descr.fielddescr_tid # ops = parse(frm_operations, namespace=locals()) @@ -638,6 +643,44 @@ jump() """) + def test_rewrite_assembler_round_up(self): + self.check_rewrite(""" + [] + p0 = new_array(6, descr=bdescr) + jump() + """, """ + [] + p0 = malloc_gc(%(adescr.get_base_size(False) + 8)d) + setfield_gc(p0, 8765, descr=tiddescr) + setfield_gc(p0, 6, descr=blendescr) + jump() + """) + + def test_rewrite_assembler_round_up_always(self): + self.check_rewrite(""" + [] + p0 = new_array(5, descr=bdescr) + p1 = new_array(5, descr=bdescr) + p2 = new_array(5, descr=bdescr) + p3 = new_array(5, descr=bdescr) + jump() + """, """ + [] + p0 = malloc_gc(%(4 * (adescr.get_base_size(False) + 8))d) + setfield_gc(p0, 8765, descr=tiddescr) + setfield_gc(p0, 5, descr=blendescr) + p1 = int_add(p0, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p1, 8765, descr=tiddescr) + setfield_gc(p1, 5, descr=blendescr) + p2 = int_add(p1, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p2, 8765, descr=tiddescr) + setfield_gc(p2, 5, descr=blendescr) + p3 = int_add(p2, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p3, 8765, descr=tiddescr) + setfield_gc(p3, 5, descr=blendescr) + jump() + """) + def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) From noreply at buildbot.pypy.org Fri Nov 18 10:17:36 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:36 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Test and fix for allocating tiny structures --- they are smaller Message-ID: <20111118091736.1FB2A82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49513:454b0eb8af85 Date: 2011-11-17 18:15 +0100 http://bitbucket.org/pypy/pypy/changeset/454b0eb8af85/ Log: Test and fix for allocating tiny structures --- they are smaller than the minimal_size_in_nursery. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -879,7 +879,6 @@ basesize = descr.get_base_size(self.tsc) itemsize = descr.get_item_size(self.tsc) fullsize = basesize + newlength * itemsize - fullsize = self.round_up_for_allocation(fullsize) self.gen_malloc_const(fullsize, op.result) self.gen_initialize_tid(op.result, descr.tid) self.gen_initialize_len(op.result, v_newlength, descr) @@ -915,6 +914,7 @@ return self.newops def gen_malloc_const(self, size, v_result): + size = self.round_up_for_allocation(size) if self.op_malloc_gc is None: # it is the first we see: emit MALLOC_GC op = ResOperation(rop.MALLOC_GC, diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -550,12 +550,14 @@ def check_rewrite(self, frm_operations, to_operations): self.gc_ll_descr.translate_support_code = False try: - S = lltype.GcStruct('S', ('x', lltype.Signed)) + S = lltype.GcStruct('S', ('x', lltype.Signed), + ('y', lltype.Signed)) sdescr = get_size_descr(self.gc_ll_descr, S) sdescr.tid = 1234 # T = lltype.GcStruct('T', ('y', lltype.Signed), - ('z', lltype.Signed)) + ('z', lltype.Signed), + ('t', lltype.Signed)) tdescr = get_size_descr(self.gc_ll_descr, T) tdescr.tid = 5678 # @@ -569,7 +571,12 @@ bdescr.tid = 8765 blendescr = get_field_arraylen_descr(self.gc_ll_descr, B) # + E = lltype.GcStruct('Empty') + edescr = get_size_descr(self.gc_ll_descr, E) + edescr.tid = 9000 + # tiddescr = self.gc_ll_descr.fielddescr_tid + WORD = globals()['WORD'] # ops = parse(frm_operations, namespace=locals()) expected = parse(to_operations % Evaluator(locals()), @@ -681,6 +688,21 @@ jump() """) + def test_rewrite_assembler_minimal_size(self): + self.check_rewrite(""" + [] + p0 = new(descr=edescr) + p1 = new(descr=edescr) + jump() + """, """ + [] + p0 = malloc_gc(%(4*WORD)d) + setfield_gc(p0, 9000, descr=tiddescr) + p1 = int_add(p0, %(2*WORD)d) + setfield_gc(p1, 9000, descr=tiddescr) + jump() + """) + def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) From noreply at buildbot.pypy.org Fri Nov 18 10:17:37 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:17:37 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Kill unused attribute. Message-ID: <20111118091737.4F83182A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49514:cf80cbdf1c65 Date: 2011-11-18 10:09 +0100 http://bitbucket.org/pypy/pypy/changeset/cf80cbdf1c65/ Log: Kill unused attribute. diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -43,8 +43,6 @@ class SizeDescr(AbstractDescr): size = 0 # help translation - is_immutable = False - tid = llop.combine_ushort(lltype.Signed, 0, 0) def __init__(self, size, count_fields_if_immut=-1): From noreply at buildbot.pypy.org Fri Nov 18 10:26:44 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 10:26:44 +0100 (CET) Subject: [pypy-commit] pypy default: rename numpy -> numpypy Message-ID: <20111118092644.4E19382A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49515:f732c0996244 Date: 2011-11-18 11:26 +0200 http://bitbucket.org/pypy/pypy/changeset/f732c0996244/ Log: rename numpy -> numpypy diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -2,7 +2,7 @@ class Module(MixedModule): - applevel_name = 'numpy' + applevel_name = 'numpypy' interpleveldefs = { 'array': 'interp_numarray.SingleDimArray', diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpy +import numpypy inf = float("inf") @@ -13,5 +13,5 @@ def mean(a): if not hasattr(a, "mean"): - a = numpy.array(a) + a = numpypy.array(a) return a.mean() diff --git a/pypy/module/micronumpy/bench/add.py b/pypy/module/micronumpy/bench/add.py --- a/pypy/module/micronumpy/bench/add.py +++ b/pypy/module/micronumpy/bench/add.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): a = numpy.zeros(10000000) diff --git a/pypy/module/micronumpy/bench/iterate.py b/pypy/module/micronumpy/bench/iterate.py --- a/pypy/module/micronumpy/bench/iterate.py +++ b/pypy/module/micronumpy/bench/iterate.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): sum = 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpy import dtype + from numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpy import dtype + from numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,57 +36,57 @@ assert str(d) == "bool" def test_bool_array(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 2.5], dtype='?') - assert a[0] is numpy.False_ + a = array([0, 1, 2, 2.5], dtype='?') + assert a[0] is False_ for i in xrange(1, 4): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_copy_array_with_dtype(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 3], dtype=long) + a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = numpy.array([0, 1, 2, 3], dtype=bool) - assert a[0] is numpy.False_ + a = array([0, 1, 2, 3], dtype=bool) + assert a[0] is False_ b = a.copy() - assert b[0] is numpy.False_ + assert b[0] is False_ def test_zeros_bool(self): - import numpy + from numpypy import zeros, False_ - a = numpy.zeros(10, dtype=bool) + a = zeros(10, dtype=bool) for i in range(10): - assert a[i] is numpy.False_ + assert a[i] is False_ def test_ones_bool(self): - import numpy + from numpypy import ones, True_ - a = numpy.ones(10, dtype=bool) + a = ones(10, dtype=bool) for i in range(10): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_zeros_long(self): - from numpy import zeros + from numpypy import zeros a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 0 def test_ones_long(self): - from numpy import ones + from numpypy import ones a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 def test_overflow(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,12 +156,12 @@ assert b[i] == i * 2 def test_shape(self): - from numpy import dtype + from numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpy import dtype + from numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,19 +3,19 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpy import array, mean + from numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpy import array, average + from numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_constants(self): import math - from numpy import inf, e + from numpypy import inf, e assert type(inf) is float assert inf == float("inf") assert e == math.e - assert type(e) is float \ No newline at end of file + assert type(e) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -4,12 +4,12 @@ class AppTestNumArray(BaseNumpyAppTest): def test_type(self): - from numpy import array + from numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_init(self): - from numpy import zeros + from numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -18,7 +18,7 @@ assert a[13] == 5.3 def test_size(self): - from numpy import array + from numpypy import array # XXX fixed on multidim branch #assert array(3).size == 1 a = array([1, 2, 3]) @@ -30,13 +30,13 @@ Test that empty() works. """ - from numpy import empty + from numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpy import ones + from numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -45,19 +45,19 @@ assert a[2] == 4 def test_copy(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.copy() for i in xrange(5): assert b[i] == a[i] def test_iterator_init(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a[3] == 3 def test_repr(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -72,7 +72,7 @@ assert repr(a) == "array([True, False, True, False], dtype=bool)" def test_repr_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -81,7 +81,7 @@ assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" def test_str(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -100,7 +100,7 @@ assert str(a) == "[0 1 2 3 4]" def test_str_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -109,7 +109,7 @@ assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" def test_getitem(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -118,7 +118,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -128,7 +128,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpy import array + from numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -136,7 +136,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -147,7 +147,7 @@ assert a[i] == i def test_setslice_array(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -158,7 +158,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -177,7 +177,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -185,20 +185,20 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_len(self): - from numpy import array + from numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -207,7 +207,7 @@ assert c.shape == (3,) def test_add(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -220,7 +220,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(4, -1, -1)) c = a + b @@ -228,20 +228,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpy import array + from numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpy import array + from numpypy import array a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -250,14 +250,14 @@ assert c[i] == 4 def test_subtract(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -265,29 +265,29 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_mul(self): - import numpy + import numpypy - a = numpy.array(range(5)) + a = numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpy.array(range(5), dtype=bool) + a = numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpy.dtype(bool) - assert b[0] is numpy.False_ + assert b.dtype is numpypy.dtype(bool) + assert b[0] is numpypy.False_ for i in range(1, 5): - assert b[i] is numpy.True_ + assert b[i] is numpypy.True_ def test_mul_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -295,7 +295,7 @@ def test_div(self): from math import isnan - from numpy import array, dtype, inf + from numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -327,7 +327,7 @@ assert c[2] == -inf def test_div_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -335,14 +335,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -350,7 +350,7 @@ assert b[i] == i**i def test_pow_other(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -358,14 +358,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) b = a % a for i in range(5): @@ -378,7 +378,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -386,14 +386,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = +a for i in range(5): @@ -404,7 +404,7 @@ assert a[i] == i def test_neg(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = -a for i in range(5): @@ -415,7 +415,7 @@ assert a[i] == -i def test_abs(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = abs(a) for i in range(5): @@ -426,7 +426,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -440,7 +440,7 @@ assert c[1] == 4 def test_getslice(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -454,7 +454,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpy import array + from numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -462,7 +462,7 @@ assert s[i] == a[2*i+1] def test_slice_update(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -473,7 +473,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:2] b = array([10,11]) @@ -487,13 +487,13 @@ assert d[1] == 12 def test_mean(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -502,32 +502,32 @@ assert a.sum() == 5 def test_prod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a+a).max() == 11.4 def test_min(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmax() == 2 b = array([]) @@ -537,14 +537,14 @@ assert a.argmax() == 9 def test_argmin(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -553,7 +553,7 @@ assert b.all() == True def test_any(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -562,7 +562,7 @@ assert c.any() == False def test_dot(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.dot(a) == 30.0 @@ -570,14 +570,14 @@ assert a.dot(range(5)) == 30 def test_dot_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -590,7 +590,7 @@ def test_comparison(self): import operator - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -616,7 +616,7 @@ cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): - from numpy import fromstring + from numpypy import fromstring a = fromstring(self.data) for i in range(4): assert a[i] == i + 1 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpy import add, ufunc + from numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpy import add, multiply, sin + from numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpy import add, sin + from numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpy import negative, sign, minimum + from numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpy import array, negative, minimum + from numpypy import array, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpy import array, absolute + from numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpy import array, add + from numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpy import array, divide + from numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -112,7 +112,7 @@ assert c[i] == a[i] / b[i] def test_fabs(self): - from numpy import array, fabs + from numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -121,7 +121,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpy import array, minimum + from numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -130,7 +130,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpy import array, maximum + from numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -143,7 +143,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpy import array, multiply + from numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -152,7 +152,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpy import array, sign, dtype + from numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -171,7 +171,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpy import array, reciprocal + from numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -180,7 +180,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpy import array, subtract + from numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -189,7 +189,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpy import array, floor + from numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -198,7 +198,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpy import array, copysign + from numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -214,7 +214,7 @@ def test_exp(self): import math - from numpy import array, exp + from numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -228,7 +228,7 @@ def test_sin(self): import math - from numpy import array, sin + from numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -241,7 +241,7 @@ def test_cos(self): import math - from numpy import array, cos + from numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -250,7 +250,7 @@ def test_tan(self): import math - from numpy import array, tan + from numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -260,7 +260,7 @@ def test_arcsin(self): import math - from numpy import array, arcsin + from numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -274,7 +274,7 @@ def test_arccos(self): import math - from numpy import array, arccos + from numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -289,7 +289,7 @@ def test_arctan(self): import math - from numpy import array, arctan + from numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -302,7 +302,7 @@ def test_arcsinh(self): import math - from numpy import arcsinh, inf + from numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -310,7 +310,7 @@ def test_arctanh(self): import math - from numpy import arctanh + from numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -320,13 +320,13 @@ assert arctanh(v) == math.copysign(float("inf"), v) def test_reduce_errors(self): - from numpy import sin, add + from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpy import add, maximum + from numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -335,7 +335,7 @@ def test_comparisons(self): import operator - from numpy import equal, not_equal, less, less_equal, greater, greater_equal + from numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), From noreply at buildbot.pypy.org Fri Nov 18 10:43:47 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 10:43:47 +0100 (CET) Subject: [pypy-commit] pypy jitdriver-setparam-all: Re-add an assert that was lost. Message-ID: <20111118094347.CE75182A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: jitdriver-setparam-all Changeset: r49516:73172e1ee46c Date: 2011-11-18 10:43 +0100 http://bitbucket.org/pypy/pypy/changeset/73172e1ee46c/ Log: Re-add an assert that was lost. diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -873,6 +873,8 @@ for jd in self.jitdrivers_sd: if jd.jitdriver is op.args[1].value: break + else: + assert 0, "jitdriver of set_param() not found" else: jd = None funcname = op.args[2].value From noreply at buildbot.pypy.org Fri Nov 18 10:51:20 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 10:51:20 +0100 (CET) Subject: [pypy-commit] pypy jitdriver-setparam-all: close about to be merged branch Message-ID: <20111118095120.33A9982A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: jitdriver-setparam-all Changeset: r49517:310e106460c0 Date: 2011-11-18 11:50 +0200 http://bitbucket.org/pypy/pypy/changeset/310e106460c0/ Log: close about to be merged branch From noreply at buildbot.pypy.org Fri Nov 18 10:51:21 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 10:51:21 +0100 (CET) Subject: [pypy-commit] pypy default: Merge jitdriver-setparam-all branch that allows setting parameters to all Message-ID: <20111118095121.7A7C982A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49518:42d55fa4ac69 Date: 2011-11-18 11:50 +0200 http://bitbucket.org/pypy/pypy/changeset/42d55fa4ac69/ Log: Merge jitdriver-setparam-all branch that allows setting parameters to all jitdrivers. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,16 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break + else: + assert 0, "jitdriver of set_param() not found" else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + From noreply at buildbot.pypy.org Fri Nov 18 10:52:43 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 10:52:43 +0100 (CET) Subject: [pypy-commit] pypy release-1.7.x: merge in default Message-ID: <20111118095243.D3E0B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.7.x Changeset: r49519:7773f8fc4223 Date: 2011-11-18 11:52 +0200 http://bitbucket.org/pypy/pypy/changeset/7773f8fc4223/ Log: merge in default diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -412,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -2,7 +2,7 @@ from pypy.interpreter import typedef from pypy.tool.udir import udir from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import ObjSpace +from pypy.interpreter.gateway import ObjSpace, interp2app # this test isn't so much to test that the objspace interface *works* # -- it's more to test that it's *there* @@ -260,6 +260,50 @@ gc.collect(); gc.collect() assert space.unwrap(w_seen) == [6, 2] + def test_multiple_inheritance(self): + class W_A(Wrappable): + a = 1 + b = 2 + class W_C(W_A): + b = 3 + W_A.typedef = typedef.TypeDef("A", + a = typedef.interp_attrproperty("a", cls=W_A), + b = typedef.interp_attrproperty("b", cls=W_A), + ) + class W_B(Wrappable): + pass + def standalone_method(space, w_obj): + if isinstance(w_obj, W_A): + return space.w_True + else: + return space.w_False + W_B.typedef = typedef.TypeDef("B", + c = interp2app(standalone_method) + ) + W_C.typedef = typedef.TypeDef("C", (W_A.typedef, W_B.typedef,)) + + w_o1 = self.space.wrap(W_C()) + w_o2 = self.space.wrap(W_B()) + w_c = self.space.gettypefor(W_C) + w_b = self.space.gettypefor(W_B) + w_a = self.space.gettypefor(W_A) + assert w_c.mro_w == [ + w_c, + w_a, + w_b, + self.space.w_object, + ] + for w_tp in w_c.mro_w: + assert self.space.isinstance_w(w_o1, w_tp) + def assert_attr(w_obj, name, value): + assert self.space.unwrap(self.space.getattr(w_obj, self.space.wrap(name))) == value + def assert_method(w_obj, name, value): + assert self.space.unwrap(self.space.call_method(w_obj, name)) == value + assert_attr(w_o1, "a", 1) + assert_attr(w_o1, "b", 3) + assert_method(w_o1, "c", True) + assert_method(w_o2, "c", False) + class AppTestTypeDef: diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -15,13 +15,19 @@ def __init__(self, __name, __base=None, **rawdict): "NOT_RPYTHON: initialization-time only" self.name = __name - self.base = __base + if __base is None: + bases = [] + elif isinstance(__base, tuple): + bases = list(__base) + else: + bases = [__base] + self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict self.doc = rawdict.pop('__doc__', None) - if __base is not None: - self.hasdict |= __base.hasdict - self.weakrefable |= __base.weakrefable + for base in bases: + self.hasdict |= base.hasdict + self.weakrefable |= base.weakrefable self.rawdict = {} self.acceptable_as_base_class = '__new__' in rawdict self.applevel_subclasses_base = None diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -325,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +826,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -838,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1403,6 +1424,14 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1508,14 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) + +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,22 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) + + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): @@ -305,12 +323,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -351,6 +373,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -445,7 +471,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,38 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm @@ -1619,6 +1621,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1158,6 +1160,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1430,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt @@ -174,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4999,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -479,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,16 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break + else: + assert 0, "jitdriver of set_param() not found" else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -9,7 +9,8 @@ unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, - cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, readbufferproc) + cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, + readbufferproc) from pypy.module.cpyext.pyobject import from_ref from pypy.module.cpyext.pyerrors import PyErr_Occurred from pypy.module.cpyext.state import State @@ -175,6 +176,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_objobjargproc(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 2) + w_key, w_value = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, w_value) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.wrap(res) + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -397,3 +397,31 @@ def __str__(self): return "text" assert module.tp_str(C()) == "text" + + def test_mp_ass_subscript(self): + module = self.import_extension('foo', [ + ("new_obj", "METH_NOARGS", + ''' + PyObject *obj; + Foo_Type.tp_as_mapping = &tp_as_mapping; + tp_as_mapping.mp_ass_subscript = mp_ass_subscript; + if (PyType_Ready(&Foo_Type) < 0) return NULL; + obj = PyObject_New(PyObject, &Foo_Type); + return obj; + ''' + )], + ''' + static int + mp_ass_subscript(PyObject *self, PyObject *key, PyObject *value) + { + PyErr_SetNone(PyExc_ZeroDivisionError); + return -1; + } + PyMappingMethods tp_as_mapping; + static PyTypeObject Foo_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "foo.foo", + }; + ''') + obj = module.new_obj() + raises(ZeroDivisionError, obj.__setitem__, 5, None) diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -513,7 +513,7 @@ space.warn(msg, space.w_ImportWarning) modtype, suffix, filemode = find_modtype(space, filepart) try: - if modtype in (PY_SOURCE, PY_COMPILED): + if modtype in (PY_SOURCE, PY_COMPILED, C_EXTENSION): assert suffix is not None filename = filepart + suffix stream = streamio.open_file_as_stream(filename, filemode) @@ -522,9 +522,6 @@ except: stream.close() raise - if modtype == C_EXTENSION: - filename = filepart + suffix - return FindInfo(modtype, filename, None, suffix, filemode) except StreamErrors: pass # XXX! must not eat all exceptions, e.g. # Out of file descriptors. diff --git a/pypy/module/math/test/test_translated.py b/pypy/module/math/test/test_translated.py new file mode 100644 --- /dev/null +++ b/pypy/module/math/test/test_translated.py @@ -0,0 +1,10 @@ +import py +from pypy.translator.c.test.test_genc import compile +from pypy.module.math.interp_math import _gamma + + +def test_gamma_overflow(): + f = compile(_gamma, [float]) + assert f(10.0) == 362880.0 + py.test.raises(OverflowError, f, 1720.0) + py.test.raises(OverflowError, f, 172.0) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -2,7 +2,7 @@ class Module(MixedModule): - applevel_name = 'numpy' + applevel_name = 'numpypy' interpleveldefs = { 'array': 'interp_numarray.SingleDimArray', diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpy +import numpypy inf = float("inf") @@ -13,5 +13,5 @@ def mean(a): if not hasattr(a, "mean"): - a = numpy.array(a) + a = numpypy.array(a) return a.mean() diff --git a/pypy/module/micronumpy/bench/add.py b/pypy/module/micronumpy/bench/add.py --- a/pypy/module/micronumpy/bench/add.py +++ b/pypy/module/micronumpy/bench/add.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): a = numpy.zeros(10000000) diff --git a/pypy/module/micronumpy/bench/iterate.py b/pypy/module/micronumpy/bench/iterate.py --- a/pypy/module/micronumpy/bench/iterate.py +++ b/pypy/module/micronumpy/bench/iterate.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): sum = 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpy import dtype + from numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpy import dtype + from numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,57 +36,57 @@ assert str(d) == "bool" def test_bool_array(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 2.5], dtype='?') - assert a[0] is numpy.False_ + a = array([0, 1, 2, 2.5], dtype='?') + assert a[0] is False_ for i in xrange(1, 4): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_copy_array_with_dtype(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 3], dtype=long) + a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = numpy.array([0, 1, 2, 3], dtype=bool) - assert a[0] is numpy.False_ + a = array([0, 1, 2, 3], dtype=bool) + assert a[0] is False_ b = a.copy() - assert b[0] is numpy.False_ + assert b[0] is False_ def test_zeros_bool(self): - import numpy + from numpypy import zeros, False_ - a = numpy.zeros(10, dtype=bool) + a = zeros(10, dtype=bool) for i in range(10): - assert a[i] is numpy.False_ + assert a[i] is False_ def test_ones_bool(self): - import numpy + from numpypy import ones, True_ - a = numpy.ones(10, dtype=bool) + a = ones(10, dtype=bool) for i in range(10): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_zeros_long(self): - from numpy import zeros + from numpypy import zeros a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 0 def test_ones_long(self): - from numpy import ones + from numpypy import ones a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 def test_overflow(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,12 +156,12 @@ assert b[i] == i * 2 def test_shape(self): - from numpy import dtype + from numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpy import dtype + from numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,19 +3,19 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpy import array, mean + from numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpy import array, average + from numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_constants(self): import math - from numpy import inf, e + from numpypy import inf, e assert type(inf) is float assert inf == float("inf") assert e == math.e - assert type(e) is float \ No newline at end of file + assert type(e) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -4,12 +4,12 @@ class AppTestNumArray(BaseNumpyAppTest): def test_type(self): - from numpy import array + from numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_init(self): - from numpy import zeros + from numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -18,7 +18,7 @@ assert a[13] == 5.3 def test_size(self): - from numpy import array + from numpypy import array # XXX fixed on multidim branch #assert array(3).size == 1 a = array([1, 2, 3]) @@ -30,13 +30,13 @@ Test that empty() works. """ - from numpy import empty + from numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpy import ones + from numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -45,19 +45,19 @@ assert a[2] == 4 def test_copy(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.copy() for i in xrange(5): assert b[i] == a[i] def test_iterator_init(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a[3] == 3 def test_repr(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -72,7 +72,7 @@ assert repr(a) == "array([True, False, True, False], dtype=bool)" def test_repr_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -81,7 +81,7 @@ assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" def test_str(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -100,7 +100,7 @@ assert str(a) == "[0 1 2 3 4]" def test_str_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -109,7 +109,7 @@ assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" def test_getitem(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -118,7 +118,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -128,7 +128,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpy import array + from numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -136,7 +136,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -147,7 +147,7 @@ assert a[i] == i def test_setslice_array(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -158,7 +158,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -177,7 +177,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -185,20 +185,20 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_len(self): - from numpy import array + from numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -207,7 +207,7 @@ assert c.shape == (3,) def test_add(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -220,7 +220,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(4, -1, -1)) c = a + b @@ -228,20 +228,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpy import array + from numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpy import array + from numpypy import array a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -250,14 +250,14 @@ assert c[i] == 4 def test_subtract(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -265,29 +265,29 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_mul(self): - import numpy + import numpypy - a = numpy.array(range(5)) + a = numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpy.array(range(5), dtype=bool) + a = numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpy.dtype(bool) - assert b[0] is numpy.False_ + assert b.dtype is numpypy.dtype(bool) + assert b[0] is numpypy.False_ for i in range(1, 5): - assert b[i] is numpy.True_ + assert b[i] is numpypy.True_ def test_mul_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -295,7 +295,7 @@ def test_div(self): from math import isnan - from numpy import array, dtype, inf + from numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -327,7 +327,7 @@ assert c[2] == -inf def test_div_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -335,14 +335,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -350,7 +350,7 @@ assert b[i] == i**i def test_pow_other(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -358,14 +358,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) b = a % a for i in range(5): @@ -378,7 +378,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -386,14 +386,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = +a for i in range(5): @@ -404,7 +404,7 @@ assert a[i] == i def test_neg(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = -a for i in range(5): @@ -415,7 +415,7 @@ assert a[i] == -i def test_abs(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = abs(a) for i in range(5): @@ -426,7 +426,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -440,7 +440,7 @@ assert c[1] == 4 def test_getslice(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -454,7 +454,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpy import array + from numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -462,7 +462,7 @@ assert s[i] == a[2*i+1] def test_slice_update(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -473,7 +473,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:2] b = array([10,11]) @@ -487,13 +487,13 @@ assert d[1] == 12 def test_mean(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -502,32 +502,32 @@ assert a.sum() == 5 def test_prod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a+a).max() == 11.4 def test_min(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmax() == 2 b = array([]) @@ -537,14 +537,14 @@ assert a.argmax() == 9 def test_argmin(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -553,7 +553,7 @@ assert b.all() == True def test_any(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -562,7 +562,7 @@ assert c.any() == False def test_dot(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.dot(a) == 30.0 @@ -570,14 +570,14 @@ assert a.dot(range(5)) == 30 def test_dot_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -590,7 +590,7 @@ def test_comparison(self): import operator - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -616,7 +616,7 @@ cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): - from numpy import fromstring + from numpypy import fromstring a = fromstring(self.data) for i in range(4): assert a[i] == i + 1 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpy import add, ufunc + from numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpy import add, multiply, sin + from numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpy import add, sin + from numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpy import negative, sign, minimum + from numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpy import array, negative, minimum + from numpypy import array, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpy import array, absolute + from numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpy import array, add + from numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpy import array, divide + from numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -112,7 +112,7 @@ assert c[i] == a[i] / b[i] def test_fabs(self): - from numpy import array, fabs + from numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -121,7 +121,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpy import array, minimum + from numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -130,7 +130,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpy import array, maximum + from numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -143,7 +143,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpy import array, multiply + from numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -152,7 +152,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpy import array, sign, dtype + from numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -171,7 +171,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpy import array, reciprocal + from numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -180,7 +180,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpy import array, subtract + from numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -189,7 +189,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpy import array, floor + from numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -198,7 +198,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpy import array, copysign + from numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -214,7 +214,7 @@ def test_exp(self): import math - from numpy import array, exp + from numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -228,7 +228,7 @@ def test_sin(self): import math - from numpy import array, sin + from numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -241,7 +241,7 @@ def test_cos(self): import math - from numpy import array, cos + from numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -250,7 +250,7 @@ def test_tan(self): import math - from numpy import array, tan + from numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -260,7 +260,7 @@ def test_arcsin(self): import math - from numpy import array, arcsin + from numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -274,7 +274,7 @@ def test_arccos(self): import math - from numpy import array, arccos + from numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -289,7 +289,7 @@ def test_arctan(self): import math - from numpy import array, arctan + from numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -302,7 +302,7 @@ def test_arcsinh(self): import math - from numpy import arcsinh, inf + from numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -310,7 +310,7 @@ def test_arctanh(self): import math - from numpy import arctanh + from numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -320,13 +320,13 @@ assert arctanh(v) == math.copysign(float("inf"), v) def test_reduce_errors(self): - from numpy import sin, add + from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpy import add, maximum + from numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -335,7 +335,7 @@ def test_comparisons(self): import operator - from numpy import equal, not_equal, less, less_equal, greater, greater_equal + from numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -567,6 +567,11 @@ import time import thread + # XXX workaround for now: to prevent deadlocks, call + # sys._current_frames() once before starting threads. + # This is an issue in non-translated versions only. + sys._current_frames() + thread_id = thread.get_ident() def other_thread(): print "thread started" diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,15 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - length = len(w_self.data) +def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): + char = w_char.intval + bytearray = w_bytearray.data + length = len(bytearray) start, stop = slicetype.unwrap_start_stop( space, length, w_start, w_stop, False) - return start, stop, length - -def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): - char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + __slots__ = () + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] @@ -245,7 +248,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + __slots__ = () + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + __slots__ = () + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + __slots__ = () + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -592,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + __slots__ = () + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -56,8 +57,9 @@ sl = space.newslice(w(start), w(stop), w(step)) mystart, mystop, mystep, slicelength = sl.indices4(space, length) assert len(range(length)[start:stop:step]) == slicelength - assert slice(start, stop, step).indices(length) == ( - mystart, mystop, mystep) + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) class AppTest_SliceObject: def test_new(self): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -50,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -62,9 +64,13 @@ assert space.isinstance_w(X(), space.w_str) + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + def test_withstrbuf_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject - space = gettestobjspace(withstrbuf=True) - assert space._get_interplevel_cls(space.w_str) is W_StringObject - + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + __slots__ = () + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + __slots__ = () + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -210,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -216,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -179,6 +209,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) @@ -238,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -329,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -1035,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1723,7 +1723,7 @@ class _subarray(_parentable): # only for direct_fieldptr() # and direct_arrayitems() _kind = "subarray" - _cache = weakref.WeakKeyDictionary() # parentarray -> {subarrays} + _cache = {} # TYPE -> weak{ parentarray -> {subarrays} } def __init__(self, TYPE, parent, baseoffset_or_fieldname): _parentable.__init__(self, TYPE) @@ -1781,10 +1781,15 @@ def _makeptr(parent, baseoffset_or_fieldname, solid=False): try: - cache = _subarray._cache.setdefault(parent, {}) + d = _subarray._cache[parent._TYPE] + except KeyError: + d = _subarray._cache[parent._TYPE] = weakref.WeakKeyDictionary() + try: + cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = _subarray._cache.setdefault(parent, {}) # try again + # try again + return _subarray._makeptr(parent, baseoffset_or_fieldname, solid) try: subarray = cache[baseoffset_or_fieldname] except KeyError: @@ -1805,14 +1810,18 @@ raise NotImplementedError('_subarray._getid()') def _cleanup_cache(): - newcache = weakref.WeakKeyDictionary() - for key, value in _subarray._cache.items(): - try: - if not key._was_freed(): - newcache[key] = value - except RuntimeError: - pass # ignore "accessing subxxx, but already gc-ed parent" - _subarray._cache = newcache + for T, d in _subarray._cache.items(): + newcache = weakref.WeakKeyDictionary() + for key, value in d.items(): + try: + if not key._was_freed(): + newcache[key] = value + except RuntimeError: + pass # ignore "accessing subxxx, but already gc-ed parent" + if newcache: + _subarray._cache[T] = newcache + else: + del _subarray._cache[T] _cleanup_cache = staticmethod(_cleanup_cache) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,35 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: + return not _lib_finite(y) and not _lib_isnan(y) + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +155,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +189,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +206,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +230,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +245,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +272,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +292,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +330,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +390,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -245,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): @@ -855,11 +862,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE, llmemory.Address): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = False + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,9 +742,10 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.Char) == (1, True) + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,19 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash + def f(i): + if i: + t = (None, "abc") + else: + t = ("abc", None) + return compute_hash(t) + + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) diff --git a/pypy/translator/backendopt/finalizer.py b/pypy/translator/backendopt/finalizer.py --- a/pypy/translator/backendopt/finalizer.py +++ b/pypy/translator/backendopt/finalizer.py @@ -4,7 +4,7 @@ class FinalizerError(Exception): """ __del__ marked as lightweight finalizer, but the analyzer did - not agreed + not agree """ class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer): @@ -23,7 +23,7 @@ def analyze_light_finalizer(self, graph): result = self.analyze_direct_call(graph) if (result is self.top_result() and - getattr(graph.func, '_is_light_finalizer_', False)): + getattr(graph.func, '_must_be_light_finalizer_', False)): raise FinalizerError(FinalizerError.__doc__, graph) return result diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -126,13 +126,13 @@ r = self.analyze(f, [], A.__del__.im_func) assert r - def test_is_light_finalizer_decorator(self): + def test_must_be_light_finalizer_decorator(self): S = lltype.GcStruct('S') - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def f(): lltype.malloc(S) - @rgc.is_light_finalizer + @rgc.must_be_light_finalizer def g(): pass self.analyze(g, []) # did not explode diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -521,13 +521,13 @@ rules = [ ('clean', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES) *.gc?? ../module_cache/*.gc??'), ('clean_noprof', '', 'rm -f $(OBJECTS) $(TARGET) $(GCMAPFILES) $(ASMFILES)'), - ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" $(TARGET)'), - ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" $(TARGET)'), - ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" $(TARGET)'), + ('debug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT" debug_target'), + ('debug_exc', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DDO_LOG_EXC" debug_target'), + ('debug_mem', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DTRIVIAL_MALLOC_DEBUG" debug_target'), ('no_obmalloc', '', '$(MAKE) CFLAGS="-g -O2 -DRPY_ASSERT -DNO_OBMALLOC" $(TARGET)'), - ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" $(TARGET)'), + ('linuxmemchk', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DLINUXMEMCHK" debug_target'), ('llsafer', '', '$(MAKE) CFLAGS="-O2 -DRPY_LL_ASSERT" $(TARGET)'), - ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" $(TARGET)'), + ('lldebug', '', '$(MAKE) CFLAGS="$(DEBUGFLAGS) -DRPY_ASSERT -DRPY_LL_ASSERT" debug_target'), ('profile', '', '$(MAKE) CFLAGS="-g -O1 -pg $(CFLAGS) -fno-omit-frame-pointer" LDFLAGS="-pg $(LDFLAGS)" $(TARGET)'), ] if self.has_profopt(): @@ -554,7 +554,7 @@ mk.definition('ASMLBLFILES', lblsfiles) mk.definition('GCMAPFILES', gcmapfiles) if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O2 -fomit-frame-pointer -g') @@ -618,9 +618,13 @@ else: if sys.platform == 'win32': - mk.definition('DEBUGFLAGS', '/Zi') + mk.definition('DEBUGFLAGS', '/MD /Zi') else: mk.definition('DEBUGFLAGS', '-O1 -g') + if sys.platform == 'win32': + mk.rule('debug_target', 'debugmode_$(DEFAULT_TARGET)', 'rem') + else: + mk.rule('debug_target', '$(TARGET)', '#') mk.write() #self.translator.platform, # , diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -818,6 +818,24 @@ func() assert open(filename).read() == "2" +if hasattr(posix, 'spawnve'): + def test_spawnve(): + filename = str(udir.join('test_spawnve.txt')) + progname = str(sys.executable) + scriptpath = udir.join('test_spawnve.py') + scriptpath.write('import os\n' + + 'f=open(%r,"w")\n' % filename + + 'f.write(os.environ["FOOBAR"])\n' + + 'f.close\n') + scriptname = str(scriptpath) + def does_stuff(): + l = [progname, scriptname] + pid = os.spawnve(os.P_NOWAIT, progname, l, {'FOOBAR': '42'}) + os.waitpid(pid, 0) + func = compile(does_stuff, []) + func() + assert open(filename).read() == "42" + def test_utime(): path = str(udir.ensure("test_utime.txt")) from time import time, sleep diff --git a/pypy/translator/cli/test/test_snippet.py b/pypy/translator/cli/test/test_snippet.py --- a/pypy/translator/cli/test/test_snippet.py +++ b/pypy/translator/cli/test/test_snippet.py @@ -28,14 +28,14 @@ res = self.interpret(fn, [], backendopt=False) def test_link_vars_overlapping(self): - from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift + from pypy.rlib.rarithmetic import ovfcheck def fn(maxofs): lastofs = 0 ofs = 1 while ofs < maxofs: lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -42,6 +42,8 @@ so_prefixes = ('',) + extra_libs = () + def __init__(self, cc): if self.__class__ is Platform: raise TypeError("You should not instantiate Platform class directly") @@ -102,6 +104,8 @@ bits = [self.__class__.__name__, 'cc=%r' % self.cc] for varname in self.relevant_environ: bits.append('%s=%r' % (varname, os.environ.get(varname))) + # adding sys.maxint to disambiguate windows + bits.append('%s=%r' % ('sys.maxint', sys.maxint)) return ' '.join(bits) # some helpers which seem to be cross-platform enough @@ -179,7 +183,8 @@ link_files = self._linkfiles(eci.link_files) export_flags = self._exportsymbols_link_flags(eci) return (library_dirs + list(self.link_flags) + export_flags + - link_files + list(eci.link_extra) + libraries) + link_files + list(eci.link_extra) + libraries + + list(self.extra_libs)) def _exportsymbols_link_flags(self, eci, relto=None): if eci.export_symbols: @@ -238,10 +243,13 @@ else: host_factory = Linux64 elif sys.platform == 'darwin': - from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 + from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC import platform - assert platform.machine() in ('i386', 'x86_64') - if sys.maxint <= 2147483647: + assert platform.machine() in ('Power Macintosh', 'i386', 'x86_64') + + if platform.machine() == 'Power Macintosh': + host_factory = Darwin_PowerPC + elif sys.maxint <= 2147483647: host_factory = Darwin_i386 else: host_factory = Darwin_x86_64 diff --git a/pypy/translator/platform/darwin.py b/pypy/translator/platform/darwin.py --- a/pypy/translator/platform/darwin.py +++ b/pypy/translator/platform/darwin.py @@ -71,6 +71,11 @@ link_flags = ('-arch', 'i386') cflags = ('-arch', 'i386', '-O3', '-fomit-frame-pointer') +class Darwin_PowerPC(Darwin):#xxx fixme, mwp + name = "darwin_powerpc" + link_flags = () + cflags = ('-O3', '-fomit-frame-pointer') + class Darwin_x86_64(Darwin): name = "darwin_x86_64" link_flags = ('-arch', 'x86_64') diff --git a/pypy/translator/platform/linux.py b/pypy/translator/platform/linux.py --- a/pypy/translator/platform/linux.py +++ b/pypy/translator/platform/linux.py @@ -6,7 +6,8 @@ class BaseLinux(BasePosix): name = "linux" - link_flags = ('-pthread', '-lrt') + link_flags = ('-pthread',) + extra_libs = ('-lrt',) cflags = ('-O3', '-pthread', '-fomit-frame-pointer', '-Wall', '-Wno-unused') standalone_only = () diff --git a/pypy/translator/platform/posix.py b/pypy/translator/platform/posix.py --- a/pypy/translator/platform/posix.py +++ b/pypy/translator/platform/posix.py @@ -140,7 +140,7 @@ ('DEFAULT_TARGET', exe_name.basename), ('SOURCES', rel_cfiles), ('OBJECTS', rel_ofiles), - ('LIBS', self._libs(eci.libraries)), + ('LIBS', self._libs(eci.libraries) + list(self.extra_libs)), ('LIBDIRS', self._libdirs(rel_libdirs)), ('INCLUDEDIRS', self._includedirs(rel_includedirs)), ('CFLAGS', cflags), diff --git a/pypy/translator/platform/test/test_darwin.py b/pypy/translator/platform/test/test_darwin.py --- a/pypy/translator/platform/test/test_darwin.py +++ b/pypy/translator/platform/test/test_darwin.py @@ -7,7 +7,7 @@ py.test.skip("Darwin only") from pypy.tool.udir import udir -from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64 +from pypy.translator.platform.darwin import Darwin_i386, Darwin_x86_64, Darwin_PowerPC from pypy.translator.platform.test.test_platform import TestPlatform as BasicTest from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -17,7 +17,7 @@ else: host_factory = Darwin_x86_64 else: - host_factory = Darwin + host_factory = Darwin_PowerPC class TestDarwin(BasicTest): platform = host_factory() diff --git a/pypy/translator/platform/test/test_posix.py b/pypy/translator/platform/test/test_posix.py --- a/pypy/translator/platform/test/test_posix.py +++ b/pypy/translator/platform/test/test_posix.py @@ -41,6 +41,7 @@ if self.strict_on_stderr: assert res.err == '' assert res.returncode == 0 + assert '-lrt' in tmpdir.join("Makefile").read() def test_link_files(self): tmpdir = udir.join('link_files' + self.__class__.__name__).ensure(dir=1) diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -294,6 +294,9 @@ ['$(CC_LINK) /nologo $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS) /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(TARGET)', '$(OBJECTS)', + ['$(CC_LINK) /nologo /DEBUG $(LDFLAGS) $(LDFLAGSEXTRA) $(OBJECTS) $(LINKFILES) /out:$@ $(LIBDIRS) $(LIBS)', + ]) if shared: m.definition('SHARED_IMPORT_LIB', so_name.new(ext='lib').basename) @@ -307,6 +310,9 @@ ['$(CC_LINK) /nologo main.obj $(SHARED_IMPORT_LIB) /out:$@ /MANIFEST /MANIFESTFILE:$*.manifest', 'mt.exe -nologo -manifest $*.manifest -outputresource:$@;1', ]) + m.rule('debugmode_$(DEFAULT_TARGET)', ['debugmode_$(TARGET)', 'main.obj'], + ['$(CC_LINK) /nologo /DEBUG main.obj $(SHARED_IMPORT_LIB) /out:$@' + ]) return m diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -111,16 +111,13 @@ # the while loop above will simplify recursively the new link def transform_ovfcheck(graph): - """The special function calls ovfcheck and ovfcheck_lshift need to + """The special function calls ovfcheck needs to be translated into primitive operations. ovfcheck is called directly after an operation that should be turned into an overflow-checked version. It is considered a syntax error if the resulting _ovf is not defined in objspace/flow/objspace.py. - ovfcheck_lshift is special because there is no preceding operation. - Instead, it will be replaced by an OP_LSHIFT_OVF operation. """ covf = Constant(rarithmetic.ovfcheck) - covfls = Constant(rarithmetic.ovfcheck_lshift) def check_syntax(opname): exlis = operation.implicit_exceptions.get("%s_ovf" % (opname,), []) @@ -154,9 +151,6 @@ op1.opname += '_ovf' del block.operations[i] block.renamevariables({op.result: op1.result}) - elif op.args[0] == covfls: - op.opname = 'lshift_ovf' - del op.args[0] def simplify_exceptions(graph): """The exception handling caused by non-implicit exceptions diff --git a/pypy/translator/test/snippet.py b/pypy/translator/test/snippet.py --- a/pypy/translator/test/snippet.py +++ b/pypy/translator/test/snippet.py @@ -1210,7 +1210,7 @@ return istk.top(), sstk.top() -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck def add_func(i=numtype): try: @@ -1253,7 +1253,7 @@ def lshift_func(i=numtype): try: hugo(2, 3, 5) - return ovfcheck_lshift((-maxint-1), i) + return ovfcheck((-maxint-1) << i) except (hugelmugel, OverflowError, StandardError, ValueError): raise diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -42,24 +42,6 @@ assert graph.startblock.operations[0].opname == 'int_mul_ovf' assert graph.startblock.operations[1].opname == 'int_sub' -def test_remove_ovfcheck_lshift(): - # check that ovfcheck_lshift() is handled - from pypy.rlib.rarithmetic import ovfcheck_lshift - def f(x): - try: - return ovfcheck_lshift(x, 2) - except OverflowError: - return -42 - graph, _ = translate(f, [int]) - assert len(graph.startblock.operations) == 1 - assert graph.startblock.operations[0].opname == 'int_lshift_ovf' - assert len(graph.startblock.operations[0].args) == 2 - assert len(graph.startblock.exits) == 2 - assert [link.exitcase for link in graph.startblock.exits] == \ - [None, OverflowError] - assert [link.target.operations for link in graph.startblock.exits] == \ - [(), ()] - def test_remove_ovfcheck_floordiv(): # check that ovfcheck() is handled even if the operation raises # and catches another exception too, here ZeroDivisionError From noreply at buildbot.pypy.org Fri Nov 18 11:22:29 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 11:22:29 +0100 (CET) Subject: [pypy-commit] pypy default: what I remember that goes into the release Message-ID: <20111118102229.27CF882A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49520:7c06e045a439 Date: 2011-11-18 12:22 +0200 http://bitbucket.org/pypy/pypy/changeset/7c06e045a439/ Log: what I remember that goes into the release diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,44 @@ +===================== +PyPy 1.7 +===================== + +Highlights +========== + +* numerous performance improvements, PyPy 1.7 is xxx faster than 1.6 + +* numerous bugfixes, compatibility fixes + +* windows fixes + +* stackless and JIT integration + +* numpy progress - dtypes, numpy -> numpypy renaming + +* brand new JSON encoder + +* improved memory footprint on heavy users of C APIs example - tornado + +* cpyext progress + +Things that didn't make it, expect in 1.8 soon +============================================== + +* list strategies + +* multi-dimensional arrays for numpy + +* ARM backend + +* PPC backend + +Things we're working on with unclear ETA +======================================== + +* windows 64 (?) + +* Py3k + +* SSE for numpy + +* specialized objects From noreply at buildbot.pypy.org Fri Nov 18 13:40:31 2011 From: noreply at buildbot.pypy.org (cfbolz) Date: Fri, 18 Nov 2011 13:40:31 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: uh? this test is clearly dict order dependent Message-ID: <20111118124031.1686982A9D@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: set-strategies Changeset: r49521:617ce44795c0 Date: 2011-11-18 13:40 +0100 http://bitbucket.org/pypy/pypy/changeset/617ce44795c0/ Log: uh? this test is clearly dict order dependent diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -791,11 +791,13 @@ raises(TypeError, s.discard, set([1])) def test_create_set_from_set(self): + # no sharing x = set([1,2,3]) y = set(x) - x.pop() - assert x == set([2,3]) + a = x.pop() assert y == set([1,2,3]) + assert len(x) == 2 + assert x.union(set([a])) == y def test_never_change_frozenset(self): a = frozenset([1,2]) From noreply at buildbot.pypy.org Fri Nov 18 14:42:27 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:42:27 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: introduced IntAttribute Message-ID: <20111118134227.CBE7A82A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49522:b16fead0c3f3 Date: 2011-11-16 18:24 +0100 http://bitbucket.org/pypy/pypy/changeset/b16fead0c3f3/ Log: introduced IntAttribute diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -125,15 +125,15 @@ return None @jit.elidable - def _get_new_attr(self, name, index): - selector = name, index + def _get_new_attr(self, name, index, attrclass): + key = name, index, attrclass cache = self.cache_attrs if cache is None: cache = self.cache_attrs = {} - attr = cache.get(selector, None) + attr = cache.get(key, None) if attr is None: - attr = PlainAttribute(selector, self) - cache[selector] = attr + attr = attrclass(key, self) + cache[key] = attr return attr @jit.look_inside_iff(lambda self, obj, selector, w_value: @@ -141,8 +141,9 @@ jit.isconstant(selector[0]) and jit.isconstant(selector[1])) def add_attr(self, obj, selector, w_value): + attrclass = get_attrclass_from_value(self.space, w_value) # grumble, jit needs this - attr = self._get_new_attr(selector[0], selector[1]) + attr = self._get_new_attr(selector[0], selector[1], attrclass) oldattr = obj._get_mapdict_map() if not jit.we_are_jitted(): size_est = (oldattr._size_estimate + attr.size_estimate() @@ -264,8 +265,10 @@ terminator = terminator.devolved_dict_terminator return Terminator.set_terminator(self, obj, terminator) -class PlainAttribute(AbstractAttribute): +class AbstractStoredAttribute(AbstractAttribute): + _immutable_fields_ = ['selector', 'position', 'back'] + def __init__(self, selector, back): AbstractAttribute.__init__(self, back.space, back.terminator) self.selector = selector @@ -277,17 +280,6 @@ w_value = self.read(obj, self.selector) new_obj._get_mapdict_map().add_attr(new_obj, self.selector, w_value) - def read_attr(self, obj): - # XXX do the unerasing (and wrapping) here - erased = obj._mapdict_read_storage(self.position) - w_value = unerase_item(erased) - return w_value - - def write_attr(self, obj, w_value): - # XXX do the unerasing (and unwrapping) here - erased = erase_item(w_value) - obj._mapdict_write_storage(self.position, erased) - def delete(self, obj, selector): if selector == self.selector: # ok, attribute is deleted @@ -333,6 +325,41 @@ def __repr__(self): return "" % (self.selector, self.position, self.back) +class PlainAttribute(AbstractStoredAttribute): + + erase_item, unerase_item = rerased.new_erasing_pair("mapdict storage object item") + erase_item = staticmethod(erase_item) + unerase_item = staticmethod(unerase_item) + + def read_attr(self, obj): + erased = obj._mapdict_read_storage(self.position) + w_value = self.unerase_item(erased) + return w_value + + def write_attr(self, obj, w_value): + erased = self.erase_item(w_value) + obj._mapdict_write_storage(self.position, erased) + +class IntAttribute(AbstractStoredAttribute): + + erase_item, unerase_item = rerased.erase_int, rerased.unerase_int + erase_item = staticmethod(erase_item) + unerase_item = staticmethod(unerase_item) + + def read_attr(self, obj): + erased = obj._mapdict_read_storage(self.position) + value = self.unerase_item(erased) + return self.space.wrap(value) + + def write_attr(self, obj, w_value): + erased = self.erase_item(self.space.int_w(w_value)) + obj._mapdict_write_storage(self.position, erased) + +def get_attrclass_from_value(space, w_value): + if space.is_w(space.type(w_value), space.w_int): + return IntAttribute + return PlainAttribute + def _become(w_obj, new_obj): # this is like the _become method, really, but we cannot use that due to # RPython reasons @@ -524,7 +551,6 @@ memo_get_subclass_of_correct_size._annspecialcase_ = "specialize:memo" _subclass_cache = {} -erase_item, unerase_item = rerased.new_erasing_pair("mapdict storage item") erase_list, unerase_list = rerased.new_erasing_pair("mapdict storage list") def _make_subclass_size_n(supercls, n): diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -5,12 +5,14 @@ space = FakeSpace() class Class(object): - def __init__(self, hasdict=True): + def __init__(self, hasdict=True, sp=None): self.hasdict = True + if sp is None: + sp = space if hasdict: - self.terminator = DictTerminator(space, self) + self.terminator = DictTerminator(sp, self) else: - self.terminator = NoDictTerminator(space, self) + self.terminator = NoDictTerminator(sp, self) def instantiate(self, sp=None): if sp is None: @@ -24,10 +26,10 @@ hasdict = False def erase_storage_items(items): - return [erase_item(item) for item in items] + return [IntAttribute.erase_item(item) for item in items] def unerase_storage_items(storage): - return [unerase_item(item) for item in storage] + return [IntAttribute.unerase_item(item) for item in storage] def test_plain_attribute(): @@ -247,7 +249,6 @@ assert obj.getdict(space) is obj.getdict(space) assert obj.getdict(space).length() == 3 - def test_materialize_r_dict(): cls = Class() obj = cls.instantiate() @@ -301,6 +302,50 @@ obj.setdictvalue(space, a, 50) assert c.terminator.size_estimate() in [(i + 10) // 2, (i + 11) // 2] +class TestTypeSpecializedAttributes(object): + def setup_class(cls): + cls.space = gettestobjspace(**{"objspace.std.withmapdict": True}) + + def test_attributes(self): + space = self.space + cls = Class(sp=space) + obj1 = cls.instantiate() + obj1.setdictvalue(space, "x", space.wrap(1)) + #assert space.eq_w(obj1.getdictvalue(space, "x"), space.wrap(1)) + + obj2 = cls.instantiate() + w_str = space.wrap("string") + obj2.setdictvalue(space, "x", w_str) + #assert space.eq_w(obj1.getdictvalue(space, "x"), w_str) + + assert obj1.map is not obj2.map + assert isinstance(obj1.map, IntAttribute) + + obj3 = cls.instantiate() + obj3.setdictvalue(space, "x", space.wrap(5)) + + assert obj1.map is obj3.map + + assert IntAttribute.unerase_item(obj1.storage[0]) == 1 + assert PlainAttribute.unerase_item(obj2.storage[0]) == w_str + + def test_add_more_attributes(self): + space = self.space + cls = Class(sp=space) + + obj1 = cls.instantiate() + obj1.setdictvalue(space, "x", space.wrap(1)) + obj1.setdictvalue(space, "y", space.wrap(2)) + + def test_switch_attribute_types(self): + space = self.space + cls = Class(sp=space) + obj1 = cls.instantiate() + obj1.setdictvalue(space, "x", space.wrap(1)) + assert isinstance(obj1.map, IntAttribute) + obj1.setdictvalue(space, "y", space.wrap("str")) + assert isinstance(obj1.map, PlainAttribute) + # ___________________________________________________________ # dict tests From noreply at buildbot.pypy.org Fri Nov 18 14:42:29 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:42:29 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: removed some old prints Message-ID: <20111118134229.0523582A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49523:26d4cedd131b Date: 2011-11-16 20:39 +0100 http://bitbucket.org/pypy/pypy/changeset/26d4cedd131b/ Log: removed some old prints diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -311,12 +311,12 @@ cls = Class(sp=space) obj1 = cls.instantiate() obj1.setdictvalue(space, "x", space.wrap(1)) - #assert space.eq_w(obj1.getdictvalue(space, "x"), space.wrap(1)) + assert space.eq_w(obj1.getdictvalue(space, "x"), space.wrap(1)) obj2 = cls.instantiate() w_str = space.wrap("string") obj2.setdictvalue(space, "x", w_str) - #assert space.eq_w(obj1.getdictvalue(space, "x"), w_str) + assert space.eq_w(obj2.getdictvalue(space, "x"), w_str) assert obj1.map is not obj2.map assert isinstance(obj1.map, IntAttribute) @@ -501,9 +501,7 @@ a.x = 42 assert a.x == 42 - print "read once" assert a.x == 42 - print "read twice" def test_simple(self): class A(object): @@ -727,7 +725,6 @@ INVALID_CACHE_ENTRY.failure_counter = 0 # w_res = space.call_function(w_func) - print w_res assert space.eq_w(w_res, space.wrap(42)) # entry = w_code._mapdict_caches[nameindex] @@ -758,14 +755,9 @@ def f(): return a.x # - print "1" assert a.x == 42 - print "2" assert a.x == 42 - print "3" - print "first check" res = self.check(f, 'x') - print "second check" assert res == (1, 0, 0) res = self.check(f, 'x') assert res == (0, 1, 0) From noreply at buildbot.pypy.org Fri Nov 18 14:42:30 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:42:30 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: fix: former PlainAttribute is now AbstractStoredAttribute Message-ID: <20111118134230.327F782A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49524:51f420b3af95 Date: 2011-11-16 20:40 +0100 http://bitbucket.org/pypy/pypy/changeset/51f420b3af95/ Log: fix: former PlainAttribute is now AbstractStoredAttribute diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -99,7 +99,7 @@ return attr def _findmap(self, selector): - while isinstance(self, PlainAttribute): + while isinstance(self, AbstractStoredAttribute): if selector == self.selector: return self self = self.back From noreply at buildbot.pypy.org Fri Nov 18 14:42:31 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:42:31 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: fixed read with new selector (still not sure if this is the right fix) Message-ID: <20111118134231.5F3C082A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49525:ca3e806187f6 Date: 2011-11-18 14:41 +0100 http://bitbucket.org/pypy/pypy/changeset/ca3e806187f6/ Log: fixed read with new selector (still not sure if this is the right fix) diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -100,7 +100,8 @@ def _findmap(self, selector): while isinstance(self, AbstractStoredAttribute): - if selector == self.selector: + # XXX is this the right fix? + if selector == self.selector[:2]: return self self = self.back return None @@ -277,11 +278,12 @@ self._size_estimate = self.length() * NUM_DIGITS_POW2 def _copy_attr(self, obj, new_obj): - w_value = self.read(obj, self.selector) + #XXX this the right fix? + w_value = self.read(obj, self.selector[:2]) new_obj._get_mapdict_map().add_attr(new_obj, self.selector, w_value) def delete(self, obj, selector): - if selector == self.selector: + if selector == self.selector[:2]: # ok, attribute is deleted return self.back.copy(obj) new_obj = self.back.delete(obj, selector) From noreply at buildbot.pypy.org Fri Nov 18 14:42:32 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:42:32 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: started fixing tests to work with new selector Message-ID: <20111118134232.9032E82A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49526:6f9cd7e5bdeb Date: 2011-11-18 14:42 +0100 http://bitbucket.org/pypy/pypy/changeset/6f9cd7e5bdeb/ Log: started fixing tests to work with new selector diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -28,14 +28,14 @@ def erase_storage_items(items): return [IntAttribute.erase_item(item) for item in items] -def unerase_storage_items(storage): - return [IntAttribute.unerase_item(item) for item in storage] +def unerase_storage_items(storage, uneraser=IntAttribute): + return [uneraser.unerase_item(item) for item in storage] def test_plain_attribute(): w_cls = "class" - aa = PlainAttribute(("b", DICT), - PlainAttribute(("a", DICT), + aa = IntAttribute(("b", DICT, IntAttribute), + IntAttribute(("a", DICT, IntAttribute), Terminator(space, w_cls))) assert aa.space is space assert aa.terminator.w_cls is w_cls @@ -115,8 +115,10 @@ obj.setdictvalue(space, "a", 50) obj.setdictvalue(space, "b", 60) obj.setdictvalue(space, "c", 70) + print obj.storage assert unerase_storage_items(obj.storage) == [50, 60, 70] res = obj.deldictvalue(space, dattr) + print obj.storage assert res s = [50, 60, 70] del s[i] @@ -159,7 +161,8 @@ assert obj.getdictvalue(space, "a") == 50 assert obj.getdictvalue(space, "b") == 60 assert obj.getdictvalue(space, "c") == 70 - assert unerase_storage_items(obj.storage) == [50, 60, 70, lifeline1] + assert unerase_storage_items(obj.storage[:-1], IntAttribute) == [50, 60, 70] + assert unerase_storage_items(obj.storage[-1:], PlainAttribute) == [lifeline1] assert obj.getweakref() is lifeline1 obj2 = c.instantiate() @@ -323,6 +326,7 @@ obj3 = cls.instantiate() obj3.setdictvalue(space, "x", space.wrap(5)) + assert space.eq_w(obj3.getdictvalue(space, "x"), space.wrap(5)) assert obj1.map is obj3.map @@ -336,15 +340,27 @@ obj1 = cls.instantiate() obj1.setdictvalue(space, "x", space.wrap(1)) obj1.setdictvalue(space, "y", space.wrap(2)) + assert space.eq_w(obj1.getdictvalue(space, "x"), space.wrap(1)) + assert space.eq_w(obj1.getdictvalue(space, "y"), space.wrap(2)) + + obj2 = cls.instantiate() + obj2.setdictvalue(space, "x", space.wrap(5)) # this is shared + obj2.setdictvalue(space, "y", space.wrap("str")) # this not + assert space.eq_w(obj2.getdictvalue(space, "x"), space.wrap(5)) + assert space.eq_w(obj2.getdictvalue(space, "y"), space.wrap("str")) def test_switch_attribute_types(self): space = self.space cls = Class(sp=space) obj1 = cls.instantiate() + obj1.setdictvalue(space, "x", space.wrap(1)) assert isinstance(obj1.map, IntAttribute) + assert space.eq_w(obj1.getdictvalue(space, "x"), space.wrap(1)) + obj1.setdictvalue(space, "y", space.wrap("str")) assert isinstance(obj1.map, PlainAttribute) + assert space.eq_w(obj1.getdictvalue(space, "y"), space.wrap("str")) # ___________________________________________________________ # dict tests From noreply at buildbot.pypy.org Fri Nov 18 14:43:35 2011 From: noreply at buildbot.pypy.org (l.diekmann) Date: Fri, 18 Nov 2011 14:43:35 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: removed some old debug prints Message-ID: <20111118134335.D1FC082A9D@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: type-specialized-instances Changeset: r49527:c76cccda3d75 Date: 2011-11-18 14:43 +0100 http://bitbucket.org/pypy/pypy/changeset/c76cccda3d75/ Log: removed some old debug prints diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -115,10 +115,8 @@ obj.setdictvalue(space, "a", 50) obj.setdictvalue(space, "b", 60) obj.setdictvalue(space, "c", 70) - print obj.storage assert unerase_storage_items(obj.storage) == [50, 60, 70] res = obj.deldictvalue(space, dattr) - print obj.storage assert res s = [50, 60, 70] del s[i] From noreply at buildbot.pypy.org Fri Nov 18 16:49:11 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 16:49:11 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Plan which tests are still needed (and the corresponding code). Message-ID: <20111118154911.2850782A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49528:211a72d5b8b8 Date: 2011-11-18 14:28 +0100 http://bitbucket.org/pypy/pypy/changeset/211a72d5b8b8/ Log: Plan which tests are still needed (and the corresponding code). diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -703,6 +703,28 @@ jump() """) + def test_rewrite_assembler_maximal_size(self): + xxx + + def test_rewrite_assembler_variable_size(self): + xxx + + def test_rewrite_assembler_new_with_vtable(self): + self.check_rewrite(""" + [p1] + p0 = new_with_vtable(descr=vdescr) + jump() + """, """ + [p1] + p0 = malloc_gc(%(vdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + ... + jump() + """) + + def test_rewrite_assembler_newstr_newunicode(self): + xxx + def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) From noreply at buildbot.pypy.org Fri Nov 18 16:49:12 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 16:49:12 +0100 (CET) Subject: [pypy-commit] pypy default: Test for crashes when we call StringBuilder.build() several times over a growing Message-ID: <20111118154912.55D6A82A9E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49529:612f7784a228 Date: 2011-11-18 16:43 +0100 http://bitbucket.org/pypy/pypy/changeset/612f7784a228/ Log: Test for crashes when we call StringBuilder.build() several times over a growing builder, like W_StringBufferObject does. diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1318,6 +1318,23 @@ res = self.run('string_builder_over_allocation') assert res[1000] == 'y' + def definestr_string_builder_multiple_builds(cls): + import gc + def fn(_): + s = StringBuilder(4) + got = [] + for i in range(50): + s.append(chr(i)) + got.append(s.build()) + gc.collect() + return '/'.join(got) + return fn + + def test_string_builder_multiple_builds(self): + res = self.run('string_builder_multiple_builds') + assert res == '/'.join([''.join(map(chr, range(length))) + for length in range(1, 51)]) + def define_nursery_hash_base(cls): from pypy.rlib.objectmodel import compute_identity_hash class A: From noreply at buildbot.pypy.org Fri Nov 18 16:49:13 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 16:49:13 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for 612f7784a228. Message-ID: <20111118154913.83E3382A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49530:5269f0ee1ed2 Date: 2011-11-18 16:48 +0100 http://bitbucket.org/pypy/pypy/changeset/5269f0ee1ed2/ Log: Fix for 612f7784a228. diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -123,9 +123,10 @@ def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 - if final_size == ll_builder.allocated: - return ll_builder.buf - return rgc.ll_shrink_array(ll_builder.buf, final_size) + if final_size < ll_builder.allocated: + ll_builder.allocated = final_size + ll_builder.buf = rgc.ll_shrink_array(ll_builder.buf, final_size) + return ll_builder.buf @classmethod def ll_is_true(cls, ll_builder): diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1324,15 +1324,15 @@ s = StringBuilder(4) got = [] for i in range(50): - s.append(chr(i)) + s.append(chr(33+i)) got.append(s.build()) gc.collect() - return '/'.join(got) + return ' '.join(got) return fn def test_string_builder_multiple_builds(self): res = self.run('string_builder_multiple_builds') - assert res == '/'.join([''.join(map(chr, range(length))) + assert res == ' '.join([''.join(map(chr, range(33, 33+length))) for length in range(1, 51)]) def define_nursery_hash_base(cls): From noreply at buildbot.pypy.org Fri Nov 18 16:51:46 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 16:51:46 +0100 (CET) Subject: [pypy-commit] pypy default: Python 2.5 compatibility. Message-ID: <20111118155146.6F07A82A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49531:5019a28f7e6a Date: 2011-11-18 16:51 +0100 http://bitbucket.org/pypy/pypy/changeset/5019a28f7e6a/ Log: Python 2.5 compatibility. diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -1,3 +1,4 @@ +from __future__ import with_statement import py from pypy.rlib.rstring import StringBuilder, UnicodeBuilder From noreply at buildbot.pypy.org Fri Nov 18 17:26:38 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 17:26:38 +0100 (CET) Subject: [pypy-commit] pypy default: use autodetect if you don't want to run llgraph Message-ID: <20111118162638.2597482A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49532:cb316be6a96f Date: 2011-11-18 18:25 +0200 http://bitbucket.org/pypy/pypy/changeset/cb316be6a96f/ Log: use autodetect if you don't want to run llgraph diff --git a/pypy/jit/backend/conftest.py b/pypy/jit/backend/conftest.py --- a/pypy/jit/backend/conftest.py +++ b/pypy/jit/backend/conftest.py @@ -12,7 +12,7 @@ help="choose a fixed random seed") group.addoption('--backend', action="store", default='llgraph', - choices=['llgraph', 'x86'], + choices=['llgraph', 'cpu'], dest="backend", help="select the backend to run the functions with") group.addoption('--block-length', action="store", type="int", diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -495,9 +495,9 @@ if pytest.config.option.backend == 'llgraph': from pypy.jit.backend.llgraph.runner import LLtypeCPU return LLtypeCPU(None) - elif pytest.config.option.backend == 'x86': - from pypy.jit.backend.x86.runner import CPU386 - return CPU386(None, None) + elif pytest.config.option.backend == 'cpu': + from pypy.jit.backend.detect_cpu import getcpuclass + return getcpuclass()(None, None) else: assert 0, "unknown backend %r" % pytest.config.option.backend From noreply at buildbot.pypy.org Fri Nov 18 17:26:39 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 17:26:39 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111118162639.58C3B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49533:54e972f7c8ba Date: 2011-11-18 18:26 +0200 http://bitbucket.org/pypy/pypy/changeset/54e972f7c8ba/ Log: merge diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -123,9 +123,10 @@ def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 - if final_size == ll_builder.allocated: - return ll_builder.buf - return rgc.ll_shrink_array(ll_builder.buf, final_size) + if final_size < ll_builder.allocated: + ll_builder.allocated = final_size + ll_builder.buf = rgc.ll_shrink_array(ll_builder.buf, final_size) + return ll_builder.buf @classmethod def ll_is_true(cls, ll_builder): diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -1,3 +1,4 @@ +from __future__ import with_statement import py from pypy.rlib.rstring import StringBuilder, UnicodeBuilder diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1318,6 +1318,23 @@ res = self.run('string_builder_over_allocation') assert res[1000] == 'y' + def definestr_string_builder_multiple_builds(cls): + import gc + def fn(_): + s = StringBuilder(4) + got = [] + for i in range(50): + s.append(chr(33+i)) + got.append(s.build()) + gc.collect() + return ' '.join(got) + return fn + + def test_string_builder_multiple_builds(self): + res = self.run('string_builder_multiple_builds') + assert res == ' '.join([''.join(map(chr, range(33, 33+length))) + for length in range(1, 51)]) + def define_nursery_hash_base(cls): from pypy.rlib.objectmodel import compute_identity_hash class A: From noreply at buildbot.pypy.org Fri Nov 18 17:30:13 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 17:30:13 +0100 (CET) Subject: [pypy-commit] pypy default: move test_zll_random to llsupport Message-ID: <20111118163013.F10B082A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49534:81dcafabbba9 Date: 2011-11-18 18:29 +0200 http://bitbucket.org/pypy/pypy/changeset/81dcafabbba9/ Log: move test_zll_random to llsupport diff --git a/pypy/jit/backend/x86/test/test_zll_random.py b/pypy/jit/backend/llsupport/test/test_zll_random.py rename from pypy/jit/backend/x86/test/test_zll_random.py rename to pypy/jit/backend/llsupport/test/test_zll_random.py From noreply at buildbot.pypy.org Fri Nov 18 17:49:01 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 17:49:01 +0100 (CET) Subject: [pypy-commit] pypy default: Move the stress test directly to backend/test and rename it Message-ID: <20111118164901.1F0B982A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49535:dcc33426aadc Date: 2011-11-18 18:48 +0200 http://bitbucket.org/pypy/pypy/changeset/dcc33426aadc/ Log: Move the stress test directly to backend/test and rename it diff --git a/pypy/jit/backend/llsupport/test/test_zll_random.py b/pypy/jit/backend/test/test_zll_stress.py rename from pypy/jit/backend/llsupport/test/test_zll_random.py rename to pypy/jit/backend/test/test_zll_stress.py From noreply at buildbot.pypy.org Fri Nov 18 19:02:46 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 19:02:46 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: The interface that I really want. Message-ID: <20111118180246.23ECD82A9E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49537:f84b61015be5 Date: 2011-11-18 19:02 +0100 http://bitbucket.org/pypy/pypy/changeset/f84b61015be5/ Log: The interface that I really want. diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -470,7 +470,8 @@ 'NEW_ARRAY/1d', 'NEWSTR/1', 'NEWUNICODE/1', - 'MALLOC_GC/1', # added by llsupport/gc: GC malloc of ConstInt bytes + 'MALLOC_GC/3', # added by llsupport/gc: malloc of C1+N*C2 bytes + 'MALLOC_NURSERY/1', # added by llsupport/gc: nursery malloc, const bytes '_MALLOC_LAST', 'FORCE_TOKEN/0', 'VIRTUAL_REF/2', # removed before it's passed to the backend From noreply at buildbot.pypy.org Fri Nov 18 19:02:44 2011 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 Nov 2011 19:02:44 +0100 (CET) Subject: [pypy-commit] pypy op_malloc_gc: Move the new tests to their own file. Message-ID: <20111118180244.EC00682A9D@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: op_malloc_gc Changeset: r49536:feae28c11d92 Date: 2011-11-18 19:02 +0100 http://bitbucket.org/pypy/pypy/changeset/feae28c11d92/ Log: Move the new tests to their own file. diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -547,184 +547,6 @@ assert operations[1].getarg(2) == v_value assert operations[1].getdescr() == array_descr - def check_rewrite(self, frm_operations, to_operations): - self.gc_ll_descr.translate_support_code = False - try: - S = lltype.GcStruct('S', ('x', lltype.Signed), - ('y', lltype.Signed)) - sdescr = get_size_descr(self.gc_ll_descr, S) - sdescr.tid = 1234 - # - T = lltype.GcStruct('T', ('y', lltype.Signed), - ('z', lltype.Signed), - ('t', lltype.Signed)) - tdescr = get_size_descr(self.gc_ll_descr, T) - tdescr.tid = 5678 - # - A = lltype.GcArray(lltype.Signed) - adescr = get_array_descr(self.gc_ll_descr, A) - adescr.tid = 4321 - alendescr = get_field_arraylen_descr(self.gc_ll_descr, A) - # - B = lltype.GcArray(lltype.Char) - bdescr = get_array_descr(self.gc_ll_descr, B) - bdescr.tid = 8765 - blendescr = get_field_arraylen_descr(self.gc_ll_descr, B) - # - E = lltype.GcStruct('Empty') - edescr = get_size_descr(self.gc_ll_descr, E) - edescr.tid = 9000 - # - tiddescr = self.gc_ll_descr.fielddescr_tid - WORD = globals()['WORD'] - # - ops = parse(frm_operations, namespace=locals()) - expected = parse(to_operations % Evaluator(locals()), - namespace=locals()) - operations = get_deep_immutable_oplist(ops.operations) - operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - finally: - self.gc_ll_descr.translate_support_code = True - equaloplists(operations, expected.operations) - - def test_rewrite_assembler_new_to_malloc(self): - self.check_rewrite(""" - [p1] - p0 = new(descr=sdescr) - jump() - """, """ - [p1] - p0 = malloc_gc(%(sdescr.size)d) - setfield_gc(p0, 1234, descr=tiddescr) - jump() - """) - - def test_rewrite_assembler_new3_to_malloc(self): - self.check_rewrite(""" - [] - p0 = new(descr=sdescr) - p1 = new(descr=tdescr) - p2 = new(descr=sdescr) - jump() - """, """ - [] - p0 = malloc_gc(%(sdescr.size + tdescr.size + sdescr.size)d) - setfield_gc(p0, 1234, descr=tiddescr) - p1 = int_add(p0, %(sdescr.size)d) - setfield_gc(p1, 5678, descr=tiddescr) - p2 = int_add(p1, %(tdescr.size)d) - setfield_gc(p2, 1234, descr=tiddescr) - jump() - """) - - def test_rewrite_assembler_new_array_fixed_to_malloc(self): - self.check_rewrite(""" - [] - p0 = new_array(10, descr=adescr) - jump() - """, """ - [] - p0 = malloc_gc(%(adescr.get_base_size(False) + \ - 10 * adescr.get_item_size(False))d) - setfield_gc(p0, 4321, descr=tiddescr) - setfield_gc(p0, 10, descr=alendescr) - jump() - """) - - def test_rewrite_assembler_new_and_new_array_fixed_to_malloc(self): - self.check_rewrite(""" - [] - p0 = new(descr=sdescr) - p1 = new_array(10, descr=adescr) - jump() - """, """ - [] - p0 = malloc_gc(%(sdescr.size + \ - adescr.get_base_size(False) + \ - 10 * adescr.get_item_size(False))d) - setfield_gc(p0, 1234, descr=tiddescr) - p1 = int_add(p0, %(sdescr.size)d) - setfield_gc(p1, 4321, descr=tiddescr) - setfield_gc(p1, 10, descr=alendescr) - jump() - """) - - def test_rewrite_assembler_round_up(self): - self.check_rewrite(""" - [] - p0 = new_array(6, descr=bdescr) - jump() - """, """ - [] - p0 = malloc_gc(%(adescr.get_base_size(False) + 8)d) - setfield_gc(p0, 8765, descr=tiddescr) - setfield_gc(p0, 6, descr=blendescr) - jump() - """) - - def test_rewrite_assembler_round_up_always(self): - self.check_rewrite(""" - [] - p0 = new_array(5, descr=bdescr) - p1 = new_array(5, descr=bdescr) - p2 = new_array(5, descr=bdescr) - p3 = new_array(5, descr=bdescr) - jump() - """, """ - [] - p0 = malloc_gc(%(4 * (adescr.get_base_size(False) + 8))d) - setfield_gc(p0, 8765, descr=tiddescr) - setfield_gc(p0, 5, descr=blendescr) - p1 = int_add(p0, %(adescr.get_base_size(False) + 8)d) - setfield_gc(p1, 8765, descr=tiddescr) - setfield_gc(p1, 5, descr=blendescr) - p2 = int_add(p1, %(adescr.get_base_size(False) + 8)d) - setfield_gc(p2, 8765, descr=tiddescr) - setfield_gc(p2, 5, descr=blendescr) - p3 = int_add(p2, %(adescr.get_base_size(False) + 8)d) - setfield_gc(p3, 8765, descr=tiddescr) - setfield_gc(p3, 5, descr=blendescr) - jump() - """) - - def test_rewrite_assembler_minimal_size(self): - self.check_rewrite(""" - [] - p0 = new(descr=edescr) - p1 = new(descr=edescr) - jump() - """, """ - [] - p0 = malloc_gc(%(4*WORD)d) - setfield_gc(p0, 9000, descr=tiddescr) - p1 = int_add(p0, %(2*WORD)d) - setfield_gc(p1, 9000, descr=tiddescr) - jump() - """) - - def test_rewrite_assembler_maximal_size(self): - xxx - - def test_rewrite_assembler_variable_size(self): - xxx - - def test_rewrite_assembler_new_with_vtable(self): - self.check_rewrite(""" - [p1] - p0 = new_with_vtable(descr=vdescr) - jump() - """, """ - [p1] - p0 = malloc_gc(%(vdescr.size)d) - setfield_gc(p0, 1234, descr=tiddescr) - ... - jump() - """) - - def test_rewrite_assembler_newstr_newunicode(self): - xxx - def test_rewrite_assembler_initialization_store(self): S = lltype.GcStruct('S', ('parent', OBJECT), ('x', lltype.Signed)) @@ -794,11 +616,6 @@ operations, []) equaloplists(operations, expected.operations) -class Evaluator(object): - def __init__(self, scope): - self.scope = scope - def __getitem__(self, key): - return eval(key, self.scope) class TestFrameworkMiniMark(TestFramework): gc = 'minimark' diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -0,0 +1,267 @@ +from pypy.jit.backend.llsupport.descr import * +from pypy.jit.backend.llsupport.gc import * +from pypy.jit.metainterp.gc import get_description +from pypy.jit.tool.oparser import parse + + +class Evaluator(object): + def __init__(self, scope): + self.scope = scope + def __getitem__(self, key): + return eval(key, self.scope) + + +class RewriteTests(object): + def check_rewrite(self, frm_operations, to_operations): + self.gc_ll_descr.translate_support_code = False + try: + S = lltype.GcStruct('S', ('x', lltype.Signed), + ('y', lltype.Signed)) + sdescr = get_size_descr(self.gc_ll_descr, S) + sdescr.tid = 1234 + # + T = lltype.GcStruct('T', ('y', lltype.Signed), + ('z', lltype.Signed), + ('t', lltype.Signed)) + tdescr = get_size_descr(self.gc_ll_descr, T) + tdescr.tid = 5678 + # + A = lltype.GcArray(lltype.Signed) + adescr = get_array_descr(self.gc_ll_descr, A) + adescr.tid = 4321 + alendescr = get_field_arraylen_descr(self.gc_ll_descr, A) + # + B = lltype.GcArray(lltype.Char) + bdescr = get_array_descr(self.gc_ll_descr, B) + bdescr.tid = 8765 + blendescr = get_field_arraylen_descr(self.gc_ll_descr, B) + # + E = lltype.GcStruct('Empty') + edescr = get_size_descr(self.gc_ll_descr, E) + edescr.tid = 9000 + # + tiddescr = self.gc_ll_descr.fielddescr_tid + WORD = globals()['WORD'] + # + ops = parse(frm_operations, namespace=locals()) + expected = parse(to_operations % Evaluator(locals()), + namespace=locals()) + operations = self.gc_ll_descr.rewrite_assembler(None, + ops.operations, + []) + finally: + self.gc_ll_descr.translate_support_code = True + equaloplists(operations, expected.operations) + + def test_new_array_variable(self): + self.check_rewrite(""" + [i1] + p0 = new_array(i1, descr=adescr) + jump() + """, """ + [i1] + p0 = malloc_gc(%(adescr.get_base_size(False))d, \ + i1, %(adescr.get_item_size(False))d) + setfield_gc(p0, 4321, descr=tiddescr) + setfield_gc(p0, 10, descr=alendescr) + jump() + """) + + +class TestBoehm(RewriteTests): + def setup_method(self, meth): + self.gc_ll_descr = GcLLDescr_boehm(None, None, None) + + def test_new(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + jump() + """, """ + [p1] + p0 = malloc_gc(%(sdescr.size)d, 0, 0) + setfield_gc(p0, 1234, descr=tiddescr) + jump() + """) + + def test_no_collapsing(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + p1 = new(descr=sdescr) + jump() + """, """ + [p1] + p0 = malloc_gc(%(sdescr.size)d, 0, 0) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = malloc_gc(%(sdescr.size)d, 0, 0) + setfield_gc(p1, 1234, descr=tiddescr) + jump() + """) + + def test_new_array_fixed(self): + self.check_rewrite(""" + [] + p0 = new_array(10, descr=adescr) + jump() + """, """ + [] + p0 = malloc_gc(%(adescr.get_base_size(False))d, \ + 10, %(adescr.get_item_size(False))d) + setfield_gc(p0, 4321, descr=tiddescr) + setfield_gc(p0, 10, descr=alendescr) + jump() + """) + + +class TestFramework(RewriteTests): + def setup_method(self, meth): + class config_(object): + class translation(object): + gc = 'hybrid' + gcrootfinder = 'asmgcc' + gctransformer = 'framework' + gcremovetypeptr = False + class FakeTranslator(object): + config = config_ + gcdescr = get_description(config_) + self.gc_ll_descr = GcLLDescr_framework(gcdescr, FakeTranslator(), + None, None) + + def test_rewrite_assembler_new_to_malloc(self): + self.check_rewrite(""" + [p1] + p0 = new(descr=sdescr) + jump() + """, """ + [p1] + p0 = malloc_nursery(%(sdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + jump() + """) + + def test_rewrite_assembler_new3_to_malloc(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + p1 = new(descr=tdescr) + p2 = new(descr=sdescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(sdescr.size + tdescr.size + sdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = int_add(p0, %(sdescr.size)d) + setfield_gc(p1, 5678, descr=tiddescr) + p2 = int_add(p1, %(tdescr.size)d) + setfield_gc(p2, 1234, descr=tiddescr) + jump() + """) + + def test_rewrite_assembler_new_array_fixed_to_malloc(self): + self.check_rewrite(""" + [] + p0 = new_array(10, descr=adescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(adescr.get_base_size(False) + \ + 10 * adescr.get_item_size(False))d) + setfield_gc(p0, 4321, descr=tiddescr) + setfield_gc(p0, 10, descr=alendescr) + jump() + """) + + def test_rewrite_assembler_new_and_new_array_fixed_to_malloc(self): + self.check_rewrite(""" + [] + p0 = new(descr=sdescr) + p1 = new_array(10, descr=adescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(sdescr.size + \ + adescr.get_base_size(False) + \ + 10 * adescr.get_item_size(False))d) + setfield_gc(p0, 1234, descr=tiddescr) + p1 = int_add(p0, %(sdescr.size)d) + setfield_gc(p1, 4321, descr=tiddescr) + setfield_gc(p1, 10, descr=alendescr) + jump() + """) + + def test_rewrite_assembler_round_up(self): + self.check_rewrite(""" + [] + p0 = new_array(6, descr=bdescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(adescr.get_base_size(False) + 8)d) + setfield_gc(p0, 8765, descr=tiddescr) + setfield_gc(p0, 6, descr=blendescr) + jump() + """) + + def test_rewrite_assembler_round_up_always(self): + self.check_rewrite(""" + [] + p0 = new_array(5, descr=bdescr) + p1 = new_array(5, descr=bdescr) + p2 = new_array(5, descr=bdescr) + p3 = new_array(5, descr=bdescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(4 * (adescr.get_base_size(False) + 8))d) + setfield_gc(p0, 8765, descr=tiddescr) + setfield_gc(p0, 5, descr=blendescr) + p1 = int_add(p0, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p1, 8765, descr=tiddescr) + setfield_gc(p1, 5, descr=blendescr) + p2 = int_add(p1, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p2, 8765, descr=tiddescr) + setfield_gc(p2, 5, descr=blendescr) + p3 = int_add(p2, %(adescr.get_base_size(False) + 8)d) + setfield_gc(p3, 8765, descr=tiddescr) + setfield_gc(p3, 5, descr=blendescr) + jump() + """) + + def test_rewrite_assembler_minimal_size(self): + self.check_rewrite(""" + [] + p0 = new(descr=edescr) + p1 = new(descr=edescr) + jump() + """, """ + [] + p0 = malloc_nursery(%(4*WORD)d) + setfield_gc(p0, 9000, descr=tiddescr) + p1 = int_add(p0, %(2*WORD)d) + setfield_gc(p1, 9000, descr=tiddescr) + jump() + """) + + def test_rewrite_assembler_maximal_size(self): + xxx + + def test_rewrite_assembler_variable_size(self): + xxx + + def test_rewrite_assembler_new_with_vtable(self): + self.check_rewrite(""" + [p1] + p0 = new_with_vtable(descr=vdescr) + jump() + """, """ + [p1] + p0 = malloc_nursery(%(vdescr.size)d) + setfield_gc(p0, 1234, descr=tiddescr) + ... + jump() + """) + + def test_rewrite_assembler_newstr_newunicode(self): + xxx + From noreply at buildbot.pypy.org Fri Nov 18 20:00:51 2011 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 18 Nov 2011 20:00:51 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: remove unused parameter Message-ID: <20111118190051.954D982A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49538:ad7839a7ff62 Date: 2011-11-18 21:00 +0200 http://bitbucket.org/pypy/pypy/changeset/ad7839a7ff62/ Log: remove unused parameter diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -360,7 +360,6 @@ res = StringBuilder() res.append("array(") concrete = self.get_concrete() - start = True dtype = concrete.find_dtype() if not concrete.find_size(): res.append('[]') From noreply at buildbot.pypy.org Fri Nov 18 23:13:31 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 18 Nov 2011 23:13:31 +0100 (CET) Subject: [pypy-commit] pypy default: made the assembler sources compile so far Message-ID: <20111118221331.482E182A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r49539:7d9e78a91dce Date: 2011-11-18 23:12 +0100 http://bitbucket.org/pypy/pypy/changeset/7d9e78a91dce/ Log: made the assembler sources compile so far diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -3,16 +3,22 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform +import sys cdir = py.path.local(pypydir) / 'translator' / 'c' - +_sep_mods = [] +if sys.platform == 'win32': + _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] + eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], + separate_module_files = _sep_mods ) + rffi_platform.verify_eci(eci.convert_sources_to_files()) def llexternal(name, args, result, **kwds): diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -59,7 +59,11 @@ compile_args = self._compile_args_from_eci(eci, standalone) ofiles = [] for cfile in cfiles: - ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) + # Windows hack: use masm for files ending in .asm + if str(cfile).lower().endswith('.asm'): + ofiles.append(self._compile_c_file(self.masm, cfile, [])) + else: + ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) return ofiles def execute(self, executable, args=None, env=None, compilation_info=None): diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -165,7 +165,7 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') - args = ['/nologo', '/c'] + compile_args + [str(cfile), '/Fo%s' % (oname,)] + args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname From noreply at buildbot.pypy.org Fri Nov 18 23:35:08 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Fri, 18 Nov 2011 23:35:08 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: merge Message-ID: <20111118223508.2D77B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49540:30d0ed4adb92 Date: 2011-11-18 23:34 +0100 http://bitbucket.org/pypy/pypy/changeset/30d0ed4adb92/ Log: merge diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -412,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,44 @@ +===================== +PyPy 1.7 +===================== + +Highlights +========== + +* numerous performance improvements, PyPy 1.7 is xxx faster than 1.6 + +* numerous bugfixes, compatibility fixes + +* windows fixes + +* stackless and JIT integration + +* numpy progress - dtypes, numpy -> numpypy renaming + +* brand new JSON encoder + +* improved memory footprint on heavy users of C APIs example - tornado + +* cpyext progress + +Things that didn't make it, expect in 1.8 soon +============================================== + +* list strategies + +* multi-dimensional arrays for numpy + +* ARM backend + +* PPC backend + +Things we're working on with unclear ETA +======================================== + +* windows 64 (?) + +* Py3k + +* SSE for numpy + +* specialized objects diff --git a/pypy/jit/backend/conftest.py b/pypy/jit/backend/conftest.py --- a/pypy/jit/backend/conftest.py +++ b/pypy/jit/backend/conftest.py @@ -12,7 +12,7 @@ help="choose a fixed random seed") group.addoption('--backend', action="store", default='llgraph', - choices=['llgraph', 'x86'], + choices=['llgraph', 'cpu'], dest="backend", help="select the backend to run the functions with") group.addoption('--block-length', action="store", type="int", diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.rlib import libffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -325,12 +326,12 @@ loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) -def compile_add_descr(loop, ofs, type, arg_types): +def compile_add_descr(loop, ofs, type, arg_types, extrainfo, width): from pypy.jit.backend.llgraph.runner import Descr loop = _from_opaque(loop) op = loop.operations[-1] assert isinstance(type, str) and len(type) == 1 - op.descr = Descr(ofs, type, arg_types=arg_types) + op.descr = Descr(ofs, type, arg_types=arg_types, extrainfo=extrainfo, width=width) def compile_add_descr_arg(loop, ofs, type, arg_types): from pypy.jit.backend.llgraph.runner import Descr @@ -825,6 +826,16 @@ else: raise NotImplementedError + def op_getinteriorfield_raw(self, descr, array, index): + if descr.typeinfo == REF: + return do_getinteriorfield_raw_ptr(array, index, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_getinteriorfield_raw_int(array, index, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_getinteriorfield_raw_float(array, index, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setinteriorfield_gc(self, descr, array, index, newvalue): if descr.typeinfo == REF: return do_setinteriorfield_gc_ptr(array, index, descr.ofs, @@ -838,6 +849,16 @@ else: raise NotImplementedError + def op_setinteriorfield_raw(self, descr, array, index, newvalue): + if descr.typeinfo == REF: + return do_setinteriorfield_raw_ptr(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == INT: + return do_setinteriorfield_raw_int(array, index, newvalue, descr.width, descr.ofs) + elif descr.typeinfo == FLOAT: + return do_setinteriorfield_raw_float(array, index, newvalue, descr.width, descr.ofs) + else: + raise NotImplementedError + def op_setfield_gc(self, fielddescr, struct, newvalue): if fielddescr.typeinfo == REF: do_setfield_gc_ptr(struct, fielddescr.ofs, newvalue) @@ -1403,6 +1424,14 @@ struct = array._obj.container.getitem(index) return cast_to_ptr(_getinteriorfield_gc(struct, fieldnum)) +def _getinteriorfield_raw(ffitype, array, index, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_getitem(ffitype, width, addr, index, ofs) + +def do_getinteriorfield_raw_int(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1479,7 +1508,14 @@ return do_setinteriorfield_gc do_setinteriorfield_gc_int = new_setinteriorfield_gc(cast_from_int) do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) -do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) +do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) + +def new_setinteriorfield_raw(ffitype): + def do_setinteriorfield_raw(array, index, newvalue, width, ofs): + addr = rffi.cast(rffi.VOIDP, array) + return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) + return do_setinteriorfield_raw +do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,8 +23,10 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + self.ofs = ofs + self.width = width self.typeinfo = typeinfo self.extrainfo = extrainfo self.name = name @@ -119,14 +121,14 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags) + count_fields_if_immut, ffi_flags, width) self._descrs[key] = descr return descr @@ -179,7 +181,8 @@ descr = op.getdescr() if isinstance(descr, Descr): llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo, - descr.arg_types) + descr.arg_types, descr.extrainfo, + descr.width) if (isinstance(descr, history.LoopToken) and op.getopnum() != rop.JUMP): llimpl.compile_add_loop_token(c, descr) @@ -324,10 +327,22 @@ def interiorfielddescrof(self, A, fieldname): S = A.OF - ofs2 = symbolic.get_size(A) + width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, extrainfo=ofs2) + return self.getdescr(ofs, token[0], name=fieldname, width=width) + + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to distinguish dynamic and static descrs + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -111,6 +111,16 @@ def repr_of_descr(self): return '<%s %s %s>' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,38 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -495,9 +495,9 @@ if pytest.config.option.backend == 'llgraph': from pypy.jit.backend.llgraph.runner import LLtypeCPU return LLtypeCPU(None) - elif pytest.config.option.backend == 'x86': - from pypy.jit.backend.x86.runner import CPU386 - return CPU386(None, None) + elif pytest.config.option.backend == 'cpu': + from pypy.jit.backend.detect_cpu import getcpuclass + return getcpuclass()(None, None) else: assert 0, "unknown backend %r" % pytest.config.option.backend diff --git a/pypy/jit/backend/x86/test/test_zll_random.py b/pypy/jit/backend/test/test_zll_stress.py rename from pypy/jit/backend/x86/test/test_zll_random.py rename to pypy/jit/backend/test/test_zll_stress.py diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,8 +1601,10 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) assert not temp_loc.is_xmm @@ -1619,6 +1621,8 @@ ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1634,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1158,6 +1160,8 @@ self.Perform(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, sign_loc], result_loc) + consider_getinteriorfield_raw = consider_getinteriorfield_gc + def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register argloc = self.loc(op.getarg(0)) @@ -1430,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -461,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -479,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,16 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break + else: + assert 0, "jitdriver of set_param() not found" else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -9,7 +9,8 @@ unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, - cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, readbufferproc) + cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, + readbufferproc) from pypy.module.cpyext.pyobject import from_ref from pypy.module.cpyext.pyerrors import PyErr_Occurred from pypy.module.cpyext.state import State @@ -175,6 +176,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_objobjargproc(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 2) + w_key, w_value = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, w_value) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.wrap(res) + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -397,3 +397,31 @@ def __str__(self): return "text" assert module.tp_str(C()) == "text" + + def test_mp_ass_subscript(self): + module = self.import_extension('foo', [ + ("new_obj", "METH_NOARGS", + ''' + PyObject *obj; + Foo_Type.tp_as_mapping = &tp_as_mapping; + tp_as_mapping.mp_ass_subscript = mp_ass_subscript; + if (PyType_Ready(&Foo_Type) < 0) return NULL; + obj = PyObject_New(PyObject, &Foo_Type); + return obj; + ''' + )], + ''' + static int + mp_ass_subscript(PyObject *self, PyObject *key, PyObject *value) + { + PyErr_SetNone(PyExc_ZeroDivisionError); + return -1; + } + PyMappingMethods tp_as_mapping; + static PyTypeObject Foo_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "foo.foo", + }; + ''') + obj = module.new_obj() + raises(ZeroDivisionError, obj.__setitem__, 5, None) diff --git a/pypy/module/math/test/test_translated.py b/pypy/module/math/test/test_translated.py new file mode 100644 --- /dev/null +++ b/pypy/module/math/test/test_translated.py @@ -0,0 +1,10 @@ +import py +from pypy.translator.c.test.test_genc import compile +from pypy.module.math.interp_math import _gamma + + +def test_gamma_overflow(): + f = compile(_gamma, [float]) + assert f(10.0) == 362880.0 + py.test.raises(OverflowError, f, 1720.0) + py.test.raises(OverflowError, f, 172.0) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -2,7 +2,7 @@ class Module(MixedModule): - applevel_name = 'numpy' + applevel_name = 'numpypy' interpleveldefs = { 'array': 'interp_numarray.SingleDimArray', diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpy +import numpypy inf = float("inf") @@ -13,5 +13,5 @@ def mean(a): if not hasattr(a, "mean"): - a = numpy.array(a) + a = numpypy.array(a) return a.mean() diff --git a/pypy/module/micronumpy/bench/add.py b/pypy/module/micronumpy/bench/add.py --- a/pypy/module/micronumpy/bench/add.py +++ b/pypy/module/micronumpy/bench/add.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): a = numpy.zeros(10000000) diff --git a/pypy/module/micronumpy/bench/iterate.py b/pypy/module/micronumpy/bench/iterate.py --- a/pypy/module/micronumpy/bench/iterate.py +++ b/pypy/module/micronumpy/bench/iterate.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): sum = 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpy import dtype + from numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpy import dtype + from numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,57 +36,57 @@ assert str(d) == "bool" def test_bool_array(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 2.5], dtype='?') - assert a[0] is numpy.False_ + a = array([0, 1, 2, 2.5], dtype='?') + assert a[0] is False_ for i in xrange(1, 4): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_copy_array_with_dtype(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 3], dtype=long) + a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = numpy.array([0, 1, 2, 3], dtype=bool) - assert a[0] is numpy.False_ + a = array([0, 1, 2, 3], dtype=bool) + assert a[0] is False_ b = a.copy() - assert b[0] is numpy.False_ + assert b[0] is False_ def test_zeros_bool(self): - import numpy + from numpypy import zeros, False_ - a = numpy.zeros(10, dtype=bool) + a = zeros(10, dtype=bool) for i in range(10): - assert a[i] is numpy.False_ + assert a[i] is False_ def test_ones_bool(self): - import numpy + from numpypy import ones, True_ - a = numpy.ones(10, dtype=bool) + a = ones(10, dtype=bool) for i in range(10): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_zeros_long(self): - from numpy import zeros + from numpypy import zeros a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 0 def test_ones_long(self): - from numpy import ones + from numpypy import ones a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 def test_overflow(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,12 +156,12 @@ assert b[i] == i * 2 def test_shape(self): - from numpy import dtype + from numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpy import dtype + from numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,19 +3,19 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpy import array, mean + from numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpy import array, average + from numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_constants(self): import math - from numpy import inf, e + from numpypy import inf, e assert type(inf) is float assert inf == float("inf") assert e == math.e - assert type(e) is float \ No newline at end of file + assert type(e) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -4,12 +4,12 @@ class AppTestNumArray(BaseNumpyAppTest): def test_type(self): - from numpy import array + from numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_init(self): - from numpy import zeros + from numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -18,7 +18,7 @@ assert a[13] == 5.3 def test_size(self): - from numpy import array + from numpypy import array # XXX fixed on multidim branch #assert array(3).size == 1 a = array([1, 2, 3]) @@ -30,13 +30,13 @@ Test that empty() works. """ - from numpy import empty + from numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpy import ones + from numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -45,19 +45,19 @@ assert a[2] == 4 def test_copy(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.copy() for i in xrange(5): assert b[i] == a[i] def test_iterator_init(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a[3] == 3 def test_repr(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -72,7 +72,7 @@ assert repr(a) == "array([True, False, True, False], dtype=bool)" def test_repr_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -81,7 +81,7 @@ assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" def test_str(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -100,7 +100,7 @@ assert str(a) == "[0 1 2 3 4]" def test_str_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -109,7 +109,7 @@ assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" def test_getitem(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -118,7 +118,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -128,7 +128,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpy import array + from numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -136,7 +136,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -147,7 +147,7 @@ assert a[i] == i def test_setslice_array(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -158,7 +158,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -177,7 +177,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -185,20 +185,20 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_len(self): - from numpy import array + from numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -207,7 +207,7 @@ assert c.shape == (3,) def test_add(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -220,7 +220,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(4, -1, -1)) c = a + b @@ -228,20 +228,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpy import array + from numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpy import array + from numpypy import array a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -250,14 +250,14 @@ assert c[i] == 4 def test_subtract(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -265,29 +265,29 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_mul(self): - import numpy + import numpypy - a = numpy.array(range(5)) + a = numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpy.array(range(5), dtype=bool) + a = numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpy.dtype(bool) - assert b[0] is numpy.False_ + assert b.dtype is numpypy.dtype(bool) + assert b[0] is numpypy.False_ for i in range(1, 5): - assert b[i] is numpy.True_ + assert b[i] is numpypy.True_ def test_mul_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -295,7 +295,7 @@ def test_div(self): from math import isnan - from numpy import array, dtype, inf + from numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -327,7 +327,7 @@ assert c[2] == -inf def test_div_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -335,14 +335,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -350,7 +350,7 @@ assert b[i] == i**i def test_pow_other(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -358,14 +358,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) b = a % a for i in range(5): @@ -378,7 +378,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -386,14 +386,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = +a for i in range(5): @@ -404,7 +404,7 @@ assert a[i] == i def test_neg(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = -a for i in range(5): @@ -415,7 +415,7 @@ assert a[i] == -i def test_abs(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = abs(a) for i in range(5): @@ -426,7 +426,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -440,7 +440,7 @@ assert c[1] == 4 def test_getslice(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -454,7 +454,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpy import array + from numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -462,7 +462,7 @@ assert s[i] == a[2*i+1] def test_slice_update(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -473,7 +473,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:2] b = array([10,11]) @@ -487,13 +487,13 @@ assert d[1] == 12 def test_mean(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -502,32 +502,32 @@ assert a.sum() == 5 def test_prod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a+a).max() == 11.4 def test_min(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmax() == 2 b = array([]) @@ -537,14 +537,14 @@ assert a.argmax() == 9 def test_argmin(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -553,7 +553,7 @@ assert b.all() == True def test_any(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -562,7 +562,7 @@ assert c.any() == False def test_dot(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.dot(a) == 30.0 @@ -570,14 +570,14 @@ assert a.dot(range(5)) == 30 def test_dot_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -590,7 +590,7 @@ def test_comparison(self): import operator - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -616,7 +616,7 @@ cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): - from numpy import fromstring + from numpypy import fromstring a = fromstring(self.data) for i in range(4): assert a[i] == i + 1 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpy import add, ufunc + from numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpy import add, multiply, sin + from numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpy import add, sin + from numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpy import negative, sign, minimum + from numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpy import array, negative, minimum + from numpypy import array, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpy import array, absolute + from numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpy import array, add + from numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpy import array, divide + from numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -112,7 +112,7 @@ assert c[i] == a[i] / b[i] def test_fabs(self): - from numpy import array, fabs + from numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -121,7 +121,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpy import array, minimum + from numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -130,7 +130,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpy import array, maximum + from numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -143,7 +143,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpy import array, multiply + from numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -152,7 +152,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpy import array, sign, dtype + from numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -171,7 +171,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpy import array, reciprocal + from numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -180,7 +180,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpy import array, subtract + from numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -189,7 +189,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpy import array, floor + from numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -198,7 +198,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpy import array, copysign + from numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -214,7 +214,7 @@ def test_exp(self): import math - from numpy import array, exp + from numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -228,7 +228,7 @@ def test_sin(self): import math - from numpy import array, sin + from numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -241,7 +241,7 @@ def test_cos(self): import math - from numpy import array, cos + from numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -250,7 +250,7 @@ def test_tan(self): import math - from numpy import array, tan + from numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -260,7 +260,7 @@ def test_arcsin(self): import math - from numpy import array, arcsin + from numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -274,7 +274,7 @@ def test_arccos(self): import math - from numpy import array, arccos + from numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -289,7 +289,7 @@ def test_arctan(self): import math - from numpy import array, arctan + from numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -302,7 +302,7 @@ def test_arcsinh(self): import math - from numpy import arcsinh, inf + from numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -310,7 +310,7 @@ def test_arctanh(self): import math - from numpy import arctanh + from numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -320,13 +320,13 @@ assert arctanh(v) == math.copysign(float("inf"), v) def test_reduce_errors(self): - from numpy import sin, add + from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpy import add, maximum + from numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -335,7 +335,7 @@ def test_comparisons(self): import operator - from numpy import equal, not_equal, less, less_equal, greater, greater_equal + from numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -567,6 +567,11 @@ import time import thread + # XXX workaround for now: to prevent deadlocks, call + # sys._current_frames() once before starting threads. + # This is an issue in non-translated versions only. + sys._current_frames() + thread_id = thread.get_ident() def other_thread(): print "thread started" diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -3,16 +3,22 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform +import sys cdir = py.path.local(pypydir) / 'translator' / 'c' - +_sep_mods = [] +if sys.platform == 'win32': + _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] + eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], + separate_module_files = _sep_mods ) + rffi_platform.verify_eci(eci.convert_sources_to_files()) def llexternal(name, args, result, **kwds): diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -31,9 +31,6 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" -_LITTLE_ENDIAN = sys.byteorder == 'little' -_BIG_ENDIAN = sys.byteorder == 'big' - if _WIN32: from pypy.rlib import rwin32 @@ -218,26 +215,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) @@ -346,38 +365,15 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # This is for primitive types. Note that the exact type of 'arg' may be - # different from the expected 'c_size'. To cope with that, we fall back - # to a byte-by-byte copy. + # this is for primitive types. For structures and arrays + # would be something different (more dynamic) TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - TP_size = rffi.sizeof(TP) - c_size = intmask(ffitp.c_size) - # if both types have the same size, we can directly write the - # value to the buffer - if c_size == TP_size: - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg - else: - # needs byte-by-byte copying. Make sure 'arg' is an integer type. - # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. - assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE - if TP_size <= rffi.sizeof(lltype.Signed): - arg = rffi.cast(lltype.Unsigned, arg) - else: - arg = rffi.cast(lltype.UnsignedLongLong, arg) - if _LITTLE_ENDIAN: - for i in range(c_size): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - elif _BIG_ENDIAN: - for i in range(c_size-1, -1, -1): - ll_buf[i] = chr(arg & 0xFF) - arg >>= 8 - else: - raise AssertionError + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' + # type defs for callback and closure userdata USERDATA_P = lltype.Ptr(lltype.ForwardReference()) CALLBACK_TP = lltype.Ptr(lltype.FuncType([rffi.VOIDPP, rffi.VOIDP, USERDATA_P], diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -249,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -340,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1724,7 +1724,7 @@ class _subarray(_parentable): # only for direct_fieldptr() # and direct_arrayitems() _kind = "subarray" - _cache = weakref.WeakKeyDictionary() # parentarray -> {subarrays} + _cache = {} # TYPE -> weak{ parentarray -> {subarrays} } def __init__(self, TYPE, parent, baseoffset_or_fieldname): _parentable.__init__(self, TYPE) @@ -1782,10 +1782,15 @@ def _makeptr(parent, baseoffset_or_fieldname, solid=False): try: - cache = _subarray._cache.setdefault(parent, {}) + d = _subarray._cache[parent._TYPE] + except KeyError: + d = _subarray._cache[parent._TYPE] = weakref.WeakKeyDictionary() + try: + cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = _subarray._cache.setdefault(parent, {}) # try again + # try again + return _subarray._makeptr(parent, baseoffset_or_fieldname, solid) try: subarray = cache[baseoffset_or_fieldname] except KeyError: @@ -1806,14 +1811,18 @@ raise NotImplementedError('_subarray._getid()') def _cleanup_cache(): - newcache = weakref.WeakKeyDictionary() - for key, value in _subarray._cache.items(): - try: - if not key._was_freed(): - newcache[key] = value - except RuntimeError: - pass # ignore "accessing subxxx, but already gc-ed parent" - _subarray._cache = newcache + for T, d in _subarray._cache.items(): + newcache = weakref.WeakKeyDictionary() + for key, value in d.items(): + try: + if not key._was_freed(): + newcache[key] = value + except RuntimeError: + pass # ignore "accessing subxxx, but already gc-ed parent" + if newcache: + _subarray._cache[T] = newcache + else: + del _subarray._cache[T] _cleanup_cache = staticmethod(_cleanup_cache) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -127,9 +127,12 @@ return y != y def ll_math_isinf(y): - if use_library_isinf_isnan and not jit.we_are_jitted(): + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: return not _lib_finite(y) and not _lib_isnan(y) - return (y + VERY_LARGE_FLOAT) == y + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -123,9 +123,10 @@ def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 - if final_size == ll_builder.allocated: - return ll_builder.buf - return rgc.ll_shrink_array(ll_builder.buf, final_size) + if final_size < ll_builder.allocated: + ll_builder.allocated = final_size + ll_builder.buf = rgc.ll_shrink_array(ll_builder.buf, final_size) + return ll_builder.buf @classmethod def ll_is_true(cls, ll_builder): diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -866,12 +866,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if (not isinstance(tp, lltype.Primitive) or - tp in (FLOAT, DOUBLE) or - cast(lltype.SignedLongLong, cast(tp, -1)) < 0): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE, llmemory.Address): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = True + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -743,8 +743,9 @@ assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] assert size_and_sign(lltype.Char) == (1, True) - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py @@ -116,6 +116,8 @@ return ootype.oounicode(ch, -1) def ll_strhash(s): + if not s: + return 0 return s.ll_hash() def ll_strfasthash(s): diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -1,3 +1,4 @@ +from __future__ import with_statement import py from pypy.rlib.rstring import StringBuilder, UnicodeBuilder diff --git a/pypy/rpython/test/test_rtuple.py b/pypy/rpython/test/test_rtuple.py --- a/pypy/rpython/test/test_rtuple.py +++ b/pypy/rpython/test/test_rtuple.py @@ -180,6 +180,19 @@ res2 = self.interpret(f, [27, 12]) assert res1 != res2 + def test_constant_tuple_hash_str(self): + from pypy.rlib.objectmodel import compute_hash + def f(i): + if i: + t = (None, "abc") + else: + t = ("abc", None) + return compute_hash(t) + + res1 = self.interpret(f, [0]) + res2 = self.interpret(f, [1]) + assert res1 != res2 + def test_tuple_to_list(self): def f(i, j): return list((i, j)) diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1318,6 +1318,23 @@ res = self.run('string_builder_over_allocation') assert res[1000] == 'y' + def definestr_string_builder_multiple_builds(cls): + import gc + def fn(_): + s = StringBuilder(4) + got = [] + for i in range(50): + s.append(chr(33+i)) + got.append(s.build()) + gc.collect() + return ' '.join(got) + return fn + + def test_string_builder_multiple_builds(self): + res = self.run('string_builder_multiple_builds') + assert res == ' '.join([''.join(map(chr, range(33, 33+length))) + for length in range(1, 51)]) + def define_nursery_hash_base(cls): from pypy.rlib.objectmodel import compute_identity_hash class A: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -59,7 +59,11 @@ compile_args = self._compile_args_from_eci(eci, standalone) ofiles = [] for cfile in cfiles: - ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) + # Windows hack: use masm for files ending in .asm + if str(cfile).lower().endswith('.asm'): + ofiles.append(self._compile_c_file(self.masm, cfile, [])) + else: + ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) return ofiles def execute(self, executable, args=None, env=None, compilation_info=None): diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -179,7 +179,7 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') - args = ['/nologo', '/c'] + compile_args + [str(cfile), '/Fo%s' % (oname,)] + args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname From noreply at buildbot.pypy.org Sat Nov 19 01:51:59 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 19 Nov 2011 01:51:59 +0100 (CET) Subject: [pypy-commit] pypy default: maybe wrong merge or wrong update. Anyway, fixed Message-ID: <20111119005159.2D03982A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r49541:7a0e4c3ee81a Date: 2011-11-19 01:51 +0100 http://bitbucket.org/pypy/pypy/changeset/7a0e4c3ee81a/ Log: maybe wrong merge or wrong update. Anyway, fixed diff --git a/pypy/jit/codewriter/codewriter.py b/pypy/jit/codewriter/codewriter.py --- a/pypy/jit/codewriter/codewriter.py +++ b/pypy/jit/codewriter/codewriter.py @@ -104,6 +104,8 @@ else: name = 'unnamed' % id(ssarepr) i = 1 + # escape names for windows + #name = name.replace('', '_(lambda)_') extra = '' while name+extra in self._seen_files: i += 1 diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -80,7 +80,7 @@ shared_only = () environ = None - def __init__(self, cc=None): + def __init__(self, cc=None, x64=False): Platform.__init__(self, 'cl.exe') if msvc_compiler_environ: self.c_environ = os.environ.copy() @@ -103,9 +103,16 @@ env=self.c_environ) r = re.search('Macro Assembler', stderr) if r is None and os.path.exists('c:/masm32/bin/ml.exe'): - self.masm = 'c:/masm32/bin/ml.exe' + masm32 = 'c:/masm32/bin/ml.exe' + masm64 = 'c:/masm64/bin/ml64.exe' else: - self.masm = 'ml.exe' + masm32 = 'ml.exe' + masm64 = 'ml64.exe' + + if x64: + self.masm = masm64 + else: + self.masm = masm32 # Install debug options only when interpreter is in debug mode if sys.executable.lower().endswith('_d.exe'): From noreply at buildbot.pypy.org Sat Nov 19 02:01:47 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 19 Nov 2011 02:01:47 +0100 (CET) Subject: [pypy-commit] pypy default: escape '' for windows Message-ID: <20111119010147.D17DE82A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r49542:11f0c5fd62bd Date: 2011-11-19 02:01 +0100 http://bitbucket.org/pypy/pypy/changeset/11f0c5fd62bd/ Log: escape '' for windows diff --git a/pypy/jit/codewriter/codewriter.py b/pypy/jit/codewriter/codewriter.py --- a/pypy/jit/codewriter/codewriter.py +++ b/pypy/jit/codewriter/codewriter.py @@ -105,7 +105,7 @@ name = 'unnamed' % id(ssarepr) i = 1 # escape names for windows - #name = name.replace('', '_(lambda)_') + name = name.replace('', '_(lambda)_') extra = '' while name+extra in self._seen_files: i += 1 From noreply at buildbot.pypy.org Sat Nov 19 08:52:13 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 08:52:13 +0100 (CET) Subject: [pypy-commit] pypy default: fix the test Message-ID: <20111119075213.BDFDE82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49543:a12c7868cd94 Date: 2011-11-19 09:44 +0200 http://bitbucket.org/pypy/pypy/changeset/a12c7868cd94/ Log: fix the test diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: From noreply at buildbot.pypy.org Sat Nov 19 08:52:15 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 08:52:15 +0100 (CET) Subject: [pypy-commit] pypy release-1.7.x: fix the test Message-ID: <20111119075215.03CCD82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.7.x Changeset: r49544:ff4af8f31882 Date: 2011-11-19 09:44 +0200 http://bitbucket.org/pypy/pypy/changeset/ff4af8f31882/ Log: fix the test diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: From noreply at buildbot.pypy.org Sat Nov 19 08:52:16 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 08:52:16 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111119075216.3176B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49545:c2d0f2933d2f Date: 2011-11-19 09:51 +0200 http://bitbucket.org/pypy/pypy/changeset/c2d0f2933d2f/ Log: merge diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: From noreply at buildbot.pypy.org Sat Nov 19 09:25:11 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 09:25:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: start working on broadcasting - a helper function Message-ID: <20111119082511.AA06C82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49546:21a78357a235 Date: 2011-11-19 10:24 +0200 http://bitbucket.org/pypy/pypy/changeset/21a78357a235/ Log: start working on broadcasting - a helper function diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -39,6 +39,39 @@ shape.append(size) batch = new_batch +def shape_agreement(space, shape1, shape2): + """ Checks agreement about two shapes with respect to broadcasting. Returns + the resulting shape. + """ + lshift = 0 + rshift = 0 + if len(shape1) > len(shape2): + m = len(shape1) + n = len(shape2) + rshift = len(shape2) - len(shape1) + remainder = shape1 + else: + m = len(shape2) + n = len(shape1) + lshift = len(shape1) - len(shape2) + remainder = shape2 + endshape = [0] * m + for i in range(m - 1, m - n - 1, -1): + left = shape1[i + lshift] + right = shape2[i + rshift] + if left == right: + endshape[i] = left + elif left == 1: + endshape[i] = right + elif right == 1: + endshape[i] = left + else: + raise OperationError(space.w_ValueError, space.wrap( + "frames are not aligned")) + for i in range(m - n): + endshape[i] = remainder[i] + return endshape + def descr_new_array(space, w_subtype, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): # find scalar diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,6 +1,9 @@ + +import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import NDimArray +from pypy.module.micronumpy.interp_numarray import NDimArray, shape_agreement from pypy.module.micronumpy import signature +from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace class MockDtype(object): @@ -142,6 +145,14 @@ r = s._index_of_single_item(self.space, self.newtuple(1, 1)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 1)) + def test_shape_agreement(self): + assert shape_agreement(self.space, [3], [3]) == [3] + assert shape_agreement(self.space, [1, 2, 3], [1, 2, 3]) == [1, 2, 3] + py.test.raises(OperationError, shape_agreement, self.space, [2], [3]) + assert shape_agreement(self.space, [4, 4], []) == [4, 4] + assert shape_agreement(self.space, [8, 1, 6, 1], [7, 1, 5]) == [8, 7, 6, 5] + assert shape_agreement(self.space, [5, 2], [4, 3, 5, 2]) == [4, 3, 5, 2] + class AppTestNumArray(BaseNumpyAppTest): def test_type(self): from numpy import array From noreply at buildbot.pypy.org Sat Nov 19 16:41:17 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 16:41:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: in-progress. Get this into some shape so we can run tests Message-ID: <20111119154117.725EE82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49547:638b988b580e Date: 2011-11-19 17:40 +0200 http://bitbucket.org/pypy/pypy/changeset/638b988b580e/ Log: in-progress. Get this into some shape so we can run tests diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -39,12 +39,19 @@ shape.append(size) batch = new_batch +class BroadcastDescription(object): + def __init__(self, shape, indices1, indices2): + self.shape = shape + self.indices1 = indices1 + self.indices2 = indices2 + def shape_agreement(space, shape1, shape2): """ Checks agreement about two shapes with respect to broadcasting. Returns the resulting shape. """ lshift = 0 rshift = 0 + adjustment = False if len(shape1) > len(shape2): m = len(shape1) n = len(shape2) @@ -56,21 +63,35 @@ lshift = len(shape1) - len(shape2) remainder = shape2 endshape = [0] * m + indices1 = [True] * m + indices2 = [True] * m for i in range(m - 1, m - n - 1, -1): left = shape1[i + lshift] right = shape2[i + rshift] if left == right: endshape[i] = left elif left == 1: + adjustment = True endshape[i] = right + indices1[i + lshift] = False elif right == 1: + adjustment = True endshape[i] = left + indices2[i + rshift] = False else: raise OperationError(space.w_ValueError, space.wrap( "frames are not aligned")) for i in range(m - n): + adjustment = True endshape[i] = remainder[i] + #if len(shape1) > len(shape2): + # xxx + #else: + # xxx + #if not adjustment: + # return None return endshape + return BroadcastDescription(endshape, indices1, indices2) def descr_new_array(space, w_subtype, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): @@ -105,7 +126,7 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) arr = NDimArray(size, shape[:], dtype=dtype, order=order) - arr_iter = arr.start_iter() + arr_iter = arr.start_iter(arr.shape) for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem_w(space, arr.storage, arr_iter.offset, w_elem) @@ -123,12 +144,13 @@ raise NotImplementedError class ArrayIterator(BaseIterator): - def __init__(self, size, offset=0): - self.offset = offset + def __init__(self, size): + self.offset = 0 self.size = size def next(self): - return ArrayIterator(self.size, self.offset + 1) + self.offset += 1 + return self def done(self): return self.offset >= self.size @@ -137,34 +159,25 @@ return self.offset class ViewIterator(BaseIterator): - def __init__(self, arr, offset=0, indices=None, done=False): - if indices is None: - self.indices = [0] * len(arr.shape) - self.offset = arr.start - else: - self.offset = offset - self.indices = indices - self.arr = arr - self._done = done + def __init__(self, arr): + self.indices = [0] * len(arr.shape) + self.offset = arr.start + self.arr = arr + self._done = False @jit.unroll_safe def next(self): - indices = [0] * len(self.arr.shape) - for i in range(len(self.arr.shape)): - indices[i] = self.indices[i] - done = False - offset = self.offset for i in range(len(self.arr.shape) -1, -1, -1): - if indices[i] < self.arr.shape[i] - 1: - indices[i] += 1 - offset += self.arr.shards[i] + if self.indices[i] < self.arr.shape[i] - 1: + self.indices[i] += 1 + self.offset += self.arr.shards[i] break else: - indices[i] = 0 - offset -= self.arr.backshards[i] + self.indices[i] = 0 + self.offset -= self.arr.backshards[i] else: - done = True - return ViewIterator(self.arr, offset, indices, done) + self._done = True + return self def done(self): return self._done @@ -172,13 +185,43 @@ def get_offset(self): return self.offset +class ResizingIterator(object): + def __init__(self, iter, shape, orig_indices): + self.shape = shape + self.indices = [0] * len(shape) + self.orig_indices = orig_indices + self.iter = iter + self._done = False + + @jit.unroll_safe + def next(self): + for i in range(len(self.shape) -1, -1, -1): + if self.indices[i] < self.shape[i] - 1: + self.indices[i] += 1 + if self.orig_indices[i]: + self.iter.next() + break + else: + self.indices[i] = 0 + else: + self._done = True + return self + + def get_offset(self): + return self.iter.get_offset() + + def done(self): + return self._done + class Call2Iterator(BaseIterator): def __init__(self, left, right): self.left = left self.right = right def next(self): - return Call2Iterator(self.left.next(), self.right.next()) + self.left.next() + self.right.next() + return self def done(self): return self.left.done() or self.right.done() @@ -193,7 +236,8 @@ self.child = child def next(self): - return Call1Iterator(self.child.next()) + self.child.next() + return self def done(self): return self.child.done() @@ -312,7 +356,7 @@ reduce_driver = jit.JitDriver(greens=['signature'], reds = ['i', 'result', 'self', 'cur_best', 'dtype']) def loop(self): - i = self.start_iter() + i = self.start_iter(self.shape) result = i.get_offset() cur_best = self.eval(i) i.next() @@ -339,7 +383,7 @@ def _all(self): dtype = self.find_dtype() - i = self.start_iter() + i = self.start_iter(self.shape) while not i.done(): all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, i=i) if not dtype.bool(self.eval(i)): @@ -351,7 +395,7 @@ def _any(self): dtype = self.find_dtype() - i = self.start_iter() + i = self.start_iter(self.shape) while not i.done(): any_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, i=i) @@ -403,7 +447,7 @@ res.append_slice(str(self_shape), 1, len(self_shape) - 1) res.append(')') else: - self.to_str(space, 1, res, indent=' ') + concrete.to_str(space, 1, res, indent=' ') if (dtype is not space.fromcache(interp_dtype.W_Float64Dtype) and dtype is not space.fromcache(interp_dtype.W_Int64Dtype)) or \ not self.find_size(): @@ -488,7 +532,8 @@ def descr_str(self, space): ret = StringBuilder() - self.to_str(space, 0, ret, ' ') + concrete = self.get_concrete() + concrete.to_str(space, 0, ret, ' ') return space.wrap(ret.build()) def _index_of_single_item(self, space, w_idx): @@ -633,12 +678,12 @@ except ValueError: pass return space.wrap(space.is_true(self.get_concrete().eval( - self.start_iter()).wrap(space))) + self.start_iter(self.shape)).wrap(space))) def getitem(self, item): raise NotImplementedError - def start_iter(self): + def start_iter(self, res_shape=None): raise NotImplementedError def compute_index(self, space, offset): @@ -697,7 +742,7 @@ def eval(self, iter): return self.value - def start_iter(self): + def start_iter(self, res_shape=None): return ConstantIterator() def to_str(self, space, comma, builder, indent=' '): @@ -787,10 +832,10 @@ assert isinstance(call_sig, signature.Call1) return call_sig.func(self.res_dtype, val) - def start_iter(self): + def start_iter(self, res_shape=None): if self.forced_result is not None: - return self.forced_result.start_iter() - return Call1Iterator(self.values.start_iter()) + return self.forced_result.start_iter(res_shape) + return Call1Iterator(self.values.start_iter(res_shape)) class Call2(VirtualArray): """ @@ -814,10 +859,13 @@ pass return self.right.find_size() - def start_iter(self): + def start_iter(self, res_shape=None): if self.forced_result is not None: - return self.forced_result.start_iter() - return Call2Iterator(self.left.start_iter(), self.right.start_iter()) + return self.forced_result.start_iter(res_shape) + if res_shape is None: + res_shape = self.shape # we still force the shape on children + return Call2Iterator(self.left.start_iter(res_shape), + self.right.start_iter(res_shape)) def _eval(self, iter): assert isinstance(iter, Call2Iterator) @@ -895,15 +943,12 @@ return self.parent.find_dtype() def setslice(self, space, w_value): - if isinstance(w_value, NDimArray): - if self.shape != w_value.shape: - raise OperationError(space.w_TypeError, space.wrap( - "wrong assignment")) - self._sliceloop(w_value) + res_shape = shape_agreement(space, self.shape, w_value.shape) + self._sliceloop(w_value, res_shape) - def _sliceloop(self, source): - source_iter = source.start_iter() - res_iter = self.start_iter() + def _sliceloop(self, source, res_shape): + source_iter = source.start_iter(res_shape) + res_iter = self.start_iter(res_shape) while not res_iter.done(): slice_driver.jit_merge_point(signature=source.signature, self=self, source=source, @@ -914,8 +959,11 @@ source_iter = source_iter.next() res_iter = res_iter.next() - def start_iter(self, offset=0, indices=None): - return ViewIterator(self, offset=offset, indices=indices) + def start_iter(self, res_shape=None): + if res_shape is not None and res_shape != self.shape: + raise NotImplementedError # xxx + #return ResizingIterator(ViewIterator(self), res_shape, orig_indices) + return ViewIterator(self) def setitem(self, item, value): self.parent.setitem(item, value) @@ -967,9 +1015,11 @@ self.invalidated() self.dtype.setitem(self.storage, item, value) - def start_iter(self, offset=0, indices=None): + def start_iter(self, res_shape=None): if self.order == 'C': - return ArrayIterator(self.size, offset=offset) + if res_shape is not None and res_shape != self.shape: + raise NotImplementedError # xxx + return ArrayIterator(self.size) raise NotImplementedError # use ViewIterator simply, test it def __del__(self): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -56,7 +56,7 @@ space, obj.find_dtype(), promote_to_largest=True ) - start = obj.start_iter() + start = obj.start_iter(obj.shape) if self.identity is None: if size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " @@ -123,7 +123,7 @@ def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, - convert_to_array, Scalar) + convert_to_array, Scalar, shape_agreement) [w_lhs, w_rhs] = args_w w_lhs = convert_to_array(space, w_lhs) @@ -146,7 +146,8 @@ new_sig = signature.Signature.find_sig([ self.signature, w_lhs.signature, w_rhs.signature ]) - w_res = Call2(new_sig, w_lhs.shape or w_rhs.shape, calc_dtype, + new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) + w_res = Call2(new_sig, new_shape, calc_dtype, res_dtype, w_lhs, w_rhs) w_lhs.add_invalidates(w_res) w_rhs.add_invalidates(w_res) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -843,8 +843,17 @@ c = b + b assert c[1][1] == 12 - def test_broadcast(self): - skip("not working") + def test_broadcast_ufunc(self): + from numpy import array + a = array([[1, 2], [3, 4], [5, 6]]) + b = array([5, 6]) + #print a + b + c = ((a + b) == [[1+5, 2+6], [3+5, 4+6], [5+5, 6+6]]) + print c + print c.all() + assert c.all() + + def test_broadcast_setslice(self): import numpy a = numpy.zeros((100, 100)) b = numpy.ones(100) From noreply at buildbot.pypy.org Sat Nov 19 16:45:17 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 16:45:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: promote shape length Message-ID: <20111119154517.1137E82A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49548:fede8d91e0a4 Date: 2011-11-19 17:44 +0200 http://bitbucket.org/pypy/pypy/changeset/fede8d91e0a4/ Log: promote shape length diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -164,10 +164,12 @@ self.offset = arr.start self.arr = arr self._done = False + self.shape_len = len(arr.shape) @jit.unroll_safe def next(self): - for i in range(len(self.arr.shape) -1, -1, -1): + shape_len = jit.promote(self.shape_len) + for i in range(shape_len - 1, -1, -1): if self.indices[i] < self.arr.shape[i] - 1: self.indices[i] += 1 self.offset += self.arr.shards[i] From noreply at buildbot.pypy.org Sat Nov 19 17:05:31 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 19 Nov 2011 17:05:31 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: merge default Message-ID: <20111119160531.9998982A9D@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49549:6717f12a3702 Date: 2011-11-19 18:05 +0200 http://bitbucket.org/pypy/pypy/changeset/6717f12a3702/ Log: merge default diff too long, truncating to 10000 out of 10678 lines diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. -Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. - Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. +clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: @@ -411,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -92,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -113,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,44 @@ +===================== +PyPy 1.7 +===================== + +Highlights +========== + +* numerous performance improvements, PyPy 1.7 is xxx faster than 1.6 + +* numerous bugfixes, compatibility fixes + +* windows fixes + +* stackless and JIT integration + +* numpy progress - dtypes, numpy -> numpypy renaming + +* brand new JSON encoder + +* improved memory footprint on heavy users of C APIs example - tornado + +* cpyext progress + +Things that didn't make it, expect in 1.8 soon +============================================== + +* list strategies + +* multi-dimensional arrays for numpy + +* ARM backend + +* PPC backend + +Things we're working on with unclear ETA +======================================== + +* windows 64 (?) + +* Py3k + +* SSE for numpy + +* specialized objects diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -777,22 +777,63 @@ """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" w_iterator = self.iter(w_iterable) - # If we know the expected length we can preallocate. if expected_length == -1: + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterator, GeneratorIterator): + lst_w = [] + w_iterator.unpack_into(lst_w) + return lst_w + # /xxx + return self._unpackiterable_unknown_length(w_iterator, w_iterable) + else: + lst_w = self._unpackiterable_known_length(w_iterator, + expected_length) + return lst_w[:] # make the resulting list resizable + + @jit.dont_look_inside + def _unpackiterable_unknown_length(self, w_iterator, w_iterable): + # Unpack a variable-size list of unknown length. + # The JIT does not look inside this function because it + # contains a loop (made explicit with the decorator above). + # + # If we can guess the expected length we can preallocate. + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: try: - lgt_estimate = self.len_w(w_iterable) - except OperationError, o: - if (not o.match(self, self.w_AttributeError) and - not o.match(self, self.w_TypeError)): + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied + # + while True: + try: + w_item = self.next(w_iterator) + except OperationError, e: + if not e.match(self, self.w_StopIteration): raise - items = [] - else: - try: - items = newlist(lgt_estimate) - except MemoryError: - items = [] # it might have lied - else: - items = [None] * expected_length + break # done + items.append(w_item) + # + return items + + @jit.dont_look_inside + def _unpackiterable_known_length(self, w_iterator, expected_length): + # Unpack a known length list, without letting the JIT look inside. + # Implemented by just calling the @jit.unroll_safe version, but + # the JIT stopped looking inside already. + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) + + @jit.unroll_safe + def _unpackiterable_known_length_jitlook(self, w_iterator, + expected_length): + items = [None] * expected_length idx = 0 while True: try: @@ -801,26 +842,29 @@ if not e.match(self, self.w_StopIteration): raise break # done - if expected_length != -1 and idx == expected_length: + if idx == expected_length: raise OperationError(self.w_ValueError, - self.wrap("too many values to unpack")) - if expected_length == -1: - items.append(w_item) - else: - items[idx] = w_item + self.wrap("too many values to unpack")) + items[idx] = w_item idx += 1 - if expected_length != -1 and idx < expected_length: + if idx < expected_length: if idx == 1: plural = "" else: plural = "s" - raise OperationError(self.w_ValueError, - self.wrap("need more than %d value%s to unpack" % - (idx, plural))) + raise operationerrfmt(self.w_ValueError, + "need more than %d value%s to unpack", + idx, plural) return items - unpackiterable_unroll = jit.unroll_safe(func_with_new_name(unpackiterable, - 'unpackiterable_unroll')) + def unpackiterable_unroll(self, w_iterable, expected_length): + # Like unpackiterable(), but for the cases where we have + # an expected_length and want to unroll when JITted. + # Returns a fixed-size list. + w_iterator = self.iter(w_iterable) + assert expected_length != -1 + return self._unpackiterable_known_length_jitlook(w_iterator, + expected_length) def fixedview(self, w_iterable, expected_length=-1): """ A fixed list view of w_iterable. Don't modify the result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -8,7 +8,7 @@ class GeneratorIterator(Wrappable): "An iterator created by a generator." _immutable_fields_ = ['pycode'] - + def __init__(self, frame): self.space = frame.space self.frame = frame # turned into None when frame_finished_execution @@ -81,7 +81,7 @@ # if the frame is now marked as finished, it was RETURNed from if frame.frame_finished_execution: self.frame = None - raise OperationError(space.w_StopIteration, space.w_None) + raise OperationError(space.w_StopIteration, space.w_None) else: return w_result # YIELDed finally: @@ -97,21 +97,21 @@ def throw(self, w_type, w_val, w_tb): from pypy.interpreter.pytraceback import check_traceback space = self.space - + msg = "throw() third argument must be a traceback object" if space.is_w(w_tb, space.w_None): tb = None else: tb = check_traceback(space, w_tb, msg) - + operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) return self.send_ex(space.w_None, operr) - + def descr_next(self): """x.next() -> the next value, or raise StopIteration""" return self.send_ex(self.space.w_None) - + def descr_close(self): """x.close(arg) -> raise GeneratorExit inside generator.""" assert isinstance(self, GeneratorIterator) @@ -124,7 +124,7 @@ e.match(space, space.w_GeneratorExit): return space.w_None raise - + if w_retval is not None: msg = "generator ignored GeneratorExit" raise OperationError(space.w_RuntimeError, space.wrap(msg)) @@ -155,3 +155,39 @@ "interrupting generator of ") break block = block.previous + + def unpack_into(self, results_w): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results_w=results_w, + pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results_w.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + +jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results_w']) diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! + cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -117,7 +117,7 @@ g = f() raises(NameError, g.throw, NameError, "Error", None) - + def test_throw_fail(self): def f(): yield 1 @@ -129,7 +129,7 @@ yield 1 g = f() raises(TypeError, g.throw, list()) - + def test_throw_fail3(self): def f(): yield 1 @@ -188,7 +188,7 @@ g = f() g.next() raises(NameError, g.close) - + def test_close_fail(self): def f(): try: @@ -267,3 +267,15 @@ assert r.startswith("' % (self._clsname, self.name, self.offset) +class DynamicFieldDescr(BaseFieldDescr): + def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): + self.offset = offset + self._fieldsize = fieldsize + self._is_pointer_field = is_pointer + self._is_float_field = is_float + self._is_field_signed = is_signed + + def get_field_size(self, translate_support_code): + return self._fieldsize class NonGcPtrFieldDescr(BaseFieldDescr): _clsname = 'NonGcPtrFieldDescr' @@ -182,6 +192,7 @@ def repr_of_descr(self): return '<%s>' % self._clsname + class NonGcPtrArrayDescr(BaseArrayDescr): _clsname = 'NonGcPtrArrayDescr' def get_item_size(self, translate_support_code): @@ -211,6 +222,13 @@ def get_ofs_length(self, translate_support_code): return -1 +class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): + def __init__(self, itemsize): + self.itemsize = itemsize + + def get_item_size(self, translate_support_code): + return self.itemsize + class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): _clsname = 'NonGcPtrArrayNoLengthDescr' def get_item_size(self, translate_support_code): @@ -305,12 +323,16 @@ _clsname = '' loop_token = None arg_classes = '' # <-- annotation hack - ffi_flags = 0 + ffi_flags = 1 - def __init__(self, arg_classes, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): self.arg_classes = arg_classes # string of "r" and "i" (ref/int) self.extrainfo = extrainfo self.ffi_flags = ffi_flags + # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which + # makes sense on Windows as it's the one for all the C functions + # we are compiling together with the JIT. On non-Windows platforms + # it is just ignored anyway. def __repr__(self): res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) @@ -351,6 +373,10 @@ return False # unless overridden def create_call_stub(self, rtyper, RESULT): + from pypy.rlib.clibffi import FFI_DEFAULT_ABI + assert self.get_call_conv() == FFI_DEFAULT_ABI, ( + "%r: create_call_stub() with a non-default call ABI" % (self,)) + def process(c): if c == 'L': assert longlong.supports_longlong @@ -445,7 +471,7 @@ """ _clsname = 'DynamicIntCallDescr' - def __init__(self, arg_classes, result_size, result_sign, extrainfo=None, ffi_flags=0): + def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) assert isinstance(result_sign, bool) self._result_size = chr(result_size) diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -8,7 +8,7 @@ class UnsupportedKind(Exception): pass -def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo=None, ffi_flags=0): +def get_call_descr_dynamic(cpu, ffi_args, ffi_result, extrainfo, ffi_flags): """Get a call descr: the types of result and args are represented by rlib.libffi.types.*""" try: diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -9,9 +9,10 @@ from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD, unroll_basic_sizes from pypy.jit.backend.llsupport.descr import (get_size_descr, - get_field_descr, BaseFieldDescr, get_array_descr, BaseArrayDescr, - get_call_descr, BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, - VoidCallDescr, InteriorFieldDescr, get_interiorfield_descr) + get_field_descr, BaseFieldDescr, DynamicFieldDescr, get_array_descr, + BaseArrayDescr, DynamicArrayNoLengthDescr, get_call_descr, + BaseIntCallDescr, GcPtrCallDescr, FloatCallDescr, VoidCallDescr, + InteriorFieldDescr, get_interiorfield_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -238,6 +239,12 @@ def interiorfielddescrof(self, A, fieldname): return get_interiorfield_descr(self.gc_ll_descr, A, A.OF, fieldname) + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = DynamicArrayNoLengthDescr(width) + fielddescr = DynamicFieldDescr(offset, fieldsize, is_pointer, is_float, is_signed) + return InteriorFieldDescr(arraydescr, fielddescr) + def unpack_arraydescr(self, arraydescr): assert isinstance(arraydescr, BaseArrayDescr) return arraydescr.get_base_size(self.translate_support_code) diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -13,44 +13,46 @@ def test_call_descr_dynamic(): args = [types.sint, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, ffi_flags=42) + descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, + ffi_flags=42) assert isinstance(descr, DynamicIntCallDescr) assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 args = [types.sint, types.double, types.pointer] - descr = get_call_descr_dynamic(FakeCPU(), args, types.void) + descr = get_call_descr_dynamic(FakeCPU(), args, types.void, None, 42) assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), - args, types.void, ffi_flags=43) + args, types.void, None, ffi_flags=43) assert isinstance(descr, VoidCallDescr) assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 - descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == True - descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8) + descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) assert isinstance(descr, DynamicIntCallDescr) assert descr.get_result_size(False) == 1 assert descr.is_result_signed() == False if not is_64_bit: - descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong) + descr = get_call_descr_dynamic(FakeCPU(), [], types.slonglong, + None, 42) assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), - [], types.slonglong, ffi_flags=43) + [], types.slonglong, None, ffi_flags=43) assert isinstance(descr, LongLongCallDescr) assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong - descr = get_call_descr_dynamic(FakeCPU(), [], types.float) + descr = get_call_descr_dynamic(FakeCPU(), [], types.float, None, 42) assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), - [], types.float, ffi_flags=44) + [], types.float, None, ffi_flags=44) SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) assert isinstance(descr, SingleFloatCallDescr) assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -183,38 +183,35 @@ lst[n] = None self.fail_descr_free_list.extend(faildescr_indices) - @staticmethod - def sizeof(S): + def sizeof(self, S): raise NotImplementedError - @staticmethod - def fielddescrof(S, fieldname): + def fielddescrof(self, S, fieldname): """Return the Descr corresponding to field 'fieldname' on the structure 'S'. It is important that this function (at least) caches the results.""" raise NotImplementedError - @staticmethod - def arraydescrof(A): + def interiorfielddescrof(self, A, fieldname): raise NotImplementedError - @staticmethod - def calldescrof(FUNC, ARGS, RESULT): + def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, + is_float, is_signed): + raise NotImplementedError + + def arraydescrof(self, A): + raise NotImplementedError + + def calldescrof(self, FUNC, ARGS, RESULT): # FUNC is the original function type, but ARGS is a list of types # with Voids removed raise NotImplementedError - @staticmethod - def methdescrof(SELFTYPE, methname): + def methdescrof(self, SELFTYPE, methname): # must return a subclass of history.AbstractMethDescr raise NotImplementedError - @staticmethod - def typedescrof(TYPE): - raise NotImplementedError - - @staticmethod - def interiorfielddescrof(A, fieldname): + def typedescrof(self, TYPE): raise NotImplementedError # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -495,9 +495,9 @@ if pytest.config.option.backend == 'llgraph': from pypy.jit.backend.llgraph.runner import LLtypeCPU return LLtypeCPU(None) - elif pytest.config.option.backend == 'x86': - from pypy.jit.backend.x86.runner import CPU386 - return CPU386(None, None) + elif pytest.config.option.backend == 'cpu': + from pypy.jit.backend.detect_cpu import getcpuclass + return getcpuclass()(None, None) else: assert 0, "unknown backend %r" % pytest.config.option.backend diff --git a/pypy/jit/backend/x86/test/test_zll_random.py b/pypy/jit/backend/test/test_zll_stress.py rename from pypy/jit/backend/x86/test/test_zll_random.py rename to pypy/jit/backend/test/test_zll_stress.py diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.jit.backend.model import CompiledLoopToken -from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, - _get_scale, gpr_reg_mgr_cls) +from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, + gpr_reg_mgr_cls, _valid_addressing_size) from pypy.jit.backend.x86.arch import (FRAME_FIXED_SIZE, FORCE_INDEX_OFS, WORD, IS_X86_32, IS_X86_64) @@ -1601,10 +1601,13 @@ assert isinstance(itemsize_loc, ImmedLoc) if isinstance(index_loc, ImmedLoc): temp_loc = imm(index_loc.value * itemsize_loc.value) + elif _valid_addressing_size(itemsize_loc.value): + return AddressLoc(base_loc, index_loc, _get_scale(itemsize_loc.value), ofs_loc.value) else: - # XXX should not use IMUL in most cases + # XXX should not use IMUL in more cases, it can use a clever LEA assert isinstance(temp_loc, RegLoc) assert isinstance(index_loc, RegLoc) + assert not temp_loc.is_xmm self.mc.IMUL_rri(temp_loc.value, index_loc.value, itemsize_loc.value) assert isinstance(ofs_loc, ImmedLoc) @@ -1612,12 +1615,14 @@ def genop_getinteriorfield_gc(self, op, arglocs, resloc): (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, - index_loc, sign_loc) = arglocs - src_addr = self._get_interiorfield_addr(resloc, index_loc, + index_loc, temp_loc, sign_loc) = arglocs + src_addr = self._get_interiorfield_addr(temp_loc, index_loc, itemsize_loc, base_loc, ofs_loc) self.load_from_mem(resloc, src_addr, fieldsize_loc, sign_loc) + genop_getinteriorfield_raw = genop_getinteriorfield_gc + def genop_discard_setfield_gc(self, op, arglocs): base_loc, ofs_loc, size_loc, value_loc = arglocs @@ -1633,6 +1638,8 @@ ofs_loc) self.save_into_mem(dest_addr, value_loc, fieldsize_loc) + genop_discard_setinteriorfield_raw = genop_discard_setinteriorfield_gc + def genop_discard_setarrayitem_gc(self, op, arglocs): base_loc, ofs_loc, value_loc, size_loc, baseofs = arglocs assert isinstance(baseofs, ImmedLoc) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1067,6 +1067,8 @@ self.PerformDiscard(op, [base_loc, ofs, itemsize, fieldsize, index_loc, temp_loc, value_loc]) + consider_setinteriorfield_raw = consider_setinteriorfield_gc + def consider_strsetitem(self, op): args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) @@ -1143,9 +1145,22 @@ # 'index' but must be in a different register than 'base'. self.rm.possibly_free_var(op.getarg(1)) result_loc = self.force_allocate_reg(op.result, [op.getarg(0)]) + assert isinstance(result_loc, RegLoc) + # two cases: 1) if result_loc is a normal register, use it as temp_loc + if not result_loc.is_xmm: + temp_loc = result_loc + else: + # 2) if result_loc is an xmm register, we (likely) need another + # temp_loc that is a normal register. It can be in the same + # register as 'index' but not 'base'. + tempvar = TempBox() + temp_loc = self.rm.force_allocate_reg(tempvar, [op.getarg(0)]) + self.rm.possibly_free_var(tempvar) self.rm.possibly_free_var(op.getarg(0)) self.Perform(op, [base_loc, ofs, itemsize, fieldsize, - index_loc, sign_loc], result_loc) + index_loc, temp_loc, sign_loc], result_loc) + + consider_getinteriorfield_raw = consider_getinteriorfield_gc def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register @@ -1419,8 +1434,11 @@ # i.e. the n'th word beyond the fixed frame size. return -WORD * (FRAME_FIXED_SIZE + position) +def _valid_addressing_size(size): + return size == 1 or size == 2 or size == 4 or size == 8 + def _get_scale(size): - assert size == 1 or size == 2 or size == 4 or size == 8 + assert _valid_addressing_size(size) if size < 4: return size - 1 # 1, 2 => 0, 1 else: diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -17,7 +17,7 @@ class AssemblerLocation(object): # XXX: Is adding "width" here correct? - __slots__ = ('value', 'width') + _attrs_ = ('value', 'width', '_location_code') _immutable_ = True def _getregkey(self): return self.value @@ -25,6 +25,9 @@ def is_memory_reference(self): return self.location_code() in ('b', 's', 'j', 'a', 'm') + def location_code(self): + return self._location_code + def value_r(self): return self.value def value_b(self): return self.value def value_s(self): return self.value @@ -38,6 +41,8 @@ class StackLoc(AssemblerLocation): _immutable_ = True + _location_code = 'b' + def __init__(self, position, ebp_offset, num_words, type): assert ebp_offset < 0 # so no confusion with RegLoc.value self.position = position @@ -49,9 +54,6 @@ def __repr__(self): return '%d(%%ebp)' % (self.value,) - def location_code(self): - return 'b' - def assembler(self): return repr(self) @@ -63,8 +65,10 @@ self.is_xmm = is_xmm if self.is_xmm: self.width = 8 + self._location_code = 'x' else: self.width = WORD + self._location_code = 'r' def __repr__(self): if self.is_xmm: return rx86.R.xmmnames[self.value] @@ -79,12 +83,6 @@ assert not self.is_xmm return RegLoc(rx86.high_byte(self.value), False) - def location_code(self): - if self.is_xmm: - return 'x' - else: - return 'r' - def assembler(self): return '%' + repr(self) @@ -97,14 +95,13 @@ class ImmedLoc(AssemblerLocation): _immutable_ = True width = WORD + _location_code = 'i' + def __init__(self, value): from pypy.rpython.lltypesystem import rffi, lltype # force as a real int self.value = rffi.cast(lltype.Signed, value) - def location_code(self): - return 'i' - def getint(self): return self.value @@ -149,9 +146,6 @@ info = getattr(self, attr, '?') return '' % (self._location_code, info) - def location_code(self): - return self._location_code - def value_a(self): return self.loc_a @@ -191,6 +185,7 @@ # we want a width of 8 (... I think. Check this!) _immutable_ = True width = 8 + _location_code = 'j' def __init__(self, address): self.value = address @@ -198,9 +193,6 @@ def __repr__(self): return '' % (self.value,) - def location_code(self): - return 'j' - if IS_X86_32: class FloatImmedLoc(AssemblerLocation): # This stands for an immediate float. It cannot be directly used in @@ -209,6 +201,7 @@ # instead; see below. _immutable_ = True width = 8 + _location_code = '#' # don't use me def __init__(self, floatstorage): self.aslonglong = floatstorage @@ -229,9 +222,6 @@ floatvalue = longlong.getrealfloat(self.aslonglong) return '' % (floatvalue,) - def location_code(self): - raise NotImplementedError - if IS_X86_64: def FloatImmedLoc(floatstorage): from pypy.rlib.longlong2float import float2longlong @@ -270,6 +260,11 @@ else: raise AssertionError(methname + " undefined") +def _missing_binary_insn(name, code1, code2): + raise AssertionError(name + "_" + code1 + code2 + " missing") +_missing_binary_insn._dont_inline_ = True + + class LocationCodeBuilder(object): _mixin_ = True @@ -303,6 +298,8 @@ else: # For this case, we should not need the scratch register more than here. self._load_scratch(val2) + if name == 'MOV' and loc1 is X86_64_SCRATCH_REG: + return # don't need a dummy "MOV r11, r11" INSN(self, loc1, X86_64_SCRATCH_REG) def invoke(self, codes, val1, val2): @@ -310,6 +307,23 @@ _rx86_getattr(self, methname)(val1, val2) invoke._annspecialcase_ = 'specialize:arg(1)' + def has_implementation_for(loc1, loc2): + # A memo function that returns True if there is any NAME_xy that could match. + # If it returns False we know the whole subcase can be omitted from translated + # code. Without this hack, the size of most _binaryop INSN functions ends up + # quite large in C code. + if loc1 == '?': + return any([has_implementation_for(loc1, loc2) + for loc1 in unrolling_location_codes]) + methname = name + "_" + loc1 + loc2 + if not hasattr(rx86.AbstractX86CodeBuilder, methname): + return False + # any NAME_j should have a NAME_m as a fallback, too. Check it + if loc1 == 'j': assert has_implementation_for('m', loc2), methname + if loc2 == 'j': assert has_implementation_for(loc1, 'm'), methname + return True + has_implementation_for._annspecialcase_ = 'specialize:memo' + def INSN(self, loc1, loc2): code1 = loc1.location_code() code2 = loc2.location_code() @@ -325,6 +339,8 @@ assert code2 not in ('j', 'i') for possible_code2 in unrolling_location_codes: + if not has_implementation_for('?', possible_code2): + continue if code2 == possible_code2: val2 = getattr(loc2, "value_" + possible_code2)() # @@ -335,28 +351,32 @@ # # Regular case for possible_code1 in unrolling_location_codes: + if not has_implementation_for(possible_code1, + possible_code2): + continue if code1 == possible_code1: val1 = getattr(loc1, "value_" + possible_code1)() # More faking out of certain operations for x86_64 - if possible_code1 == 'j' and not rx86.fits_in_32bits(val1): + fits32 = rx86.fits_in_32bits + if possible_code1 == 'j' and not fits32(val1): val1 = self._addr_as_reg_offset(val1) invoke(self, "m" + possible_code2, val1, val2) - elif possible_code2 == 'j' and not rx86.fits_in_32bits(val2): + return + if possible_code2 == 'j' and not fits32(val2): val2 = self._addr_as_reg_offset(val2) invoke(self, possible_code1 + "m", val1, val2) - elif possible_code1 == 'm' and not rx86.fits_in_32bits(val1[1]): + return + if possible_code1 == 'm' and not fits32(val1[1]): val1 = self._fix_static_offset_64_m(val1) - invoke(self, "a" + possible_code2, val1, val2) - elif possible_code2 == 'm' and not rx86.fits_in_32bits(val2[1]): + if possible_code2 == 'm' and not fits32(val2[1]): val2 = self._fix_static_offset_64_m(val2) - invoke(self, possible_code1 + "a", val1, val2) - else: - if possible_code1 == 'a' and not rx86.fits_in_32bits(val1[3]): - val1 = self._fix_static_offset_64_a(val1) - if possible_code2 == 'a' and not rx86.fits_in_32bits(val2[3]): - val2 = self._fix_static_offset_64_a(val2) - invoke(self, possible_code1 + possible_code2, val1, val2) + if possible_code1 == 'a' and not fits32(val1[3]): + val1 = self._fix_static_offset_64_a(val1) + if possible_code2 == 'a' and not fits32(val2[3]): + val2 = self._fix_static_offset_64_a(val2) + invoke(self, possible_code1 + possible_code2, val1, val2) return + _missing_binary_insn(name, code1, code2) return func_with_new_name(INSN, "INSN_" + name) @@ -431,12 +451,14 @@ def _fix_static_offset_64_m(self, (basereg, static_offset)): # For cases where an AddressLoc has the location_code 'm', but # where the static offset does not fit in 32-bits. We have to fall - # back to the X86_64_SCRATCH_REG. Note that this returns a location - # encoded as mode 'a'. These are all possibly rare cases; don't try + # back to the X86_64_SCRATCH_REG. Returns a new location encoded + # as mode 'm' too. These are all possibly rare cases; don't try # to reuse a past value of the scratch register at all. self._scratch_register_known = False self.MOV_ri(X86_64_SCRATCH_REG.value, static_offset) - return (basereg, X86_64_SCRATCH_REG.value, 0, 0) + self.LEA_ra(X86_64_SCRATCH_REG.value, + (basereg, X86_64_SCRATCH_REG.value, 0, 0)) + return (X86_64_SCRATCH_REG.value, 0) def _fix_static_offset_64_a(self, (basereg, scalereg, scale, static_offset)): diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -745,6 +745,7 @@ assert insnname_template.count('*') == 1 add_insn('x', register(2), '\xC0') add_insn('j', abs_, immediate(2)) + add_insn('m', mem_reg_plus_const(2)) define_pxmm_insn('PADDQ_x*', '\xD4') define_pxmm_insn('PSUBQ_x*', '\xFB') diff --git a/pypy/jit/backend/x86/test/test_fficall.py b/pypy/jit/backend/x86/test/test_fficall.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_fficall.py @@ -0,0 +1,8 @@ +import py +from pypy.jit.metainterp.test import test_fficall +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin + +class TestFfiLookups(Jit386Mixin, test_fficall.FfiLookupTests): + # for the individual tests see + # ====> ../../../metainterp/test/test_fficall.py + supports_all = True diff --git a/pypy/jit/backend/x86/test/test_regloc.py b/pypy/jit/backend/x86/test/test_regloc.py --- a/pypy/jit/backend/x86/test/test_regloc.py +++ b/pypy/jit/backend/x86/test/test_regloc.py @@ -146,8 +146,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov rcx, [rdx+r11] - '\x4A\x8B\x0C\x1A' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov rcx, [r11] + '\x49\x8B\x0B' ) assert cb.getvalue() == expected_instructions @@ -174,6 +176,30 @@ # ------------------------------------------------------------ + def test_MOV_64bit_constant_into_r11(self): + base_constant = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, imm(base_constant)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + ) + assert cb.getvalue() == expected_instructions + + def test_MOV_64bit_address_into_r11(self): + base_addr = 0xFEDCBA9876543210 + cb = LocationCodeBuilder64() + cb.MOV(r11, heap(base_addr)) + + expected_instructions = ( + # mov r11, 0xFEDCBA9876543210 + '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' + + # mov r11, [r11] + '\x4D\x8B\x1B' + ) + assert cb.getvalue() == expected_instructions + def test_MOV_immed32_into_64bit_address_1(self): immed = -0x01234567 base_addr = 0xFEDCBA9876543210 @@ -217,8 +243,10 @@ expected_instructions = ( # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rdx+r11], -0x01234567 - '\x4A\xC7\x04\x1A\x99\xBA\xDC\xFE' + # lea r11, [rdx+r11] + '\x4E\x8D\x1C\x1A' + # mov [r11], -0x01234567 + '\x49\xC7\x03\x99\xBA\xDC\xFE' ) assert cb.getvalue() == expected_instructions @@ -300,8 +328,10 @@ '\x48\xBA\xEF\xCD\xAB\x89\x67\x45\x23\x01' # mov r11, 0xFEDCBA9876543210 '\x49\xBB\x10\x32\x54\x76\x98\xBA\xDC\xFE' - # mov [rax+r11], rdx - '\x4A\x89\x14\x18' + # lea r11, [rax+r11] + '\x4E\x8D\x1C\x18' + # mov [r11], rdx + '\x49\x89\x13' # pop rdx '\x5A' ) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -455,6 +455,9 @@ EffectInfo.MOST_GENERAL, ffi_flags=-1) calldescr.get_call_conv = lambda: ffi # <==== hack + # ^^^ we patch get_call_conv() so that the test also makes sense + # on Linux, because clibffi.get_call_conv() would always + # return FFI_DEFAULT_ABI on non-Windows platforms. funcbox = ConstInt(rawstart) i1 = BoxInt() i2 = BoxInt() diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -58,7 +58,7 @@ assert not p.returncode, ('Encountered an error running objdump: %s' % stderr) # drop some objdump cruft - lines = stdout.splitlines()[6:] + lines = stdout.splitlines(True)[6:] # drop some objdump cruft return format_code_dump_with_labels(originaddr, lines, label_list) def format_code_dump_with_labels(originaddr, lines, label_list): @@ -97,7 +97,7 @@ stdout, stderr = p.communicate() assert not p.returncode, ('Encountered an error running nm: %s' % stderr) - for line in stdout.splitlines(): + for line in stdout.splitlines(True): match = re_symbolentry.match(line) if match: addr = long(match.group(1), 16) diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -212,7 +212,10 @@ elidable = False loopinvariant = False if op.opname == "direct_call": - func = getattr(get_funcobj(op.args[0].value), '_callable', None) + funcobj = get_funcobj(op.args[0].value) + assert getattr(funcobj, 'calling_conv', 'c') == 'c', ( + "%r: getcalldescr() with a non-default call ABI" % (op,)) + func = getattr(funcobj, '_callable', None) elidable = getattr(func, "_elidable_function_", False) loopinvariant = getattr(func, "_jit_loop_invariant_", False) if loopinvariant: diff --git a/pypy/jit/codewriter/codewriter.py b/pypy/jit/codewriter/codewriter.py --- a/pypy/jit/codewriter/codewriter.py +++ b/pypy/jit/codewriter/codewriter.py @@ -104,6 +104,8 @@ else: name = 'unnamed' % id(ssarepr) i = 1 + # escape names for windows + name = name.replace('', '_(lambda)_') extra = '' while name+extra in self._seen_files: i += 1 diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,6 +48,8 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 + OS_LIBFFI_GETARRAYITEM = 63 + OS_LIBFFI_SETARRAYITEM = 64 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 @@ -78,6 +80,9 @@ # OS_MATH_SQRT = 100 + # for debugging: + _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) + def __new__(cls, readonly_descrs_fields, readonly_descrs_arrays, write_descrs_fields, write_descrs_arrays, extraeffect=EF_CAN_RAISE, @@ -116,6 +121,8 @@ result.extraeffect = extraeffect result.can_invalidate = can_invalidate result.oopspecindex = oopspecindex + if result.check_can_raise(): + assert oopspecindex in cls._OS_CANRAISE cls._cache[key] = result return result @@ -125,6 +132,10 @@ def check_can_invalidate(self): return self.can_invalidate + def check_is_elidable(self): + return (self.extraeffect == self.EF_ELIDABLE_CAN_RAISE or + self.extraeffect == self.EF_ELIDABLE_CANNOT_RAISE) + def check_forces_virtual_or_virtualizable(self): return self.extraeffect >= self.EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1615,6 +1615,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_array_getitem': + oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_array_setitem': + oopspecindex = EffectInfo.OS_LIBFFI_SETARRAYITEM + extraeffect = EffectInfo.EF_CANNOT_RAISE else: assert False, 'unsupported oopspec: %s' % oopspec_name return self._handle_oopspec_call(op, args, oopspecindex, extraeffect) diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -37,9 +37,11 @@ return a.typeannotation(t) def annotate(func, values, inline=None, backendoptimize=True, - type_system="lltype"): + type_system="lltype", translationoptions={}): # build the normal ll graphs for ll_function t = TranslationContext() + for key, value in translationoptions.items(): + setattr(t.config.translation, key, value) annpolicy = AnnotatorPolicy() annpolicy.allow_someobjects = False a = t.buildannotator(policy=annpolicy) diff --git a/pypy/jit/codewriter/test/test_flatten.py b/pypy/jit/codewriter/test/test_flatten.py --- a/pypy/jit/codewriter/test/test_flatten.py +++ b/pypy/jit/codewriter/test/test_flatten.py @@ -5,7 +5,7 @@ from pypy.jit.codewriter.format import assert_format from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import AbstractDescr -from pypy.rpython.lltypesystem import lltype, rclass, rstr +from pypy.rpython.lltypesystem import lltype, rclass, rstr, rffi from pypy.objspace.flow.model import SpaceOperation, Variable, Constant from pypy.translator.unsimplify import varoftype from pypy.rlib.rarithmetic import ovfcheck, r_uint, r_longlong, r_ulonglong @@ -743,7 +743,6 @@ """, transform=True) def test_force_cast(self): - from pypy.rpython.lltypesystem import rffi # NB: we don't need to test for INT here, the logic in jtransform is # general enough so that if we have the below cases it should # generalize also to INT @@ -849,7 +848,6 @@ transform=True) def test_force_cast_pointer(self): - from pypy.rpython.lltypesystem import rffi def h(p): return rffi.cast(rffi.VOIDP, p) self.encoding_test(h, [lltype.nullptr(rffi.CCHARP.TO)], """ @@ -857,7 +855,6 @@ """, transform=True) def test_force_cast_floats(self): - from pypy.rpython.lltypesystem import rffi # Caststs to lltype.Float def f(n): return rffi.cast(lltype.Float, n) @@ -964,7 +961,6 @@ """, transform=True) def test_direct_ptradd(self): - from pypy.rpython.lltypesystem import rffi def f(p, n): return lltype.direct_ptradd(p, n) self.encoding_test(f, [lltype.nullptr(rffi.CCHARP.TO), 123], """ @@ -975,7 +971,6 @@ def check_force_cast(FROM, TO, operations, value): """Check that the test is correctly written...""" - from pypy.rpython.lltypesystem import rffi import re r = re.compile('(\w+) \%i\d, \$(-?\d+)') # diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -340,6 +340,8 @@ rop.DEBUG_MERGE_POINT, rop.JIT_DEBUG, rop.SETARRAYITEM_RAW, + rop.GETINTERIORFIELD_RAW, + rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, ): # list of opcodes never executed by pyjitpl diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,7 @@ def optimize_loop_1(metainterp_sd, loop, enable_opts, - inline_short_preamble=True, retraced=False, bridge=False): + inline_short_preamble=True, retraced=False): """Optimize loop.operations to remove internal overheadish operations. """ @@ -64,7 +64,7 @@ if unroll: optimize_unroll(metainterp_sd, loop, optimizations) else: - optimizer = Optimizer(metainterp_sd, loop, optimizations, bridge) + optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() def optimize_bridge_1(metainterp_sd, bridge, enable_opts, @@ -76,7 +76,7 @@ except KeyError: pass optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced, bridge=True) + inline_short_preamble, retraced) if __name__ == '__main__': print ALL_OPTS_NAMES diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -1,11 +1,13 @@ +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.rlib import clibffi, libffi +from pypy.rlib.debug import debug_print +from pypy.rlib.libffi import Func +from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rpython.lltypesystem import llmemory class FuncInfo(object): @@ -78,7 +80,7 @@ def new(self): return OptFfiCall() - + def begin_optimization(self, funcval, op): self.rollback_maybe('begin_optimization', op) self.funcinfo = FuncInfo(funcval, self.optimizer.cpu, op) @@ -116,6 +118,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or + oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): + ops = self.do_getsetarrayitem(op, oopspec) # for op in ops: self.emit_operation(op) @@ -190,6 +195,53 @@ ops.append(newop) return ops + def do_getsetarrayitem(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + widthval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(5)) + if not ffitypeval.is_constant() or not widthval.is_constant() or not offsetval.is_constant(): + return [op] + + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + width = widthval.box.getint() + descr = self._get_interior_descr(ffitype, width, offset) + + arglist = [ + self.getvalue(op.getarg(3)).force_box(self.optimizer), + self.getvalue(op.getarg(4)).force_box(self.optimizer), + ] + if oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM: + opnum = rop.GETINTERIORFIELD_RAW + elif oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM: + opnum = rop.SETINTERIORFIELD_RAW + arglist.append(self.getvalue(op.getarg(6)).force_box(self.optimizer)) + else: + assert False + return [ + ResOperation(opnum, arglist, op.result, descr=descr), + ] + + def _get_interior_descr(self, ffitype, width, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see + # e.g. llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = ffitype.c_size + return self.optimizer.cpu.interiorfielddescrof_dynamic( + offset, width, fieldsize, is_pointer, is_float, is_signed + ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -43,7 +43,7 @@ optheap.optimizer.ensure_imported(cached_fieldvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) - if cached_fieldvalue is not fieldvalue: + if not fieldvalue.same_value(cached_fieldvalue): # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list self._lazy_setfield = op @@ -140,6 +140,15 @@ getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) shortboxes.add_potential(getop, synthetic=True) + if op.getopnum() == rop.SETARRAYITEM_GC: + result = op.getarg(2) + if isinstance(result, Const): + newresult = result.clonebox() + optimizer.make_constant(newresult, result) + result = newresult + getop = ResOperation(rop.GETARRAYITEM_GC, [op.getarg(0), op.getarg(1)], + result, op.getdescr()) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) @@ -225,7 +234,7 @@ or op.is_ovf()): self.posponedop = op else: - self.next_optimization.propagate_forward(op) + Optimization.emit_operation(self, op) def emitting_operation(self, op): if op.has_no_side_effect(): diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -1,3 +1,4 @@ +import sys from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, CONST_1, CONST_0, \ MODE_ARRAY, MODE_STR, MODE_UNICODE from pypy.jit.metainterp.history import ConstInt @@ -5,36 +6,18 @@ IntUpperBound) from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.optimize import InvalidLoop +from pypy.rlib.rarithmetic import LONG_BIT class OptIntBounds(Optimization): """Keeps track of the bounds placed on integers by guards and remove redundant guards""" - def setup(self): - self.posponedop = None - self.nextop = None - def new(self): - assert self.posponedop is None return OptIntBounds() - - def flush(self): - assert self.posponedop is None - - def setup(self): - self.posponedop = None - self.nextop = None def propagate_forward(self, op): - if op.is_ovf(): - self.posponedop = op - return - if self.posponedop: - self.nextop = op - op = self.posponedop - self.posponedop = None - dispatch_opt(self, op) def opt_default(self, op): @@ -126,14 +109,29 @@ r.intbound.intersect(v1.intbound.div_bound(v2.intbound)) def optimize_INT_MOD(self, op): + v1 = self.getvalue(op.getarg(0)) + v2 = self.getvalue(op.getarg(1)) + known_nonneg = (v1.intbound.known_ge(IntBound(0, 0)) and + v2.intbound.known_ge(IntBound(0, 0))) + if known_nonneg and v2.is_constant(): + val = v2.box.getint() + if (val & (val-1)) == 0: + # nonneg % power-of-two ==> nonneg & (power-of-two - 1) + arg1 = op.getarg(0) + arg2 = ConstInt(val-1) + op = op.copy_and_change(rop.INT_AND, args=[arg1, arg2]) self.emit_operation(op) - v2 = self.getvalue(op.getarg(1)) if v2.is_constant(): val = v2.box.getint() r = self.getvalue(op.result) if val < 0: + if val == -sys.maxint-1: + return # give up val = -val - r.intbound.make_gt(IntBound(-val, -val)) + if known_nonneg: + r.intbound.make_ge(IntBound(0, 0)) + else: + r.intbound.make_gt(IntBound(-val, -val)) r.intbound.make_lt(IntBound(val, val)) def optimize_INT_LSHIFT(self, op): @@ -153,72 +151,84 @@ def optimize_INT_RSHIFT(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) + b = v1.intbound.rshift_bound(v2.intbound) + if b.has_lower and b.has_upper and b.lower == b.upper: + # constant result (likely 0, for rshifts that kill all bits) + self.make_constant_int(op.result, b.lower) + else: + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(b) + + def optimize_GUARD_NO_OVERFLOW(self, op): + lastop = self.last_emitted_operation + if lastop is not None: + opnum = lastop.getopnum() + args = lastop.getarglist() + result = lastop.result + # If the INT_xxx_OVF was replaced with INT_xxx, then we can kill + # the GUARD_NO_OVERFLOW. + if (opnum == rop.INT_ADD or + opnum == rop.INT_SUB or + opnum == rop.INT_MUL): + return + # Else, synthesize the non overflowing op for optimize_default to + # reuse, as well as the reverse op + elif opnum == rop.INT_ADD_OVF: + self.pure(rop.INT_ADD, args[:], result) + self.pure(rop.INT_SUB, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [result, args[0]], args[1]) + elif opnum == rop.INT_SUB_OVF: + self.pure(rop.INT_SUB, args[:], result) + self.pure(rop.INT_ADD, [result, args[1]], args[0]) + self.pure(rop.INT_SUB, [args[0], result], args[1]) + elif opnum == rop.INT_MUL_OVF: + self.pure(rop.INT_MUL, args[:], result) self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(v1.intbound.rshift_bound(v2.intbound)) + + def optimize_GUARD_OVERFLOW(self, op): + # If INT_xxx_OVF was replaced by INT_xxx, *but* we still see + # GUARD_OVERFLOW, then the loop is invalid. + lastop = self.last_emitted_operation + if lastop is None: + raise InvalidLoop + opnum = lastop.getopnum() + if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): + raise InvalidLoop + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.add_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_ADD and remove guard + if resbound.bounded(): + # Transform into INT_ADD. The following guard will be killed + # by optimize_GUARD_NO_OVERFLOW; if we see instead an + # optimize_GUARD_OVERFLOW, then InvalidLoop. op = op.copy_and_change(rop.INT_ADD) - self.optimize_INT_ADD(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_ADD, op.getarglist()[:], op.result) - # Synthesize the reverse op for optimize_default to reuse - self.pure(rop.INT_SUB, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.result, op.getarg(0)], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_SUB_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.sub_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_SUB and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_SUB) - self.optimize_INT_SUB(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_SUB, op.getarglist()[:], op.result) - # Synthesize the reverse ops for optimize_default to reuse - self.pure(rop.INT_ADD, [op.result, op.getarg(1)], op.getarg(0)) - self.pure(rop.INT_SUB, [op.getarg(0), op.result], op.getarg(1)) - + self.emit_operation(op) # emit the op + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_MUL_OVF(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) resbound = v1.intbound.mul_bound(v2.intbound) - if resbound.has_lower and resbound.has_upper and \ - self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Transform into INT_MUL and remove guard + if resbound.bounded(): op = op.copy_and_change(rop.INT_MUL) - self.optimize_INT_MUL(op) # emit the op - else: - self.emit_operation(op) - r = self.getvalue(op.result) - r.intbound.intersect(resbound) - self.emit_operation(self.nextop) - if self.nextop.getopnum() == rop.GUARD_NO_OVERFLOW: - # Synthesize the non overflowing op for optimize_default to reuse - self.pure(rop.INT_MUL, op.getarglist()[:], op.result) - + self.emit_operation(op) + r = self.getvalue(op.result) + r.intbound.intersect(resbound) def optimize_INT_LT(self, op): v1 = self.getvalue(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/intutils.py b/pypy/jit/metainterp/optimizeopt/intutils.py --- a/pypy/jit/metainterp/optimizeopt/intutils.py +++ b/pypy/jit/metainterp/optimizeopt/intutils.py @@ -1,4 +1,5 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT +from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxInt, ConstInt import sys @@ -13,6 +14,10 @@ self.has_lower = True self.upper = upper self.lower = lower + # check for unexpected overflows: + if not we_are_translated(): + assert type(upper) is not long + assert type(lower) is not long # Returns True if the bound was updated def make_le(self, other): @@ -169,10 +174,10 @@ other.known_ge(IntBound(0, 0)) and \ other.known_lt(IntBound(LONG_BIT, LONG_BIT)): try: - vals = (ovfcheck_lshift(self.upper, other.upper), - ovfcheck_lshift(self.upper, other.lower), - ovfcheck_lshift(self.lower, other.upper), - ovfcheck_lshift(self.lower, other.lower)) + vals = (ovfcheck(self.upper << other.upper), + ovfcheck(self.upper << other.lower), + ovfcheck(self.lower << other.upper), + ovfcheck(self.lower << other.lower)) return IntBound(min4(vals), max4(vals)) except (OverflowError, ValueError): return IntUnbounded() diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -1,12 +1,12 @@ from pypy.jit.metainterp import jitprof, resume, compile from pypy.jit.metainterp.executor import execute_nonspec -from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF +from pypy.jit.metainterp.history import BoxInt, BoxFloat, Const, ConstInt, REF, INT from pypy.jit.metainterp.optimizeopt.intutils import IntBound, IntUnbounded, \ ImmutableIntUnbounded, \ IntLowerBound, MININT, MAXINT from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, ResOperation, AbstractResOp from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -95,6 +95,10 @@ return guards def import_from(self, other, optimizer): + if self.level == LEVEL_CONSTANT: + assert other.level == LEVEL_CONSTANT + assert other.box.same_constant(self.box) + return assert self.level <= LEVEL_NONNULL if other.level == LEVEL_CONSTANT: self.make_constant(other.get_key_box()) @@ -141,6 +145,13 @@ return not box.nonnull() return False + def same_value(self, other): + if not other: + return False + if self.is_constant() and other.is_constant(): + return self.box.same_constant(other.box) + return self is other + def make_constant(self, constbox): """Replace 'self.box' with a Const box.""" assert isinstance(constbox, Const) @@ -236,9 +247,10 @@ CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) CVAL_ZERO_FLOAT = ConstantValue(Const._new(0.0)) -CVAL_UNINITIALIZED_ZERO = ConstantValue(CONST_0) llhelper.CVAL_NULLREF = ConstantValue(llhelper.CONST_NULL) oohelper.CVAL_NULLREF = ConstantValue(oohelper.CONST_NULL) +REMOVED = AbstractResOp(None) + class Optimization(object): next_optimization = None @@ -250,6 +262,7 @@ raise NotImplementedError def emit_operation(self, op): + self.last_emitted_operation = op self.next_optimization.propagate_forward(op) # FIXME: Move some of these here? @@ -317,20 +330,20 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + class Optimizer(Optimization): - def __init__(self, metainterp_sd, loop, optimizations=None, bridge=False): + def __init__(self, metainterp_sd, loop, optimizations=None): self.metainterp_sd = metainterp_sd self.cpu = metainterp_sd.cpu self.loop = loop - self.bridge = bridge self.values = {} self.interned_refs = self.cpu.ts.new_ref_dict() + self.interned_ints = {} self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.producer = {} self.pendingfields = [] - self.exception_might_have_happened = False self.quasi_immutable_deps = None self.opaque_pointers = {} self.replaces_guard = {} @@ -352,6 +365,7 @@ optimizations[-1].next_optimization = self for o in optimizations: o.optimizer = self + o.last_emitted_operation = None o.setup() else: optimizations = [] @@ -398,6 +412,9 @@ if not value: return box return self.interned_refs.setdefault(value, box) + #elif constbox.type == INT: + # value = constbox.getint() + # return self.interned_ints.setdefault(value, box) else: return box @@ -483,7 +500,6 @@ return CVAL_ZERO def propagate_all_forward(self): - self.exception_might_have_happened = self.bridge self.clear_newoperations() for op in self.loop.operations: self.first_optimization.propagate_forward(op) diff --git a/pypy/jit/metainterp/optimizeopt/pure.py b/pypy/jit/metainterp/optimizeopt/pure.py --- a/pypy/jit/metainterp/optimizeopt/pure.py +++ b/pypy/jit/metainterp/optimizeopt/pure.py @@ -1,4 +1,4 @@ -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization, REMOVED from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.optimizeopt.util import (make_dispatcher_method, args_dict) @@ -61,7 +61,10 @@ oldop = self.pure_operations.get(args, None) if oldop is not None and oldop.getdescr() is op.getdescr(): assert oldop.getopnum() == op.getopnum() + # this removes a CALL_PURE that has the same (non-constant) + # arguments as a previous CALL_PURE. self.make_equal_to(op.result, self.getvalue(oldop.result)) + self.last_emitted_operation = REMOVED return else: self.pure_operations[args] = op @@ -72,6 +75,13 @@ self.emit_operation(ResOperation(rop.CALL, args, op.result, op.getdescr())) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE that was killed; so we also kill the + # following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def flush(self): assert self.posponedop is None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -294,12 +294,6 @@ raise InvalidLoop self.optimize_GUARD_CLASS(op) - def optimize_GUARD_NO_EXCEPTION(self, op): - if not self.optimizer.exception_might_have_happened: - return - self.emit_operation(op) - self.optimizer.exception_might_have_happened = False - def optimize_CALL_LOOPINVARIANT(self, op): arg = op.getarg(0) # 'arg' must be a Const, because residual_call in codewriter @@ -310,6 +304,7 @@ resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) + self.last_emitted_operation = REMOVED return # change the op to be a normal call, from the backend's point of view # there is no reason to have a separate operation for this @@ -444,10 +439,19 @@ except KeyError: pass else: + # this removes a CALL_PURE with all constant arguments. self.make_constant(op.result, result) + self.last_emitted_operation = REMOVED return self.emit_operation(op) + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + # it was a CALL_PURE or a CALL_LOOPINVARIANT that was killed; + # so we also kill the following GUARD_NO_EXCEPTION + return + self.emit_operation(op) + def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.rlib.rarithmetic import LONG_BIT def test_store_final_boxes_in_guard(): @@ -680,25 +681,60 @@ # ---------- - def test_fold_guard_no_exception(self): - ops = """ - [i] - guard_no_exception() [] - i1 = int_add(i, 3) - guard_no_exception() [] + def test_keep_guard_no_exception(self): + ops = """ + [i1] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] - i3 = call(i2, descr=nonwritedescr) - jump(i1) # the exception is considered lost when we loop back - """ - expected = """ - [i] - i1 = int_add(i, 3) - i2 = call(i1, descr=nonwritedescr) + jump(i2) + """ + self.optimize_loop(ops, ops) + + def test_keep_guard_no_exception_with_call_pure_that_is_not_folded(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - i3 = call(i2, descr=nonwritedescr) - jump(i1) + jump(i2) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + self.optimize_loop(ops, expected) + + def test_remove_guard_no_exception_with_call_pure_on_constant_args(self): + arg_consts = [ConstInt(i) for i in (123456, 81)] + call_pure_results = {tuple(arg_consts): ConstInt(5)} + ops = """ + [i1] + i3 = same_as(81) + i2 = call_pure(123456, i3, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) + """ + expected = """ + [i1] + jump(5) + """ + self.optimize_loop(ops, expected, call_pure_results) + + def test_remove_guard_no_exception_with_duplicated_call_pure(self): + ops = """ + [i1] + i2 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + i3 = call_pure(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2, i3] + jump(i3) + """ + expected = """ + [i1] + i2 = call(123456, i1, descr=nonwritedescr) + guard_no_exception() [i1, i2] + jump(i2) """ self.optimize_loop(ops, expected) @@ -976,6 +1012,29 @@ """ self.optimize_loop(ops, expected) + def test_virtual_array_of_struct_forced(self): + ops = """ + [f0, f1] + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + f2 = getinteriorfield_gc(p0, 0, descr=complexrealdescr) + f3 = getinteriorfield_gc(p0, 0, descr=compleximagdescr) + f4 = float_mul(f2, f3) + i0 = escape(f4, p0) + finish(i0) + """ + expected = """ + [f0, f1] + f2 = float_mul(f0, f1) + p0 = new_array(1, descr=complexarraydescr) + setinteriorfield_gc(p0, 0, f0, descr=complexrealdescr) + setinteriorfield_gc(p0, 0, f1, descr=compleximagdescr) + i0 = escape(f2, p0) + finish(i0) + """ + self.optimize_loop(ops, expected) + def test_nonvirtual_1(self): ops = """ [i] @@ -4099,6 +4158,38 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_concat_constant_lengths(self): + ops = """ + [i0] + p0 = newstr(1) + strsetitem(p0, 0, i0) + p1 = newstr(0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p0, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_str_concat_constant_lengths_2(self): + ops = """ + [i0] + p0 = newstr(0) + p1 = newstr(1) + strsetitem(p1, 0, i0) + p2 = call(0, p0, p1, descr=strconcatdescr) + i1 = call(0, p2, p1, descr=strequaldescr) + finish(i1) + """ + expected = """ + [i0] + finish(1) + """ + self.optimize_strunicode_loop(ops, expected) + def test_str_slice_1(self): ops = """ [p1, i1, i2] @@ -4201,6 +4292,27 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_str_slice_plain_virtual(self): + ops = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + p1 = call(0, p0, 0, 5, descr=strslicedescr) + finish(p1) + """ + expected = """ + [] + p0 = newstr(11) + copystrcontent(s"hello world", p0, 0, 0, 11) + # Eventually this should just return s"hello", but ATM this test is + # just verifying that it doesn't return "\0\0\0\0\0", so being + # slightly underoptimized is ok. + p1 = newstr(5) + copystrcontent(p0, p1, 0, 0, 5) + finish(p1) + """ + self.optimize_strunicode_loop(ops, expected) + # ---------- def optimize_strunicode_loop_extradescrs(self, ops, optops): class FakeCallInfoCollection: @@ -4691,11 +4803,11 @@ i5 = int_ge(i0, 0) guard_true(i5) [] i1 = int_mod(i0, 42) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(42, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i5 = int_ge(i0, 0) @@ -4703,21 +4815,41 @@ i1 = int_mod(i0, 42) finish(i1) """ - py.test.skip("in-progress") self.optimize_loop(ops, expected) - # Also, 'n % power-of-two' can be turned into int_and(), - # but that's a bit harder to detect here because it turns into - # several operations, and of course it is wrong to just turn + # 'n % power-of-two' can be turned into int_and(); at least that's + # easy to do now if n is known to be non-negative. + ops = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_mod(i0, 8) + i2 = int_rshift(i1, %d) + i3 = int_and(42, i2) + i4 = int_add(i1, i3) + finish(i4) + """ % (LONG_BIT-1) + expected = """ + [i0] + i5 = int_ge(i0, 0) + guard_true(i5) [] + i1 = int_and(i0, 7) + finish(i1) + """ + self.optimize_loop(ops, expected) + + # Of course any 'maybe-negative % power-of-two' can be turned into + # int_and(), but that's a bit harder to detect here because it turns + # into several operations, and of course it is wrong to just turn # int_mod(i0, 16) into int_and(i0, 15). ops = """ [i0] i1 = int_mod(i0, 16) - i2 = int_rshift(i1, 63) + i2 = int_rshift(i1, %d) i3 = int_and(16, i2) i4 = int_add(i1, i3) finish(i4) - """ + """ % (LONG_BIT-1) expected = """ [i0] i4 = int_and(i0, 15) @@ -4726,6 +4858,16 @@ py.test.skip("harder") self.optimize_loop(ops, expected) + def test_intmod_bounds_bug1(self): + ops = """ + [i0] + i1 = int_mod(i0, %d) + i2 = int_eq(i1, 0) + guard_false(i2) [] + finish() + """ % (-(1<<(LONG_BIT-1)),) + self.optimize_loop(ops, ops) + def test_bounded_lazy_setfield(self): ops = """ [p0, i0] @@ -4808,6 +4950,27 @@ def test_plain_virtual_string_copy_content(self): ops = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, i1) + finish(i0) + """ + expected = """ + [i1] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, i1) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_plain_virtual_string_copy_content_2(self): + ops = """ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) @@ -4819,10 +4982,7 @@ [] p0 = newstr(6) copystrcontent(s"hello!", p0, 0, 0, 6) - p1 = newstr(12) - copystrcontent(p0, p1, 0, 0, 6) - copystrcontent(s"abc123", p1, 0, 6, 6) - i0 = strgetitem(p1, 0) + i0 = strgetitem(p0, 0) finish(i0) """ self.optimize_strunicode_loop(ops, expected) @@ -4839,6 +4999,34 @@ """ self.optimize_loop(ops, expected) + def test_known_equal_ints(self): + py.test.skip("in-progress") + ops = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + i5 = int_lt(i2, i1) + guard_true(i5) [] + + i6 = getarrayitem_gc(p0, i2) + finish(i6) + """ + expected = """ + [i0, i1, i2, p0] + i3 = int_eq(i0, i1) + guard_true(i3) [] + + i4 = int_lt(i2, i0) + guard_true(i4) [] + + i6 = getarrayitem_gc(p0, i3) + finish(i6) + """ + self.optimize_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -931,17 +931,14 @@ [i] guard_no_exception() [] i1 = int_add(i, 3) - guard_no_exception() [] i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] - guard_no_exception() [] i3 = call(i2, descr=nonwritedescr) jump(i1) # the exception is considered lost when we loop back """ - # note that 'guard_no_exception' at the very start is kept around - # for bridges, but not for loops preamble = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -950,6 +947,7 @@ """ expected = """ [i] + guard_no_exception() [] # occurs at the start of bridges, so keep it i1 = int_add(i, 3) i2 = call(i1, descr=nonwritedescr) guard_no_exception() [i1, i2] @@ -958,6 +956,23 @@ """ self.optimize_loop(ops, expected, preamble) + def test_bug_guard_no_exception(self): + ops = """ + [] + i0 = call(123, descr=nonwritedescr) + p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] + escape(p0) + jump() + """ + expected = """ + [] + i0 = call(123, descr=nonwritedescr) + escape(u"xy") + jump() + """ + self.optimize_loop(ops, expected) + # ---------- def test_call_loopinvariant(self): @@ -2168,13 +2183,13 @@ ops = """ [p0, i0, p1, i1, i2] setfield_gc(p0, i1, descr=valuedescr) - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) escape() jump(p0, i0, p1, i1, i2) """ expected = """ [p0, i0, p1, i1, i2] - copystrcontent(p0, i0, p1, i1, i2) + copystrcontent(p0, p1, i0, i1, i2) setfield_gc(p0, i1, descr=valuedescr) escape() jump(p0, i0, p1, i1, i2) @@ -4783,6 +4798,52 @@ """ self.optimize_loop(ops, expected) + + def test_division_nonneg(self): + py.test.skip("harder") + # this is how an app-level division turns into right now + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + i18 = int_mul(i16, 3) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 3) + finish(i16) + """ + self.optimize_loop(ops, expected) + + def test_division_by_2(self): + py.test.skip("harder") + ops = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_floordiv(i4, 2) + i18 = int_mul(i16, 2) + i19 = int_sub(i4, i18) + i21 = int_rshift(i19, %d) + i22 = int_add(i16, i21) + finish(i22) + """ % (LONG_BIT-1) + expected = """ + [i4] + i1 = int_ge(i4, 0) + guard_true(i1) [] + i16 = int_rshift(i4, 1) + finish(i16) + """ + self.optimize_loop(ops, expected) + def test_subsub_ovf(self): ops = """ [i0] @@ -6235,12 +6296,15 @@ def test_str2unicode_constant(self): ops = """ [] + escape(1213) p0 = call(0, "xy", descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p0) jump() """ expected = """ [] + escape(1213) escape(u"xy") jump() """ @@ -6250,6 +6314,7 @@ ops = """ [p0] p1 = call(0, p0, descr=s2u_descr) # string -> unicode + guard_no_exception() [] escape(p1) jump(p1) """ @@ -7309,6 +7374,150 @@ """ self.optimize_loop(ops, expected) + def test_repeated_constant_setfield_mixed_with_guard(self): + ops = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, 2, descr=valuedescr) + jump(p22, p18) + """ + preamble = """ + [p22, p18] + setfield_gc(p22, 2, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18) + """ + short = """ + [p22, p18] + i1 = getfield_gc(p22, descr=valuedescr) + guard_value(i1, 2) [] + jump(p22, p18) + """ + expected = """ + [p22, p18] + jump(p22, p18) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_repeated_setfield_mixed_with_guard(self): + ops = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1) + """ + preamble = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + guard_nonnull_class(p18, ConstClass(node_vtable)) [] + jump(p22, p18, i1, i1) + """ + short = """ + [p22, p18, i1] + i2 = getfield_gc(p22, descr=valuedescr) + jump(p22, p18, i1, i2) + """ + expected = """ + [p22, p18, i1, i2] + call(i2, descr=nonwritedescr) + setfield_gc(p22, i1, descr=valuedescr) + jump(p22, p18, i1, i1) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) + + def test_cache_setfield_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getfield_gc(p1, descr=valuedescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p1, p3, descr=valuedescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_cache_setarrayitem_across_loop_boundaries(self): + ops = """ + [p1] + p2 = getarrayitem_gc(p1, 3, descr=arraydescr) + guard_nonnull_class(p2, ConstClass(node_vtable)) [] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1) + """ + expected = """ + [p1, p2] + call(p2, descr=nonwritedescr) + p3 = new_with_vtable(ConstClass(node_vtable)) + setarrayitem_gc(p1, 3, p3, descr=arraydescr) + jump(p1, p3) + """ + self.optimize_loop(ops, expected) + + def test_setarrayitem_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setarrayitem_gc(p0, 2, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p0(self): + ops = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + setfield_gc(p0, p0, descr=arraydescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + + def test_setfield_p0_p1_p0(self): + ops = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + expected = """ + [i0, i1] + p0 = escape() + p1 = escape() + setfield_gc(p0, p1, descr=adescr) + setfield_gc(p1, p0, descr=bdescr) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -183,6 +183,7 @@ can_invalidate=True)) arraycopydescr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [arraydescr], [], [arraydescr], + EffectInfo.EF_CANNOT_RAISE, oopspecindex=EffectInfo.OS_ARRAYCOPY)) @@ -212,12 +213,14 @@ _oopspecindex = getattr(EffectInfo, _os) locals()[_name] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) # _oopspecindex = getattr(EffectInfo, _os.replace('STR', 'UNI')) locals()[_name.replace('str', 'unicode')] = \ cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, - EffectInfo([], [], [], [], oopspecindex=_oopspecindex)) + EffectInfo([], [], [], [], EffectInfo.EF_CANNOT_RAISE, + oopspecindex=_oopspecindex)) s2u_descr = cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, EffectInfo([], [], [], [], oopspecindex=EffectInfo.OS_STR2UNICODE)) diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -294,7 +294,12 @@ optforce.emit_operation(self.source_op) self.box = box = self.source_op.result for index in range(len(self._items)): - for descr, value in self._items[index].iteritems(): + iteritems = self._items[index].iteritems() + # random order is fine, except for tests + if not we_are_translated(): + iteritems = list(iteritems) + iteritems.sort(key = lambda (x, y): x.sort_key()) + for descr, value in iteritems: subbox = value.force_box(optforce) op = ResOperation(rop.SETINTERIORFIELD_GC, [box, ConstInt(index), subbox], None, descr=descr diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -551,6 +551,7 @@ optimizer.produce_potential_short_preamble_ops(self) self.short_boxes = {} + self.short_boxes_in_production = {} for box in self.potential_ops.keys(): try: @@ -606,6 +607,10 @@ return if isinstance(box, Const): return + if box in self.short_boxes_in_production: + raise BoxNotProducable + self.short_boxes_in_production[box] = True + if box in self.potential_ops: ops = self.prioritized_alternatives(box) produced_one = False diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -1,8 +1,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.history import (BoxInt, Const, ConstInt, ConstPtr, - get_const_ptr_for_string, get_const_ptr_for_unicode) + get_const_ptr_for_string, get_const_ptr_for_unicode, BoxPtr, REF, INT) from pypy.jit.metainterp.optimizeopt import optimizer, virtualize -from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1, llhelper +from pypy.jit.metainterp.optimizeopt.optimizer import CONST_0, CONST_1 +from pypy.jit.metainterp.optimizeopt.optimizer import llhelper, REMOVED from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib.objectmodel import specialize, we_are_translated @@ -106,7 +107,12 @@ if not we_are_translated(): op.name = 'FORCE' optforce.emit_operation(op) - self.string_copy_parts(optforce, box, CONST_0, self.mode) + self.initialize_forced_string(optforce, box, CONST_0, self.mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): + return self.string_copy_parts(string_optimizer, targetbox, + offsetbox, mode) class VStringPlainValue(VAbstractStringValue): @@ -114,11 +120,20 @@ _lengthbox = None # cache only def setup(self, size): - self._chars = [optimizer.CVAL_UNINITIALIZED_ZERO] * size + # in this list, None means: "it's probably uninitialized so far, + # but maybe it was actually filled." So to handle this case, + # strgetitem cannot be virtual-ized and must be done as a residual + # operation. By contrast, any non-None value means: we know it + # is initialized to this value; strsetitem() there makes no sense. + # Also, as long as self.is_virtual(), then we know that no-one else + # could have written to the string, so we know that in this case + # "None" corresponds to "really uninitialized". + self._chars = [None] * size def setup_slice(self, longerlist, start, stop): assert 0 <= start <= stop <= len(longerlist) self._chars = longerlist[start:stop] + # slice the 'longerlist', which may also contain Nones def getstrlen(self, _, mode): if self._lengthbox is None: @@ -126,42 +141,66 @@ return self._lengthbox def getitem(self, index): - return self._chars[index] + return self._chars[index] # may return None! def setitem(self, index, charvalue): assert isinstance(charvalue, optimizer.OptValue) + assert self._chars[index] is None, ( + "setitem() on an already-initialized location") self._chars[index] = charvalue + def is_completely_initialized(self): + for c in self._chars: + if c is None: + return False + return True + @specialize.arg(1) def get_constant_string_spec(self, mode): for c in self._chars: - if c is optimizer.CVAL_UNINITIALIZED_ZERO or not c.is_constant(): + if c is None or not c.is_constant(): return None return mode.emptystr.join([mode.chr(c.box.getint()) for c in self._chars]) def string_copy_parts(self, string_optimizer, targetbox, offsetbox, mode): - if not self.is_virtual() and targetbox is not self.box: - lengthbox = self.getstrlen(string_optimizer, mode) - srcbox = self.force_box(string_optimizer) - return copy_str_content(string_optimizer, srcbox, targetbox, - CONST_0, offsetbox, lengthbox, mode) + if not self.is_virtual() and not self.is_completely_initialized(): + return VAbstractStringValue.string_copy_parts( + self, string_optimizer, targetbox, offsetbox, mode) + else: + return self.initialize_forced_string(string_optimizer, targetbox, + offsetbox, mode) + + def initialize_forced_string(self, string_optimizer, targetbox, + offsetbox, mode): for i in range(len(self._chars)): - charbox = self._chars[i].force_box(string_optimizer) - if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): - string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, - offsetbox, - charbox], - None)) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense + charvalue = self.getitem(i) + if charvalue is not None: + charbox = charvalue.force_box(string_optimizer) + if not (isinstance(charbox, Const) and + charbox.same_constant(CONST_0)): + op = ResOperation(mode.STRSETITEM, [targetbox, + offsetbox, + charbox], + None) + string_optimizer.emit_operation(op) offsetbox = _int_add(string_optimizer, offsetbox, CONST_1) return offsetbox def get_args_for_fail(self, modifier): if self.box is None and not modifier.already_seen_virtual(self.keybox): - charboxes = [value.get_key_box() for value in self._chars] + charboxes = [] + for value in self._chars: + if value is not None: + box = value.get_key_box() + else: + box = None + charboxes.append(box) modifier.register_virtual_fields(self.keybox, charboxes) for value in self._chars: - value.get_args_for_fail(modifier) + if value is not None: + value.get_args_for_fail(modifier) def _make_virtual(self, modifier): return modifier.make_vstrplain(self.mode is mode_unicode) @@ -169,6 +208,7 @@ class VStringConcatValue(VAbstractStringValue): """The concatenation of two other strings.""" + _attrs_ = ('left', 'right', 'lengthbox') lengthbox = None # or the computed length @@ -277,6 +317,7 @@ for i in range(lengthbox.value): charbox = _strgetitem(string_optimizer, srcbox, srcoffsetbox, mode) srcoffsetbox = _int_add(string_optimizer, srcoffsetbox, CONST_1) + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense string_optimizer.emit_operation(ResOperation(mode.STRSETITEM, [targetbox, offsetbox, charbox], @@ -287,6 +328,7 @@ nextoffsetbox = _int_add(string_optimizer, offsetbox, lengthbox) else: nextoffsetbox = None + assert isinstance(targetbox, BoxPtr) # ConstPtr never makes sense op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -373,6 +415,7 @@ def optimize_STRSETITEM(self, op): value = self.getvalue(op.getarg(0)) + assert not value.is_constant() # strsetitem(ConstPtr) never makes sense if value.is_virtual() and isinstance(value, VStringPlainValue): indexbox = self.get_constant_box(op.getarg(1)) if indexbox is not None: @@ -406,11 +449,20 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - res = value.getitem(vindex.box.getint()) - # If it is uninitialized we can't return it, it was set by a - # COPYSTRCONTENT, not a STRSETITEM - if res is not optimizer.CVAL_UNINITIALIZED_ZERO: - return res + result = value.getitem(vindex.box.getint()) + if result is not None: + return result + # + if isinstance(value, VStringConcatValue) and vindex.is_constant(): + len1box = value.left.getstrlen(self, mode) + if isinstance(len1box, ConstInt): + index = vindex.box.getint() + len1 = len1box.getint() + if index < len1: + return self.strgetitem(value.left, vindex, mode) + else: + vindex = optimizer.ConstantValue(ConstInt(index - len1)) + return self.strgetitem(value.right, vindex, mode) # resbox = _strgetitem(self, value.force_box(self), vindex.force_box(self), mode) return self.getvalue(resbox) @@ -432,6 +484,11 @@ def _optimize_COPYSTRCONTENT(self, op, mode): # args: src dst srcstart dststart length + assert op.getarg(0).type == REF + assert op.getarg(1).type == REF + assert op.getarg(2).type == INT + assert op.getarg(3).type == INT + assert op.getarg(4).type == INT src = self.getvalue(op.getarg(0)) dst = self.getvalue(op.getarg(1)) srcstart = self.getvalue(op.getarg(2)) @@ -473,6 +530,11 @@ optimize_CALL_PURE = optimize_CALL + def optimize_GUARD_NO_EXCEPTION(self, op): + if self.last_emitted_operation is REMOVED: + return + self.emit_operation(op) + def opt_call_str_STR2UNICODE(self, op): # Constant-fold unicode("constant string"). # More generally, supporting non-constant but virtual cases is @@ -487,6 +549,7 @@ except UnicodeDecodeError: return False self.make_constant(op.result, get_const_ptr_for_unicode(u)) + self.last_emitted_operation = REMOVED return True def opt_call_stroruni_STR_CONCAT(self, op, mode): @@ -503,13 +566,12 @@ vstart = self.getvalue(op.getarg(2)) vstop = self.getvalue(op.getarg(3)) # - if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() - and vstop.is_constant()): - # slicing with constant bounds of a VStringPlainValue - value = self.make_vstring_plain(op.result, op, mode) - value.setup_slice(vstr._chars, vstart.box.getint(), - vstop.box.getint()) - return True + #if (isinstance(vstr, VStringPlainValue) and vstart.is_constant() + # and vstop.is_constant()): + # value = self.make_vstring_plain(op.result, op, mode) + # value.setup_slice(vstr._chars, vstart.box.getint(), + # vstop.box.getint()) + # return True # vstr.ensure_nonnull() lengthbox = _int_sub(self, vstop.force_box(self), diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1345,10 +1345,8 @@ if effect == effectinfo.EF_LOOPINVARIANT: return self.execute_varargs(rop.CALL_LOOPINVARIANT, allboxes, descr, False, False) - exc = (effect != effectinfo.EF_CANNOT_RAISE and - effect != effectinfo.EF_ELIDABLE_CANNOT_RAISE) - pure = (effect == effectinfo.EF_ELIDABLE_CAN_RAISE or - effect == effectinfo.EF_ELIDABLE_CANNOT_RAISE) + exc = effectinfo.check_can_raise() + pure = effectinfo.check_is_elidable() return self.execute_varargs(rop.CALL, allboxes, descr, exc, pure) def do_residual_or_indirect_call(self, funcbox, calldescr, argboxes): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -90,7 +90,10 @@ return op def __repr__(self): - return self.repr() + try: + return self.repr() + except NotImplementedError: + return object.__repr__(self) def repr(self, graytext=False): # RPython-friendly version @@ -458,6 +461,7 @@ 'GETARRAYITEM_GC/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', '_MALLOC_FIRST', @@ -476,6 +480,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', 'STRSETITEM/3', diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -126,6 +126,7 @@ UNASSIGNED = tag(-1<<13, TAGBOX) UNASSIGNEDVIRTUAL = tag(-1<<13, TAGVIRTUAL) NULLREF = tag(-1, TAGCONST) +UNINITIALIZED = tag(-2, TAGCONST) # used for uninitialized string characters class ResumeDataLoopMemo(object): @@ -439,6 +440,8 @@ self.storage.rd_pendingfields = rd_pendingfields def _gettagged(self, box): + if box is None: + return UNINITIALIZED if isinstance(box, Const): return self.memo.getconst(box) else: @@ -572,7 +575,9 @@ string = decoder.allocate_string(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.string_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.string_setitem(string, i, charnum) return string def debug_prints(self): @@ -625,7 +630,9 @@ string = decoder.allocate_unicode(length) decoder.virtuals_cache[index] = string for i in range(length): - decoder.unicode_setitem(string, i, self.fieldnums[i]) + charnum = self.fieldnums[i] + if not tagged_eq(charnum, UNINITIALIZED): + decoder.unicode_setitem(string, i, charnum) return string def debug_prints(self): diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -12,7 +12,7 @@ from pypy.rlib.rfloat import isnan def _get_jitcodes(testself, CPUClass, func, values, type_system, - supports_longlong=False, **kwds): + supports_longlong=False, translationoptions={}, **kwds): from pypy.jit.codewriter import support class FakeJitCell(object): @@ -42,7 +42,8 @@ enable_opts = ALL_OPTS_DICT func._jit_unroll_safe_ = True - rtyper = support.annotate(func, values, type_system=type_system) + rtyper = support.annotate(func, values, type_system=type_system, + translationoptions=translationoptions) graphs = rtyper.annotator.translator.graphs testself.all_graphs = graphs result_kind = history.getkind(graphs[0].getreturnvar().concretetype)[0] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -3513,7 +3516,9 @@ def f(n): while n > 0: myjitdriver.jit_merge_point(n=n) - n = g({"key": n}) + x = {"key": n} + n = g(x) + del x["key"] return n res = self.meta_interp(f, [10]) @@ -3559,6 +3564,34 @@ assert res == 0 self.check_loops({"int_sub": 1, "int_gt": 1, "guard_true": 1, "jump": 1}) + def test_convert_from_SmallFunctionSetPBCRepr_to_FunctionsPBCRepr(self): + f1 = lambda n: n+1 + f2 = lambda n: n+2 + f3 = lambda n: n+3 + f4 = lambda n: n+4 + f5 = lambda n: n+5 + f6 = lambda n: n+6 + f7 = lambda n: n+7 + f8 = lambda n: n+8 + def h(n, x): + return x(n) + h._dont_inline = True + def g(n, x): + return h(n, x) + g._dont_inline = True + def f(n): + n = g(n, f1) + n = g(n, f2) + n = h(n, f3) + n = h(n, f4) + n = h(n, f5) + n = h(n, f6) + n = h(n, f7) + n = h(n, f8) + return n + assert f(5) == 41 + translationoptions = {'withsmallfuncsets': 3} + self.interp_operations(f, [5], translationoptions=translationoptions) class TestLLtype(BaseLLtypeTests, LLJitMixin): @@ -3613,7 +3646,9 @@ o = o.dec() pc += 1 return pc - res = self.meta_interp(main, [False, 100, True], taggedpointers=True) + topt = {'taggedpointers': True} + res = self.meta_interp(main, [False, 100, True], + translationoptions=topt) def test_rerased(self): eraseX, uneraseX = rerased.new_erasing_pair("X") @@ -3638,10 +3673,24 @@ else: return rerased.unerase_int(e) # - x = self.interp_operations(f, [-128, 0], taggedpointers=True) + topt = {'taggedpointers': True} + x = self.interp_operations(f, [-128, 0], translationoptions=topt) assert x == -128 bigint = sys.maxint//2 + 1 - x = self.interp_operations(f, [bigint, 0], taggedpointers=True) + x = self.interp_operations(f, [bigint, 0], translationoptions=topt) assert x == -42 - x = self.interp_operations(f, [1000, 1], taggedpointers=True) + x = self.interp_operations(f, [1000, 1], translationoptions=topt) assert x == 999 + + def test_ll_arraycopy(self): + from pypy.rlib import rgc + A = lltype.GcArray(lltype.Char) + a = lltype.malloc(A, 10) + for i in range(10): a[i] = chr(i) + b = lltype.malloc(A, 10) + # + def f(c, d, e): + rgc.ll_arraycopy(a, b, c, d, e) + return 42 + self.interp_operations(f, [1, 2, 3]) + self.check_operations_history(call=1, guard_no_exception=0) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -1,19 +1,18 @@ +import py -import py +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, + types) +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.jit import JitDriver, promote, dont_look_inside +from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rlib.unroll import unrolling_iterable -from pypy.rlib.libffi import ArgChain -from pypy.rlib.libffi import IS_32_BIT -from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.objectmodel import specialize from pypy.tool.sourcetools import func_with_new_name -from pypy.jit.metainterp.test.support import LLJitMixin -class TestFfiCall(LLJitMixin, _TestLibffiCall): - supports_all = False # supports_{floats,longlong,singlefloats} +class FfiCallTests(_TestLibffiCall): # ===> ../../../rlib/test/test_libffi.py def call(self, funcspec, args, RESULT, is_struct=False, jitif=[]): @@ -92,6 +91,69 @@ test_byval_result.__doc__ = _TestLibffiCall.test_byval_result.__doc__ test_byval_result.dont_track_allocations = True +class FfiLookupTests(object): + def test_array_fields(self): + myjitdriver = JitDriver( + greens = [], + reds = ["n", "i", "points", "result_point"], + ) -class TestFfiCallSupportAll(TestFfiCall): + POINT = lltype.Struct("POINT", + ("x", lltype.Signed), + ("y", lltype.Signed), + ) + def f(points, result_point, n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(i=i, points=points, n=n, + result_point=result_point) + x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, 0 + ) + y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, points, i, rffi.sizeof(lltype.Signed) + ) + + cur_x = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0 + ) + cur_y = array_getitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed) + ) + + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, 0, cur_x + x + ) + array_setitem( + types.slong, rffi.sizeof(lltype.Signed) * 2, result_point, 0, rffi.sizeof(lltype.Signed), cur_y + y + ) + i += 1 + + def main(n): + with lltype.scoped_alloc(rffi.CArray(POINT), n) as points: + with lltype.scoped_alloc(rffi.CArray(POINT), 1) as result_point: + for i in xrange(n): + points[i].x = i * 2 + points[i].y = i * 2 + 1 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + result_point[0].x = 0 + result_point[0].y = 0 + result_point = rffi.cast(rffi.CArrayPtr(lltype.Char), result_point) + f(points, result_point, n) + result_point = rffi.cast(rffi.CArrayPtr(POINT), result_point) + return result_point[0].x * result_point[0].y + + assert self.meta_interp(main, [10]) == main(10) == 9000 + self.check_loops({"int_add": 3, "jump": 1, "int_lt": 1, "guard_true": 1, + "getinteriorfield_raw": 4, "setinteriorfield_raw": 2 + }) + + +class TestFfiCall(FfiCallTests, LLJitMixin): + supports_all = False + +class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + +class TestFfiLookup(FfiLookupTests, LLJitMixin): + pass \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -1135,16 +1135,11 @@ assert ptr2.parent.next == ptr class CompareableConsts(object): - def __init__(self): - self.oldeq = None - def __enter__(self): - assert self.oldeq is None - self.oldeq = Const.__eq__ Const.__eq__ = Const.same_box - + def __exit__(self, type, value, traceback): - Const.__eq__ = self.oldeq + del Const.__eq__ def test_virtual_adder_make_varray(): b2s, b4s = [BoxPtr(), BoxInt(4)] diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -847,7 +847,8 @@ i5 = arraylen_gc(p2, descr=arraydescr) i6 = int_ge(i5, 1) guard_true(i6) [] - jump(p0, p1, p2) + p3 = getarrayitem_gc(p2, 0, descr=arraydescr) + jump(p0, p1, p3, p2) """ self.optimize_bridge(loop, bridge, expected, p0=self.myptr) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -48,13 +48,13 @@ translator.warmrunnerdesc = warmrunnerdesc # for later debugging def ll_meta_interp(function, args, backendopt=False, type_system='lltype', - listcomp=False, **kwds): + listcomp=False, translationoptions={}, **kwds): if listcomp: extraconfigopts = {'translation.list_comprehension_operations': True} else: extraconfigopts = {} - if kwds.pop("taggedpointers", False): - extraconfigopts["translation.taggedpointers"] = True + for key, value in translationoptions.items(): + extraconfigopts['translation.' + key] = value interp, graph = get_interpreter(function, args, backendopt=False, # will be done below type_system=type_system, @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,16 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break + else: + assert 0, "jitdriver of set_param() not found" else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -206,24 +206,28 @@ @unwrap_spec(size=int) def direct_readlines(self, size=0): stream = self.getstream() - # NB. this implementation is very inefficient for unbuffered - # streams, but ok if stream.readline() is efficient. + # this is implemented as: .read().split('\n') + # except that it keeps the \n in the resulting strings if size <= 0: - result = [] - while True: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.readall() else: - result = [] - while size > 0: - line = stream.readline() - if not line: - break - result.append(line) - size -= len(line) + data = stream.read(size) + result = [] + splitfrom = 0 + for i in range(len(data)): + if data[i] == '\n': + result.append(data[splitfrom : i + 1]) + splitfrom = i + 1 + # + if splitfrom < len(data): + # there is a partial line at the end. If size > 0, it is likely + # to be because the 'read(size)' returned data up to the middle + # of a line. In that case, use 'readline()' to read until the + # end of the current line. + data = data[splitfrom:] + if size > 0: + data += stream.readline() + result.append(data) return result @unwrap_spec(offset=r_longlong, whence=int) diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -4,32 +4,44 @@ from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import func_renamer from pypy.interpreter.baseobjspace import Wrappable -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib import rgc, ropenssl from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib import ropenssl from pypy.rlib.rstring import StringBuilder from pypy.module.thread.os_lock import Lock algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') +# HASH_MALLOC_SIZE is the size of EVP_MD, EVP_MD_CTX plus their points +# Used for adding memory pressure. Last number is an (under?)estimate of +# EVP_PKEY_CTX's size. +# XXX: Make a better estimate here +HASH_MALLOC_SIZE = ropenssl.EVP_MD_SIZE + ropenssl.EVP_MD_CTX_SIZE \ + + rffi.sizeof(ropenssl.EVP_MD) * 2 + 208 + class W_Hash(Wrappable): ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) + _block_size = -1 def __init__(self, space, name): self.name = name + self.digest_size = self.compute_digest_size() # Allocate a lock for each HASH object. # An optimization would be to not release the GIL on small requests, # and use a custom lock only when needed. self.lock = Lock(space) + ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') + rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) + self.ctx = ctx + + def initdigest(self, space, name): digest = ropenssl.EVP_get_digestbyname(name) if not digest: raise OperationError(space.w_ValueError, space.wrap("unknown hash function")) - ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - ropenssl.EVP_DigestInit(ctx, digest) - self.ctx = ctx + ropenssl.EVP_DigestInit(self.ctx, digest) def __del__(self): # self.lock.free() @@ -65,33 +77,29 @@ "Return the digest value as a string of hexadecimal digits." digest = self._digest(space) hexdigits = '0123456789abcdef' - result = StringBuilder(self._digest_size() * 2) + result = StringBuilder(self.digest_size * 2) for c in digest: result.append(hexdigits[(ord(c) >> 4) & 0xf]) result.append(hexdigits[ ord(c) & 0xf]) return space.wrap(result.build()) def get_digest_size(self, space): - return space.wrap(self._digest_size()) + return space.wrap(self.digest_size) def get_block_size(self, space): - return space.wrap(self._block_size()) + return space.wrap(self.compute_block_size()) def _digest(self, space): - copy = self.copy(space) - ctx = copy.ctx - digest_size = self._digest_size() - digest = lltype.malloc(rffi.CCHARP.TO, digest_size, flavor='raw') + with lltype.scoped_alloc(ropenssl.EVP_MD_CTX.TO) as ctx: + with self.lock: + ropenssl.EVP_MD_CTX_copy(ctx, self.ctx) + digest_size = self.digest_size + with lltype.scoped_alloc(rffi.CCHARP.TO, digest_size) as digest: + ropenssl.EVP_DigestFinal(ctx, digest, None) + ropenssl.EVP_MD_CTX_cleanup(ctx) + return rffi.charpsize2str(digest, digest_size) - try: - ropenssl.EVP_DigestFinal(ctx, digest, None) - return rffi.charpsize2str(digest, digest_size) - finally: - keepalive_until_here(copy) - lltype.free(digest, flavor='raw') - - - def _digest_size(self): + def compute_digest_size(self): # XXX This isn't the nicest way, but the EVP_MD_size OpenSSL # XXX function is defined as a C macro on OS X and would be # XXX significantly harder to implement in another way. @@ -105,12 +113,14 @@ 'sha512': 64, 'SHA512': 64, }.get(self.name, 0) - def _block_size(self): + def compute_block_size(self): + if self._block_size != -1: + return self._block_size # XXX This isn't the nicest way, but the EVP_MD_CTX_block_size # XXX OpenSSL function is defined as a C macro on some systems # XXX and would be significantly harder to implement in # XXX another way. - return { + self._block_size = { 'md5': 64, 'MD5': 64, 'sha1': 64, 'SHA1': 64, 'sha224': 64, 'SHA224': 64, @@ -118,6 +128,7 @@ 'sha384': 128, 'SHA384': 128, 'sha512': 128, 'SHA512': 128, }.get(self.name, 0) + return self._block_size W_Hash.typedef = TypeDef( 'HASH', @@ -135,6 +146,7 @@ @unwrap_spec(name=str, string='bufferstr') def new(space, name, string=''): w_hash = W_Hash(space, name) + w_hash.initdigest(space, name) w_hash.update(space, string) return space.wrap(w_hash) diff --git a/pypy/module/_minimal_curses/__init__.py b/pypy/module/_minimal_curses/__init__.py --- a/pypy/module/_minimal_curses/__init__.py +++ b/pypy/module/_minimal_curses/__init__.py @@ -4,7 +4,8 @@ try: import _minimal_curses as _curses # when running on top of pypy-c except ImportError: - raise ImportError("no _curses or _minimal_curses module") # no _curses at all + import py + py.test.skip("no _curses or _minimal_curses module") #no _curses at all from pypy.interpreter.mixedmodule import MixedModule from pypy.module._minimal_curses import fficurses diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import wrap_oserror, OperationError from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rlib import rgc from pypy.rlib.rarithmetic import r_uint from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform as platform @@ -23,6 +24,8 @@ _CreateSemaphore = rwin32.winexternal( 'CreateSemaphoreA', [rffi.VOIDP, rffi.LONG, rffi.LONG, rwin32.LPCSTR], rwin32.HANDLE) + _CloseHandle = rwin32.winexternal('CloseHandle', [rwin32.HANDLE], + rwin32.BOOL, threadsafe=False) _ReleaseSemaphore = rwin32.winexternal( 'ReleaseSemaphore', [rwin32.HANDLE, rffi.LONG, rffi.LONGP], rwin32.BOOL) @@ -51,6 +54,7 @@ SEM_FAILED = platform.ConstantInteger('SEM_FAILED') SEM_VALUE_MAX = platform.ConstantInteger('SEM_VALUE_MAX') SEM_TIMED_WAIT = platform.Has('sem_timedwait') + SEM_T_SIZE = platform.SizeOf('sem_t') config = platform.configure(CConfig) TIMEVAL = config['TIMEVAL'] @@ -61,18 +65,21 @@ SEM_FAILED = config['SEM_FAILED'] # rffi.cast(SEM_T, config['SEM_FAILED']) SEM_VALUE_MAX = config['SEM_VALUE_MAX'] SEM_TIMED_WAIT = config['SEM_TIMED_WAIT'] + SEM_T_SIZE = config['SEM_T_SIZE'] if sys.platform == 'darwin': HAVE_BROKEN_SEM_GETVALUE = True else: HAVE_BROKEN_SEM_GETVALUE = False - def external(name, args, result): + def external(name, args, result, **kwargs): return rffi.llexternal(name, args, result, - compilation_info=eci) + compilation_info=eci, **kwargs) _sem_open = external('sem_open', [rffi.CCHARP, rffi.INT, rffi.INT, rffi.UINT], SEM_T) + # tread sem_close as not threadsafe for now to be able to use the __del__ + _sem_close = external('sem_close', [SEM_T], rffi.INT, threadsafe=False) _sem_unlink = external('sem_unlink', [rffi.CCHARP], rffi.INT) _sem_wait = external('sem_wait', [SEM_T], rffi.INT) _sem_trywait = external('sem_trywait', [SEM_T], rffi.INT) @@ -90,6 +97,11 @@ raise OSError(rposix.get_errno(), "sem_open failed") return res + def sem_close(handle): + res = _sem_close(handle) + if res < 0: + raise OSError(rposix.get_errno(), "sem_close failed") + def sem_unlink(name): res = _sem_unlink(name) if res < 0: @@ -205,6 +217,11 @@ raise WindowsError(err, "CreateSemaphore") return handle + def delete_semaphore(handle): + if not _CloseHandle(handle): + err = rwin32.GetLastError() + raise WindowsError(err, "CloseHandle") + def semlock_acquire(self, space, block, w_timeout): if not block: full_msecs = 0 @@ -291,8 +308,13 @@ sem_unlink(name) except OSError: pass + else: + rgc.add_memory_pressure(SEM_T_SIZE) return sem + def delete_semaphore(handle): + sem_close(handle) + def semlock_acquire(self, space, block, w_timeout): if not block: deadline = lltype.nullptr(TIMESPECP.TO) @@ -483,6 +505,9 @@ def exit(self, space, __args__): self.release(space) + def __del__(self): + delete_semaphore(self.handle) + @unwrap_spec(kind=int, value=int, maxvalue=int) def descr_new(space, w_subtype, kind, value, maxvalue): if kind != RECURSIVE_MUTEX and kind != SEMAPHORE: diff --git a/pypy/module/_rawffi/structure.py b/pypy/module/_rawffi/structure.py --- a/pypy/module/_rawffi/structure.py +++ b/pypy/module/_rawffi/structure.py @@ -212,6 +212,8 @@ while count + basic_size <= total_size: fieldtypes.append(basic_ffi_type) count += basic_size + if basic_size == 0: # corner case. get out of this infinite + break # loop after 1 iteration ("why not") self.ffi_struct = clibffi.make_struct_ffitype_e(self.size, self.alignment, fieldtypes) diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -1022,6 +1022,12 @@ assert ret.y == 1234500, "ret.y == %d" % (ret.y,) s.free() + def test_ffi_type(self): + import _rawffi + EMPTY = _rawffi.Structure([]) + S2E = _rawffi.Structure([('bah', (EMPTY, 1))]) + S2E.get_ffi_type() # does not hang + class AppTestAutoFree: def setup_class(cls): space = gettestobjspace(usemodules=('_rawffi', 'struct')) diff --git a/pypy/module/array/test/test_array.py b/pypy/module/array/test/test_array.py --- a/pypy/module/array/test/test_array.py +++ b/pypy/module/array/test/test_array.py @@ -835,7 +835,7 @@ a.append(3.0) r = weakref.ref(a, lambda a: l.append(a())) del a - gc.collect() + gc.collect(); gc.collect() # XXX needs two of them right now... assert l assert l[0] is None or len(l[0]) == 0 diff --git a/pypy/module/bz2/test/test_large.py b/pypy/module/bz2/test/test_large.py --- a/pypy/module/bz2/test/test_large.py +++ b/pypy/module/bz2/test/test_large.py @@ -8,7 +8,7 @@ py.test.skip("skipping this very slow test; try 'pypy-c -A'") cls.space = gettestobjspace(usemodules=('bz2',)) largetest_bz2 = py.path.local(__file__).dirpath().join("largetest.bz2") - cls.w_compressed_data = cls.space.wrap(largetest_bz2.read()) + cls.w_compressed_data = cls.space.wrap(largetest_bz2.read('rb')) def test_decompress(self): from bz2 import decompress diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -392,6 +392,7 @@ 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', + 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) diff --git a/pypy/module/cpyext/include/eval.h b/pypy/module/cpyext/include/eval.h --- a/pypy/module/cpyext/include/eval.h +++ b/pypy/module/cpyext/include/eval.h @@ -14,8 +14,8 @@ PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...); PyObject * PyEval_CallMethod(PyObject *obj, const char *name, const char *format, ...); -PyObject * PyObject_CallFunction(PyObject *obj, char *format, ...); -PyObject * PyObject_CallMethod(PyObject *obj, char *name, char *format, ...); +PyObject * PyObject_CallFunction(PyObject *obj, const char *format, ...); +PyObject * PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...); PyObject * PyObject_CallFunctionObjArgs(PyObject *callable, ...); PyObject * PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...); diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -48,7 +48,11 @@ /* * This is from pyport.h. Perhaps it belongs elsewhere. */ +#ifdef __cplusplus +#define PyMODINIT_FUNC extern "C" void +#else #define PyMODINIT_FUNC void +#endif #ifdef __cplusplus diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.1" +#define PYPY_VERSION "1.7.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/include/pycobject.h b/pypy/module/cpyext/include/pycobject.h --- a/pypy/module/cpyext/include/pycobject.h +++ b/pypy/module/cpyext/include/pycobject.h @@ -33,7 +33,7 @@ PyAPI_FUNC(void *) PyCObject_GetDesc(PyObject *); /* Import a pointer to a C object from a module using a PyCObject. */ -PyAPI_FUNC(void *) PyCObject_Import(char *module_name, char *cobject_name); +PyAPI_FUNC(void *) PyCObject_Import(const char *module_name, const char *cobject_name); /* Modify a C object. Fails (==0) if object has a destructor. */ PyAPI_FUNC(int) PyCObject_SetVoidPtr(PyObject *self, void *cobj); diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -11,8 +11,8 @@ (PyClass_Check((x)) || (PyType_Check((x)) && \ PyObject_IsSubclass((x), PyExc_BaseException))) -PyObject *PyErr_NewException(char *name, PyObject *base, PyObject *dict); -PyObject *PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict); +PyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict); +PyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict); PyObject *PyErr_Format(PyObject *exception, const char *format, ...); /* These APIs aren't really part of the error implementation, but diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -240,6 +240,7 @@ def PyStaticMethod_New(space, w_func): return space.wrap(StaticMethod(w_func)) + at cpython_api([PyObject, lltype.Ptr(PyMethodDef)], PyObject) def PyDescr_NewMethod(space, w_type, method): return space.wrap(W_PyCMethodObject(space, method, w_type)) diff --git a/pypy/module/cpyext/modsupport.py b/pypy/module/cpyext/modsupport.py --- a/pypy/module/cpyext/modsupport.py +++ b/pypy/module/cpyext/modsupport.py @@ -54,9 +54,15 @@ modname = rffi.charp2str(name) state = space.fromcache(State) f_name, f_path = state.package_context - w_mod = PyImport_AddModule(space, f_name) + if f_name is not None: + modname = f_name + w_mod = PyImport_AddModule(space, modname) + state.package_context = None, None - dict_w = {'__file__': space.wrap(f_path)} + if f_path is not None: + dict_w = {'__file__': space.wrap(f_path)} + else: + dict_w = {} convert_method_defs(space, dict_w, methods, None, w_self, modname) for key, w_value in dict_w.items(): space.setattr(w_mod, space.wrap(key), w_value) diff --git a/pypy/module/cpyext/presetup.py b/pypy/module/cpyext/presetup.py --- a/pypy/module/cpyext/presetup.py +++ b/pypy/module/cpyext/presetup.py @@ -42,4 +42,4 @@ patch_distutils() del sys.argv[0] -execfile(sys.argv[0], {'__file__': sys.argv[0]}) +execfile(sys.argv[0], {'__file__': sys.argv[0], '__name__': '__main__'}) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -116,8 +116,8 @@ try: return typedescr_cache[typedef] except KeyError: - if typedef.base is not None: - return _get_typedescr_1(typedef.base) + if typedef.bases: + return _get_typedescr_1(typedef.bases[0]) return typedescr_cache[None] def get_typedescr(typedef): diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -9,7 +9,8 @@ unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, - cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, readbufferproc) + cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, + readbufferproc) from pypy.module.cpyext.pyobject import from_ref from pypy.module.cpyext.pyerrors import PyErr_Occurred from pypy.module.cpyext.state import State @@ -175,6 +176,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_objobjargproc(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 2) + w_key, w_value = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, w_value) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.wrap(res) + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -77,7 +77,7 @@ } void * -PyCObject_Import(char *module_name, char *name) +PyCObject_Import(const char *module_name, const char *name) { PyObject *m, *c; void *r = NULL; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -541,7 +541,7 @@ } PyObject * -PyObject_CallFunction(PyObject *callable, char *format, ...) +PyObject_CallFunction(PyObject *callable, const char *format, ...) { va_list va; PyObject *args; @@ -558,7 +558,7 @@ } PyObject * -PyObject_CallMethod(PyObject *o, char *name, char *format, ...) +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) { va_list va; PyObject *args; diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -21,7 +21,7 @@ } PyObject * -PyErr_NewException(char *name, PyObject *base, PyObject *dict) +PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { char *dot; PyObject *modulename = NULL; @@ -72,7 +72,7 @@ /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -586,10 +586,6 @@ def PyDescr_NewMember(space, type, meth): raise NotImplementedError - at cpython_api([PyTypeObjectPtr, PyMethodDef], PyObject) -def PyDescr_NewMethod(space, type, meth): - raise NotImplementedError - @cpython_api([PyTypeObjectPtr, wrapperbase, rffi.VOIDP], PyObject) def PyDescr_NewWrapper(space, type, wrapper, wrapped): raise NotImplementedError @@ -610,14 +606,6 @@ def PyWrapper_New(space, w_d, w_self): raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyDictProxy_New(space, dict): - """Return a proxy object for a mapping which enforces read-only behavior. - This is normally used to create a proxy to prevent modification of the - dictionary for non-dynamic class types. - """ - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) def PyDict_Merge(space, a, b, override): """Iterate over mapping object b adding key-value pairs to dictionary a. @@ -2293,15 +2281,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeUTF8(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using UTF-8 and return a - Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): """Decode length bytes from a UTF-32 encoded buffer string and return the @@ -2481,31 +2460,6 @@ was raised by the codec.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_DecodeLatin1(space, s, size, errors): - """Create a Unicode object by decoding size bytes of the Latin-1 encoded string - s. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([rffi.CWCHARP, Py_ssize_t, rffi.CCHARP], PyObject) -def PyUnicode_EncodeLatin1(space, s, size, errors): - """Encode the Py_UNICODE buffer of the given size using Latin-1 and return - a Python string object. Return NULL if an exception was raised by the codec. - - This function used an int type for size. This might require - changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([PyObject], PyObject) -def PyUnicode_AsLatin1String(space, unicode): - """Encode a Unicode object using Latin-1 and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, PyObject, rffi.CCHARP], PyObject) def PyUnicode_DecodeCharmap(space, s, size, mapping, errors): """Create a Unicode object by decoding size bytes of the encoded string s using @@ -2564,13 +2518,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyUnicode_AsMBCSString(space, unicode): - """Encode a Unicode object using MBCS and return the result as Python string - object. Error handling is "strict". Return NULL if an exception was raised - by the codec.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Concat(space, left, right): """Concat two strings giving a new Unicode string.""" @@ -2912,16 +2859,3 @@ """Return true if ob is a proxy object. """ raise NotImplementedError - - at cpython_api([PyObject, PyObject], PyObject) -def PyWeakref_NewProxy(space, ob, callback): - """Return a weak reference proxy object for the object ob. This will always - return a new reference, but is not guaranteed to create a new object; an - existing proxy object may be returned. The second parameter, callback, can - be a callable object that receives notification when ob is garbage - collected; it should accept a single parameter, which will be the weak - reference object itself. callback may also be None or NULL. If ob - is not a weakly-referencable object, or if callback is not callable, - None, or NULL, this will return NULL and raise TypeError. - """ - raise NotImplementedError diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -79,7 +79,7 @@ raises(TypeError, mod.isSameFunction, 1) class TestPyCMethodObject(BaseApiTest): - def test_repr(self, space): + def test_repr(self, space, api): """ W_PyCMethodObject has a repr string which describes it as a method and gives its name and the name of its class. @@ -94,7 +94,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = PyDescr_NewMethod(space, space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_str, ml) assert repr(method).startswith( "" + assert space.unwrap(space.repr(w_proxy)).startswith(' self.argcount: + # The extra arguments should actually be the output array, but we + # don't support that yet. + raise OperationError(space.w_TypeError, + space.wrap("invalid number of arguments") + ) + return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj): from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpy import dtype + from numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpy import dtype + from numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,15 +36,16 @@ assert str(d) == "bool" def test_bool_array(self): - from numpy import array + from numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') - assert a[0] is False + assert a[0] is False_ for i in xrange(1, 4): - assert a[i] is True + assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpy import array + from numpypy import array, False_, True_ + a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) @@ -52,38 +53,40 @@ assert isinstance(b[0], (int, long)) a = array([0, 1, 2, 3], dtype=bool) - assert isinstance(a[0], bool) + assert a[0] is False_ b = a.copy() - assert isinstance(b[0], bool) + assert b[0] is False_ def test_zeros_bool(self): - from numpy import zeros + from numpypy import zeros, False_ + a = zeros(10, dtype=bool) for i in range(10): - assert a[i] is False + assert a[i] is False_ def test_ones_bool(self): - from numpy import ones + from numpypy import ones, True_ + a = ones(10, dtype=bool) for i in range(10): - assert a[i] is True + assert a[i] is True_ def test_zeros_long(self): - from numpy import zeros + from numpypy import zeros a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 0 def test_ones_long(self): - from numpy import ones - a = ones(10, dtype=bool) + from numpypy import ones + a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 def test_overflow(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -95,15 +98,16 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpy import array, dtype - types = ('?','b','B','h','H','i','I','l','L','q','Q','f','d') - N = len(types) + from numpypy import array, dtype + types = [ + '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' + ] a = array([True], '?') for t in types: assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -125,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -134,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -143,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -152,12 +156,12 @@ assert b[i] == i * 2 def test_shape(self): - from numpy import dtype + from numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpy import dtype + from numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,19 +3,19 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpy import array, mean + from numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpy import array, average + from numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_constants(self): import math - from numpy import inf, e + from numpypy import inf, e assert type(inf) is float assert inf == float("inf") assert e == math.e - assert type(e) is float \ No newline at end of file + assert type(e) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -155,12 +155,12 @@ class AppTestNumArray(BaseNumpyAppTest): def test_type(self): - from numpy import array + from numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_init(self): - from numpy import zeros + from numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -168,18 +168,26 @@ a[13] = 5.3 assert a[13] == 5.3 + def test_size(self): + from numpypy import array + # XXX fixed on multidim branch + #assert array(3).size == 1 + a = array([1, 2, 3]) + assert a.size == 3 + assert (a + a).size == 3 + def test_empty(self): """ Test that empty() works. """ - from numpy import empty + from numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpy import ones + from numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -188,7 +196,7 @@ assert a[2] == 4 def test_copy(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.copy() for i in xrange(5): @@ -197,7 +205,7 @@ assert b[3] == 3 def test_iterator_init(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a[3] == 3 a = array(1) @@ -205,7 +213,7 @@ assert a.shape == () def test_getitem(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -214,7 +222,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -224,7 +232,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpy import array + from numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -232,7 +240,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -243,7 +251,7 @@ assert a[i] == i def test_setslice_array(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -254,7 +262,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -273,7 +281,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -281,7 +289,7 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. @@ -293,13 +301,13 @@ assert a[0] == 3 def test_len(self): - from numpy import array + from numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -308,7 +316,7 @@ assert c.shape == (3,) def test_add(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -321,7 +329,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -329,20 +337,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpy import array + from numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpy import array + from numpypy import array a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -351,14 +359,14 @@ assert c[i] == 4 def test_subtract(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -366,28 +374,29 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_mul(self): - from numpy import array, dtype - a = array(range(5)) + import numpypy + + a = numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = array(range(5), dtype=bool) + a = numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is dtype(bool) - assert b[0] is False + assert b.dtype is numpypy.dtype(bool) + assert b[0] is numpypy.False_ for i in range(1, 5): - assert b[i] is True + assert b[i] is numpypy.True_ def test_mul_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -395,7 +404,7 @@ def test_div(self): from math import isnan - from numpy import array, dtype, inf + from numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -427,7 +436,7 @@ assert c[2] == -inf def test_div_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -435,14 +444,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -450,7 +459,7 @@ assert b[i] == i ** i def test_pow_other(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -458,15 +467,15 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpy import array - a = array(range(1, 6)) + from numpypy import array + a = array(range(1,6)) b = a % a for i in range(5): assert b[i] == 0 @@ -478,7 +487,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -486,14 +495,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpy import array + from numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -504,7 +513,7 @@ assert a[i] == i def test_neg(self): - from numpy import array + from numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -515,7 +524,7 @@ assert a[i] == -i def test_abs(self): - from numpy import array + from numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -526,7 +535,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -540,7 +549,7 @@ assert c[1] == 4 def test_getslice(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -554,7 +563,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpy import array + from numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -562,7 +571,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -572,7 +581,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -586,13 +595,13 @@ assert d[1] == 12 def test_mean(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -601,33 +610,32 @@ assert a.sum() == 5 def test_prod(self): - from numpy import array + from numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - import sys - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -649,14 +657,14 @@ assert r == 9 def test_argmin(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -665,7 +673,7 @@ assert b.all() == True def test_any(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -674,7 +682,7 @@ assert c.any() == False def test_dot(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.dot(a) == 30.0 @@ -682,14 +690,14 @@ assert a.dot(range(5)) == 30 def test_dot_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -702,7 +710,7 @@ def test_comparison(self): import operator - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -867,7 +875,7 @@ cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): - from numpy import fromstring + from numpypy import fromstring a = fromstring(self.data) for i in range(4): assert a[i] == i + 1 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpy import add, ufunc + from numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpy import add, multiply, sin + from numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,22 +22,22 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpy import add, sin + from numpypy import add, sin - raises(TypeError, add, 1) + raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) raises(TypeError, sin, 1, 2) - raises(TypeError, sin) + raises(ValueError, sin) def test_single_item(self): - from numpy import negative, sign, minimum + from numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpy import array, negative, minimum + from numpypy import array, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpy import array, absolute + from numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpy import array, add + from numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpy import array, divide + from numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -112,7 +112,7 @@ assert c[i] == a[i] / b[i] def test_fabs(self): - from numpy import array, fabs + from numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -121,7 +121,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpy import array, minimum + from numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -130,7 +130,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpy import array, maximum + from numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -143,7 +143,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpy import array, multiply + from numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -152,7 +152,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpy import array, sign, dtype + from numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -171,7 +171,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpy import array, reciprocal + from numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -180,7 +180,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpy import array, subtract + from numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -189,7 +189,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpy import array, floor + from numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -198,7 +198,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpy import array, copysign + from numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -214,7 +214,7 @@ def test_exp(self): import math - from numpy import array, exp + from numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -228,7 +228,7 @@ def test_sin(self): import math - from numpy import array, sin + from numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -241,7 +241,7 @@ def test_cos(self): import math - from numpy import array, cos + from numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -250,7 +250,7 @@ def test_tan(self): import math - from numpy import array, tan + from numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -260,7 +260,7 @@ def test_arcsin(self): import math - from numpy import array, arcsin + from numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -274,7 +274,7 @@ def test_arccos(self): import math - from numpy import array, arccos + from numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -289,7 +289,7 @@ def test_arctan(self): import math - from numpy import array, arctan + from numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -302,7 +302,7 @@ def test_arcsinh(self): import math - from numpy import arcsinh, inf + from numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -310,7 +310,7 @@ def test_arctanh(self): import math - from numpy import arctanh + from numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -320,13 +320,13 @@ assert arctanh(v) == math.copysign(float("inf"), v) def test_reduce_errors(self): - from numpy import sin, add + from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpy import add, maximum + from numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -335,7 +335,7 @@ def test_comparisons(self): import operator - from numpy import equal, not_equal, less, less_equal, greater, greater_equal + from numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), @@ -357,4 +357,4 @@ (3.5, 3), (3, 3.5), ]: - assert ufunc(a, b) is func(a, b) + assert ufunc(a, b) == func(a, b) diff --git a/pypy/module/posix/__init__.py b/pypy/module/posix/__init__.py --- a/pypy/module/posix/__init__.py +++ b/pypy/module/posix/__init__.py @@ -137,6 +137,8 @@ interpleveldefs['execve'] = 'interp_posix.execve' if hasattr(posix, 'spawnv'): interpleveldefs['spawnv'] = 'interp_posix.spawnv' + if hasattr(posix, 'spawnve'): + interpleveldefs['spawnve'] = 'interp_posix.spawnve' if hasattr(os, 'uname'): interpleveldefs['uname'] = 'interp_posix.uname' if hasattr(os, 'sysconf'): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -760,6 +760,14 @@ except OSError, e: raise wrap_oserror(space, e) +def _env2interp(space, w_env): + env = {} + w_keys = space.call_method(w_env, 'keys') + for w_key in space.unpackiterable(w_keys): + w_value = space.getitem(w_env, w_key) + env[space.str_w(w_key)] = space.str_w(w_value) + return env + def execve(space, w_command, w_args, w_env): """ execve(path, args, env) @@ -771,11 +779,7 @@ """ command = fsencode_w(space, w_command) args = [fsencode_w(space, w_arg) for w_arg in space.unpackiterable(w_args)] - env = {} - w_keys = space.call_method(w_env, 'keys') - for w_key in space.unpackiterable(w_keys): - w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env = _env2interp(space, w_env) try: os.execve(command, args, env) except OSError, e: @@ -790,6 +794,16 @@ raise wrap_oserror(space, e) return space.wrap(ret) + at unwrap_spec(mode=int, path=str) +def spawnve(space, mode, path, w_args, w_env): + args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + env = _env2interp(space, w_env) + try: + ret = os.spawnve(mode, path, args, env) + except OSError, e: + raise wrap_oserror(space, e) + return space.wrap(ret) + def utime(space, w_path, w_tuple): """ utime(path, (atime, mtime)) utime(path, None) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -471,6 +471,17 @@ ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 + if hasattr(__import__(os.name), "spawnve"): + def test_spawnve(self): + os = self.posix + import sys + print self.python + ret = os.spawnve(os.P_WAIT, self.python, + ['python', '-c', + "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], + {'FOOBAR': '42'}) + assert ret == 42 + def test_popen(self): os = self.posix for i in range(5): diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -4,9 +4,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.error import OperationError from pypy.objspace.descroperation import object_setattr +from pypy.rlib import rgc +from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import rffi, lltype -from pypy.rlib.unroll import unrolling_iterable - from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -118,6 +118,19 @@ locals()[name] = rffi_platform.ConstantInteger(name) for name in xml_model_list: locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + for name in xml_model_list: + locals()[name] = rffi_platform.ConstantInteger(name) + XML_Parser_SIZE = rffi_platform.SizeOf("XML_Parser") for k, v in rffi_platform.configure(CConfigure).items(): globals()[k] = v @@ -793,7 +806,10 @@ rffi.cast(rffi.CHAR, namespace_separator)) else: xmlparser = XML_ParserCreate(encoding) - + # Currently this is just the size of the pointer and some estimated bytes. + # The struct isn't actually defined in expat.h - it is in xmlparse.c + # XXX: find a good estimate of the XML_ParserStruct + rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/module/pypyjit/test_pypy_c/model.py b/pypy/module/pypyjit/test_pypy_c/model.py --- a/pypy/module/pypyjit/test_pypy_c/model.py +++ b/pypy/module/pypyjit/test_pypy_c/model.py @@ -285,6 +285,11 @@ guard_false(ticker_cond1, descr=...) """ src = src.replace('--EXC-TICK--', exc_ticker_check) + # + # ISINF is done as a macro; fix it here + r = re.compile('(\w+) = --ISINF--[(](\w+)[)]') + src = r.sub(r'\2\B999 = float_add(\2, ...)\n\1 = float_eq(\2\B999, \2)', + src) return src @classmethod diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -69,4 +69,51 @@ i9 = int_add(i5, 1) --TICK-- jump(..., descr=...) + """) + + def test_non_virtual_dict(self): + def main(n): + i = 0 + while i < n: + d = {str(i): i} + i += d[str(i)] - i + 1 + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i8 = int_lt(i5, i7) + guard_true(i8, descr=...) + guard_not_invalidated(descr=...) + p10 = call(ConstClass(ll_int_str), i5, descr=) + guard_no_exception(descr=...) + i12 = call(ConstClass(ll_strhash), p10, descr=) + p13 = new(descr=...) + p15 = new_array(8, descr=) + setfield_gc(p13, p15, descr=) + i17 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + setfield_gc(p13, 16, descr=) + guard_no_exception(descr=...) + p20 = new_with_vtable(ConstClass(W_IntObject)) + call(ConstClass(_ll_dict_setitem_lookup_done_trampoline), p13, p10, p20, i12, i17, descr=) + setfield_gc(p20, i5, descr=) + guard_no_exception(descr=...) + i23 = call(ConstClass(ll_dict_lookup_trampoline), p13, p10, i12, descr=) + guard_no_exception(descr=...) + i26 = int_and(i23, .*) + i27 = int_is_true(i26) + guard_false(i27, descr=...) + p28 = getfield_gc(p13, descr=) + p29 = getinteriorfield_gc(p28, i23, descr=>) + guard_nonnull_class(p29, ConstClass(W_IntObject), descr=...) + i31 = getfield_gc_pure(p29, descr=) + i32 = int_sub_ovf(i31, i5) + guard_no_overflow(descr=...) + i34 = int_add_ovf(i32, 1) + guard_no_overflow(descr=...) + i35 = int_add_ovf(i5, i34) + guard_no_overflow(descr=...) + --TICK-- + jump(p0, p1, p2, p3, p4, i35, p13, i7, descr=) """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -1,3 +1,4 @@ +import py from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC @@ -49,10 +50,7 @@ guard_true(i2, descr=...) guard_not_invalidated(descr=...) f1 = cast_int_to_float(i0) - i3 = float_eq(f1, inf) - i4 = float_eq(f1, -inf) - i5 = int_or(i3, i4) - i6 = int_is_true(i5) + i6 = --ISINF--(f1) guard_false(i6, descr=...) f2 = call(ConstClass(sin), f1, descr=) f3 = call(ConstClass(cos), f1, descr=) @@ -64,6 +62,7 @@ """) def test_fmod(self): + py.test.skip("test relies on the old and broken ll_math_fmod") def main(n): import math @@ -90,4 +89,4 @@ i6 = int_sub(i0, 1) --TICK-- jump(..., descr=) - """) \ No newline at end of file + """) diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,11 +214,15 @@ def test_poll(self): import select - class A(object): - def __int__(self): - return 3 - - select.poll().poll(A()) # assert did not crash + readend, writeend = self.getpair() + try: + class A(object): + def __int__(self): + return readend.fileno() + select.poll().poll(A()) # assert did not crash + finally: + readend.close() + writeend.close() class AppTestSelectWithPipes(_AppTestSelect): "Use a pipe to get pairs of file descriptors" diff --git a/pypy/module/signal/__init__.py b/pypy/module/signal/__init__.py --- a/pypy/module/signal/__init__.py +++ b/pypy/module/signal/__init__.py @@ -20,7 +20,7 @@ interpleveldefs['pause'] = 'interp_signal.pause' interpleveldefs['siginterrupt'] = 'interp_signal.siginterrupt' - if hasattr(cpy_signal, 'setitimer'): + if os.name == 'posix': interpleveldefs['setitimer'] = 'interp_signal.setitimer' interpleveldefs['getitimer'] = 'interp_signal.getitimer' for name in ['ITIMER_REAL', 'ITIMER_VIRTUAL', 'ITIMER_PROF']: diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -1,4 +1,4 @@ -import os, py +import os, py, sys import signal as cpy_signal from pypy.conftest import gettestobjspace @@ -264,6 +264,10 @@ class AppTestItimer: spaceconfig = dict(usemodules=['signal']) + def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") + def test_itimer_real(self): import signal diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -567,6 +567,11 @@ import time import thread + # XXX workaround for now: to prevent deadlocks, call + # sys._current_frames() once before starting threads. + # This is an issue in non-translated versions only. + sys._current_frames() + thread_id = thread.get_ident() def other_thread(): print "thread started" diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/module/test_lib_pypy/test_pwd.py b/pypy/module/test_lib_pypy/test_pwd.py --- a/pypy/module/test_lib_pypy/test_pwd.py +++ b/pypy/module/test_lib_pypy/test_pwd.py @@ -1,7 +1,10 @@ +import py, sys from pypy.conftest import gettestobjspace class AppTestPwd: def setup_class(cls): + if sys.platform == 'win32': + py.test.skip("Unix only") cls.space = gettestobjspace(usemodules=('_ffi', '_rawffi')) cls.space.appexec((), "(): import pwd") diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -2,10 +2,11 @@ from pypy.rpython.lltypesystem import rffi, lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo import py -from pypy.rlib import jit +from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.tool import rffi_platform from pypy.tool import autopath class error(Exception): @@ -49,7 +50,7 @@ TLOCKP = rffi.COpaquePtr('struct RPyOpaque_ThreadLock', compilation_info=eci) - +TLOCKP_SIZE = rffi_platform.sizeof('struct RPyOpaque_ThreadLock', eci) c_thread_lock_init = llexternal('RPyThreadLockInit', [TLOCKP], rffi.INT, threadsafe=False) # may add in a global list c_thread_lock_dealloc_NOAUTO = llexternal('RPyOpaqueDealloc_ThreadLock', @@ -164,6 +165,9 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -11,7 +11,7 @@ from pypy.interpreter.baseobjspace import ObjSpace from pypy.interpreter.error import OperationError from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck from pypy.objspace.flow import model @@ -144,7 +144,7 @@ return ovfcheck(x % y) def lshift_ovf(x, y): - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) # slicing: operator.{get,set,del}slice() don't support b=None or c=None def do_getslice(a, b, c): diff --git a/pypy/objspace/std/boolobject.py b/pypy/objspace/std/boolobject.py --- a/pypy/objspace/std/boolobject.py +++ b/pypy/objspace/std/boolobject.py @@ -27,11 +27,7 @@ def uint_w(w_self, space): intval = int(w_self.boolval) - if intval < 0: - raise OperationError(space.w_ValueError, - space.wrap("cannot convert negative integer to unsigned")) - else: - return r_uint(intval) + return r_uint(intval) def bigint_w(w_self, space): return rbigint.fromint(int(w_self.boolval)) diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -282,23 +282,12 @@ def str__Bytearray(space, w_bytearray): return space.wrap(''.join(w_bytearray.data)) -def _convert_idx_params(space, w_self, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) - length = len(w_self.data) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 - return start, stop, length - def str_count__Bytearray_Int_ANY_ANY(space, w_bytearray, w_char, w_start, w_stop): char = w_char.intval - start, stop, length = _convert_idx_params(space, w_bytearray, w_start, w_stop) + bytearray = w_bytearray.data + length = len(bytearray) + start, stop = slicetype.unwrap_start_stop( + space, length, w_start, w_stop, False) count = 0 for i in range(start, min(stop, length)): c = w_bytearray.data[i] diff --git a/pypy/objspace/std/bytearraytype.py b/pypy/objspace/std/bytearraytype.py --- a/pypy/objspace/std/bytearraytype.py +++ b/pypy/objspace/std/bytearraytype.py @@ -122,10 +122,11 @@ return -1 def descr_fromhex(space, w_type, w_hexstring): - "bytearray.fromhex(string) -> bytearray\n\nCreate a bytearray object " - "from a string of hexadecimal numbers.\nSpaces between two numbers are " - "accepted.\nExample: bytearray.fromhex('B9 01EF') -> " - "bytearray(b'\\xb9\\x01\\xef')." + "bytearray.fromhex(string) -> bytearray\n" + "\n" + "Create a bytearray object from a string of hexadecimal numbers.\n" + "Spaces between two numbers are accepted.\n" + "Example: bytearray.fromhex('B9 01EF') -> bytearray(b'\\xb9\\x01\\xef')." hexstring = space.str_w(w_hexstring) hexstring = hexstring.lower() data = [] diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -546,6 +546,12 @@ # Try to return int. return space.newtuple([space.int(w_num), space.int(w_den)]) +def float_is_integer__Float(space, w_float): + v = w_float.floatval + if not rfloat.isfinite(v): + return space.w_False + return space.wrap(math.floor(v) == v) + from pypy.objspace.std import floattype register_all(vars(), floattype) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -12,6 +12,7 @@ float_as_integer_ratio = SMM("as_integer_ratio", 1) +float_is_integer = SMM("is_integer", 1) float_hex = SMM("hex", 1) def descr_conjugate(space, w_float): diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.register_all import register_all from pypy.rlib import jit -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift, LONG_BIT, r_uint +from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, r_uint from pypy.rlib.rbigint import rbigint """ @@ -16,7 +16,10 @@ something CPython does not do anymore. """ -class W_IntObject(W_Object): +class W_AbstractIntObject(W_Object): + __slots__ = () + +class W_IntObject(W_AbstractIntObject): __slots__ = 'intval' _immutable_fields_ = ['intval'] @@ -245,7 +248,7 @@ b = w_int2.intval if r_uint(b) < LONG_BIT: # 0 <= b < LONG_BIT try: - c = ovfcheck_lshift(a, b) + c = ovfcheck(a << b) except OverflowError: raise FailedToImplementArgs(space.w_OverflowError, space.wrap("integer left shift")) diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -4,7 +4,10 @@ from pypy.objspace.std.register_all import register_all -class W_AbstractSeqIterObject(W_Object): +class W_AbstractIterObject(W_Object): + __slots__ = () + +class W_AbstractSeqIterObject(W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_seq, index=0): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -11,7 +11,10 @@ from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature -class W_ListObject(W_Object): +class W_AbstractListObject(W_Object): + __slots__ = () + +class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef def __init__(w_self, wrappeditems): @@ -54,7 +57,12 @@ def _init_from_iterable(space, items_w, w_iterable): # in its own function to make the JIT look into init__List - # XXX this would need a JIT driver somehow? + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into(items_w) + return + # /xxx w_iterator = space.iter(w_iterable) while True: try: @@ -414,8 +422,8 @@ # needs to be safe against eq_w() mutating the w_list behind our back items = w_list.wrappeditems size = len(items) - i = slicetype.adapt_bound(space, size, w_start) - stop = slicetype.adapt_bound(space, size, w_stop) + i, stop = slicetype.unwrap_start_stop( + space, size, w_start, w_stop, True) while i < stop and i < len(items): if space.eq_w(items[i], w_any): return space.wrap(i) diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -8,7 +8,10 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_LongObject(W_Object): +class W_AbstractLongObject(W_Object): + __slots__ = () + +class W_LongObject(W_AbstractLongObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -69,19 +69,11 @@ from pypy.objspace.std import floatobject from pypy.objspace.std import complexobject from pypy.objspace.std import setobject - from pypy.objspace.std import smallintobject - from pypy.objspace.std import smalllongobject from pypy.objspace.std import tupleobject - from pypy.objspace.std import smalltupleobject from pypy.objspace.std import listobject from pypy.objspace.std import dictmultiobject from pypy.objspace.std import stringobject from pypy.objspace.std import bytearrayobject - from pypy.objspace.std import ropeobject - from pypy.objspace.std import ropeunicodeobject - from pypy.objspace.std import strsliceobject - from pypy.objspace.std import strjoinobject - from pypy.objspace.std import strbufobject from pypy.objspace.std import typeobject from pypy.objspace.std import sliceobject from pypy.objspace.std import longobject @@ -89,7 +81,6 @@ from pypy.objspace.std import iterobject from pypy.objspace.std import unicodeobject from pypy.objspace.std import dictproxyobject - from pypy.objspace.std import rangeobject from pypy.objspace.std import proxyobject from pypy.objspace.std import fake import pypy.objspace.std.default # register a few catch-all multimethods @@ -141,7 +132,12 @@ for option, value in config.objspace.std: if option.startswith("with") and option in option_to_typename: for classname in option_to_typename[option]: - implcls = eval(classname) + modname = classname[:classname.index('.')] + classname = classname[classname.index('.')+1:] + d = {} + exec "from pypy.objspace.std.%s import %s" % ( + modname, classname) in d + implcls = d[classname] if value: self.typeorder[implcls] = [] else: @@ -167,6 +163,7 @@ # XXX build these lists a bit more automatically later if config.objspace.std.withsmallint: + from pypy.objspace.std import smallintobject self.typeorder[boolobject.W_BoolObject] += [ (smallintobject.W_SmallIntObject, boolobject.delegate_Bool2SmallInt), ] @@ -189,6 +186,7 @@ (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: + from pypy.objspace.std import smalllongobject self.typeorder[boolobject.W_BoolObject] += [ (smalllongobject.W_SmallLongObject, smalllongobject.delegate_Bool2SmallLong), ] @@ -220,7 +218,9 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] else: + from pypy.objspace.std import ropeobject if config.objspace.std.withropeunicode: + from pypy.objspace.std import ropeunicodeobject self.typeorder[ropeobject.W_RopeObject] += [ (ropeunicodeobject.W_RopeUnicodeObject, ropeunicodeobject.delegate_Rope2RopeUnicode), @@ -230,6 +230,7 @@ (unicodeobject.W_UnicodeObject, unicodeobject.delegate_String2Unicode), ] if config.objspace.std.withstrslice: + from pypy.objspace.std import strsliceobject self.typeorder[strsliceobject.W_StringSliceObject] += [ (stringobject.W_StringObject, strsliceobject.delegate_slice2str), @@ -237,6 +238,7 @@ strsliceobject.delegate_slice2unicode), ] if config.objspace.std.withstrjoin: + from pypy.objspace.std import strjoinobject self.typeorder[strjoinobject.W_StringJoinObject] += [ (stringobject.W_StringObject, strjoinobject.delegate_join2str), @@ -244,6 +246,7 @@ strjoinobject.delegate_join2unicode) ] elif config.objspace.std.withstrbuf: + from pypy.objspace.std import strbufobject self.typeorder[strbufobject.W_StringBufferObject] += [ (stringobject.W_StringObject, strbufobject.delegate_buf2str), @@ -251,11 +254,13 @@ strbufobject.delegate_buf2unicode) ] if config.objspace.std.withrangelist: + from pypy.objspace.std import rangeobject self.typeorder[rangeobject.W_RangeListObject] += [ (listobject.W_ListObject, rangeobject.delegate_range2list), ] if config.objspace.std.withsmalltuple: + from pypy.objspace.std import smalltupleobject self.typeorder[smalltupleobject.W_SmallTupleObject] += [ (tupleobject.W_TupleObject, smalltupleobject.delegate_SmallTuple2Tuple)] diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -83,12 +83,7 @@ if self.config.objspace.std.withtproxy: transparent.setup(self) - interplevel_classes = {} - for type, classes in self.model.typeorder.iteritems(): - if len(classes) >= 3: # XXX what does this 3 mean??! - # W_Root, AnyXxx and actual object - interplevel_classes[self.gettypefor(type)] = classes[0][0] - self._interplevel_classes = interplevel_classes + self.setup_isinstance_cache() def get_builtin_types(self): return self.builtin_types @@ -414,7 +409,7 @@ else: if unroll: return make_sure_not_resized(ObjSpace.unpackiterable_unroll( - self, w_obj, expected_length)[:]) + self, w_obj, expected_length)) else: return make_sure_not_resized(ObjSpace.unpackiterable( self, w_obj, expected_length)[:]) @@ -422,7 +417,8 @@ raise self._wrap_expected_length(expected_length, len(t)) return make_sure_not_resized(t) - def fixedview_unroll(self, w_obj, expected_length=-1): + def fixedview_unroll(self, w_obj, expected_length): + assert expected_length >= 0 return self.fixedview(w_obj, expected_length, unroll=True) def listview(self, w_obj, expected_length=-1): @@ -591,6 +587,63 @@ def isinstance_w(space, w_inst, w_type): return space._type_isinstance(w_inst, w_type) + def setup_isinstance_cache(self): + # This assumes that all classes in the stdobjspace implementing a + # particular app-level type are distinguished by a common base class. + # Alternatively, you can turn off the cache on specific classes, + # like e.g. proxyobject. It is just a bit less performant but + # should not have any bad effect. + from pypy.objspace.std.model import W_Root, W_Object + # + # Build a dict {class: w_typeobject-or-None}. The value None is used + # on classes that are known to be abstract base classes. + class2type = {} + class2type[W_Root] = None + class2type[W_Object] = None + for cls in self.model.typeorder.keys(): + if getattr(cls, 'typedef', None) is None: + continue + if getattr(cls, 'ignore_for_isinstance_cache', False): + continue + w_type = self.gettypefor(cls) + w_oldtype = class2type.setdefault(cls, w_type) + assert w_oldtype is w_type + # + # Build the real dict {w_typeobject: class-or-base-class}. For every + # w_typeobject we look for the most precise common base class of all + # the registered classes. If no such class is found, we will find + # W_Object or W_Root, and complain. Then you must either add an + # artificial common base class, or disable caching on one of the + # two classes with ignore_for_isinstance_cache. + def getmro(cls): + while True: + yield cls + if cls is W_Root: + break + cls = cls.__bases__[0] + self._interplevel_classes = {} + for cls, w_type in class2type.items(): + if w_type is None: + continue + if w_type not in self._interplevel_classes: + self._interplevel_classes[w_type] = cls + else: + cls1 = self._interplevel_classes[w_type] + mro1 = list(getmro(cls1)) + for base in getmro(cls): + if base in mro1: + break + if base in class2type and class2type[base] is not w_type: + if class2type.get(base) is None: + msg = ("cannot find a common interp-level base class" + " between %r and %r" % (cls1, cls)) + else: + msg = ("%s is a base class of both %r and %r" % ( + class2type[base], cls1, cls)) + raise AssertionError("%r: %s" % (w_type, msg)) + class2type[base] = w_type + self._interplevel_classes[w_type] = base + @specialize.memo() def _get_interplevel_cls(self, w_type): if not hasattr(self, "_interplevel_classes"): diff --git a/pypy/objspace/std/proxyobject.py b/pypy/objspace/std/proxyobject.py --- a/pypy/objspace/std/proxyobject.py +++ b/pypy/objspace/std/proxyobject.py @@ -16,6 +16,8 @@ def transparent_class(name, BaseCls): class W_Transparent(BaseCls): + ignore_for_isinstance_cache = True + def __init__(self, space, w_type, w_controller): self.w_type = w_type self.w_controller = w_controller diff --git a/pypy/objspace/std/rangeobject.py b/pypy/objspace/std/rangeobject.py --- a/pypy/objspace/std/rangeobject.py +++ b/pypy/objspace/std/rangeobject.py @@ -5,7 +5,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_AbstractListObject, W_ListObject from pypy.objspace.std import listtype, iterobject, slicetype from pypy.interpreter import gateway, baseobjspace @@ -21,7 +21,7 @@ return (start - stop - step - 1)/-step -class W_RangeListObject(W_Object): +class W_RangeListObject(W_AbstractListObject): typedef = listtype.list_typedef def __init__(w_self, start, step, length): diff --git a/pypy/objspace/std/ropeobject.py b/pypy/objspace/std/ropeobject.py --- a/pypy/objspace/std/ropeobject.py +++ b/pypy/objspace/std/ropeobject.py @@ -6,7 +6,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import stringobject, slicetype, iterobject from pypy.objspace.std.listobject import W_ListObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.objspace.std.tupleobject import W_TupleObject @@ -19,7 +19,7 @@ str_format__String as str_format__Rope, _upper, _lower, DEFAULT_NOOP_TABLE) -class W_RopeObject(W_Object): +class W_RopeObject(stringobject.W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_node'] @@ -59,7 +59,7 @@ registerimplementation(W_RopeObject) -class W_RopeIterObject(W_Object): +class W_RopeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): @@ -357,16 +357,8 @@ self = w_self._node sub = w_sub._node - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, self.length(), w_start) - end = slicetype.adapt_bound(space, self.length(), w_end) - else: - start = slicetype.adapt_lower_bound(space, self.length(), w_start) - end = slicetype.adapt_lower_bound(space, self.length(), w_end) + start, end = slicetype.unwrap_start_stop( + space, self.length(), w_start, w_end, upper_bound) return (self, sub, start, end) _convert_idx_params._annspecialcase_ = 'specialize:arg(5)' diff --git a/pypy/objspace/std/ropeunicodeobject.py b/pypy/objspace/std/ropeunicodeobject.py --- a/pypy/objspace/std/ropeunicodeobject.py +++ b/pypy/objspace/std/ropeunicodeobject.py @@ -9,7 +9,7 @@ from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib import rope from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice -from pypy.objspace.std import slicetype +from pypy.objspace.std import unicodeobject, slicetype, iterobject from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck from pypy.module.unicodedata import unicodedb @@ -76,7 +76,7 @@ return encode_object(space, w_unistr, encoding, errors) -class W_RopeUnicodeObject(W_Object): +class W_RopeUnicodeObject(unicodeobject.W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_node'] @@ -117,7 +117,7 @@ return rope.LiteralUnicodeNode(space.unicode_w(w_str)) -class W_RopeUnicodeIterObject(W_Object): +class W_RopeUnicodeIterObject(iterobject.W_AbstractIterObject): from pypy.objspace.std.itertype import iter_typedef as typedef def __init__(w_self, w_rope, index=0): diff --git a/pypy/objspace/std/sliceobject.py b/pypy/objspace/std/sliceobject.py --- a/pypy/objspace/std/sliceobject.py +++ b/pypy/objspace/std/sliceobject.py @@ -4,7 +4,7 @@ from pypy.interpreter import gateway from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all -from pypy.objspace.std.slicetype import eval_slice_index +from pypy.objspace.std.slicetype import _eval_slice_index class W_SliceObject(W_Object): from pypy.objspace.std.slicetype import slice_typedef as typedef @@ -25,7 +25,7 @@ if space.is_w(w_slice.w_step, space.w_None): step = 1 else: - step = eval_slice_index(space, w_slice.w_step) + step = _eval_slice_index(space, w_slice.w_step) if step == 0: raise OperationError(space.w_ValueError, space.wrap("slice step cannot be zero")) @@ -35,7 +35,7 @@ else: start = 0 else: - start = eval_slice_index(space, w_slice.w_start) + start = _eval_slice_index(space, w_slice.w_start) if start < 0: start += length if start < 0: @@ -54,7 +54,7 @@ else: stop = length else: - stop = eval_slice_index(space, w_slice.w_stop) + stop = _eval_slice_index(space, w_slice.w_stop) if stop < 0: stop += length if stop < 0: diff --git a/pypy/objspace/std/slicetype.py b/pypy/objspace/std/slicetype.py --- a/pypy/objspace/std/slicetype.py +++ b/pypy/objspace/std/slicetype.py @@ -3,6 +3,7 @@ from pypy.objspace.std.stdtypedef import StdTypeDef, SMM from pypy.objspace.std.register_all import register_all from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize # indices multimehtod slice_indices = SMM('indices', 2, @@ -14,7 +15,9 @@ ' normal slices.') # utility functions -def eval_slice_index(space, w_int): +def _eval_slice_index(space, w_int): + # note that it is the *callers* responsibility to check for w_None + # otherwise you can get funny error messages try: return space.getindex_w(w_int, None) # clamp if long integer too large except OperationError, err: @@ -25,7 +28,7 @@ "None or have an __index__ method")) def adapt_lower_bound(space, size, w_index): - index = eval_slice_index(space, w_index) + index = _eval_slice_index(space, w_index) if index < 0: index = index + size if index < 0: @@ -34,16 +37,29 @@ return index def adapt_bound(space, size, w_index): - index = eval_slice_index(space, w_index) - if index < 0: - index = index + size - if index < 0: - index = 0 + index = adapt_lower_bound(space, size, w_index) if index > size: index = size assert index >= 0 return index + at specialize.arg(4) +def unwrap_start_stop(space, size, w_start, w_end, upper_bound=False): + if space.is_w(w_start, space.w_None): + start = 0 + elif upper_bound: + start = adapt_bound(space, size, w_start) + else: + start = adapt_lower_bound(space, size, w_start) + + if space.is_w(w_end, space.w_None): + end = size + elif upper_bound: + end = adapt_bound(space, size, w_end) + else: + end = adapt_lower_bound(space, size, w_end) + return start, end + register_all(vars(), globals()) # ____________________________________________________________ diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -6,7 +6,7 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.noneobject import W_NoneObject -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_AbstractIntObject, W_IntObject from pypy.interpreter.error import OperationError from pypy.rlib.objectmodel import UnboxedValue from pypy.rlib.rbigint import rbigint @@ -14,7 +14,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.objspace.std.inttype import wrapint -class W_SmallIntObject(W_Object, UnboxedValue): +class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' from pypy.objspace.std.inttype import int_typedef as typedef diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import W_LongObject +from pypy.objspace.std.longobject import W_AbstractLongObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_Object): +class W_SmallLongObject(W_AbstractLongObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] diff --git a/pypy/objspace/std/smalltupleobject.py b/pypy/objspace/std/smalltupleobject.py --- a/pypy/objspace/std/smalltupleobject.py +++ b/pypy/objspace/std/smalltupleobject.py @@ -9,9 +9,9 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.unroll import unrolling_iterable -from pypy.objspace.std.tupleobject import W_TupleObject +from pypy.objspace.std.tupleobject import W_AbstractTupleObject, W_TupleObject -class W_SmallTupleObject(W_Object): +class W_SmallTupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef def tolist(self): @@ -68,10 +68,10 @@ raise IndexError def eq(self, space, w_other): - if self.length() != w_other.length(): + if n != w_other.length(): return space.w_False for i in iter_n: - item1 = self.getitem(i) + item1 = getattr(self,'w_value%s' % i) item2 = w_other.getitem(i) if not space.eq_w(item1, item2): return space.w_False @@ -80,9 +80,9 @@ def hash(self, space): mult = 1000003 x = 0x345678 - z = self.length() + z = n for i in iter_n: - w_item = self.getitem(i) + w_item = getattr(self, 'w_value%s' % i) y = space.int_w(space.hash(w_item)) x = (x ^ y) * mult z -= 1 diff --git a/pypy/objspace/std/stdtypedef.py b/pypy/objspace/std/stdtypedef.py --- a/pypy/objspace/std/stdtypedef.py +++ b/pypy/objspace/std/stdtypedef.py @@ -32,11 +32,14 @@ from pypy.objspace.std.objecttype import object_typedef if b is object_typedef: return True - while a is not b: - if a is None: - return False - a = a.base - return True + if a is None: + return False + if a is b: + return True + for a1 in a.bases: + if issubtypedef(a1, b): + return True + return False std_dict_descr = GetSetProperty(descr_get_dict, descr_set_dict, descr_del_dict, doc="dictionary for instance variables (if defined)") @@ -75,8 +78,8 @@ if typedef is object_typedef: bases_w = [] else: - base = typedef.base or object_typedef - bases_w = [space.gettypeobject(base)] + bases = typedef.bases or [object_typedef] + bases_w = [space.gettypeobject(base) for base in bases] # wrap everything dict_w = {} diff --git a/pypy/objspace/std/strbufobject.py b/pypy/objspace/std/strbufobject.py --- a/pypy/objspace/std/strbufobject.py +++ b/pypy/objspace/std/strbufobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.rlib.rstring import StringBuilder from pypy.interpreter.buffer import Buffer -class W_StringBufferObject(W_Object): +class W_StringBufferObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef w_str = None diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -4,7 +4,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter import gateway from pypy.rlib.rarithmetic import ovfcheck -from pypy.rlib.objectmodel import we_are_translated, compute_hash +from pypy.rlib.objectmodel import we_are_translated, compute_hash, specialize from pypy.objspace.std.inttype import wrapint from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype, newformat @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format -class W_StringObject(W_Object): +class W_AbstractStringObject(W_Object): + __slots__ = () + +class W_StringObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef _immutable_fields_ = ['_value'] @@ -47,6 +50,7 @@ W_StringObject.PREBUILT = [W_StringObject(chr(i)) for i in range(256)] del i + at specialize.arg(2) def _is_generic(space, w_self, fun): v = w_self._value if len(v) == 0: @@ -56,14 +60,13 @@ return space.newbool(fun(c)) else: return _is_generic_loop(space, v, fun) -_is_generic._annspecialcase_ = "specialize:arg(2)" + at specialize.arg(2) def _is_generic_loop(space, v, fun): for idx in range(len(v)): if not fun(v[idx]): return space.w_False return space.w_True -_is_generic_loop._annspecialcase_ = "specialize:arg(2)" def _upper(ch): if ch.islower(): @@ -420,22 +423,14 @@ return space.wrap(u_self) -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value + lenself = len(self) - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, lenself, w_start, w_end, upper_bound=upper_bound) + return (self, start, end) def contains__String_String(space, w_self, w_sub): self = w_self._value @@ -443,13 +438,13 @@ return space.newbool(self.find(sub) >= 0) def str_find__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) return space.wrap(res) def str_rfind__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) return space.wrap(res) def str_partition__String_String(space, w_self, w_sub): @@ -483,8 +478,8 @@ def str_index__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.find(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.find(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.index")) @@ -493,8 +488,8 @@ def str_rindex__String_String_ANY_ANY(space, w_self, w_sub, w_start, w_end): - (self, sub, start, end) = _convert_idx_params(space, w_self, w_sub, w_start, w_end) - res = self.rfind(sub, start, end) + (self, start, end) = _convert_idx_params(space, w_self, w_start, w_end) + res = self.rfind(w_sub._value, start, end) if res < 0: raise OperationError(space.w_ValueError, space.wrap("substring not found in string.rindex")) @@ -636,20 +631,17 @@ return wrapstr(space, u_centered) def str_count__String_String_ANY_ANY(space, w_self, w_arg, w_start, w_end): - u_self, u_arg, u_start, u_end = _convert_idx_params(space, w_self, w_arg, - w_start, w_end) - return wrapint(space, u_self.count(u_arg, u_start, u_end)) + u_self, u_start, u_end = _convert_idx_params(space, w_self, w_start, w_end) + return wrapint(space, u_self.count(w_arg._value, u_start, u_end)) def str_endswith__String_String_ANY_ANY(space, w_self, w_suffix, w_start, w_end): - (u_self, suffix, start, end) = _convert_idx_params(space, w_self, - w_suffix, w_start, - w_end, True) - return space.newbool(stringendswith(u_self, suffix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringendswith(u_self, w_suffix._value, start, end)) def str_endswith__String_Tuple_ANY_ANY(space, w_self, w_suffixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, - space.wrap(''), w_start, - w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) for w_suffix in space.fixedview(w_suffixes): if space.isinstance_w(w_suffix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) @@ -661,14 +653,13 @@ return space.w_False def str_startswith__String_String_ANY_ANY(space, w_self, w_prefix, w_start, w_end): - (u_self, prefix, start, end) = _convert_idx_params(space, w_self, - w_prefix, w_start, - w_end, True) - return space.newbool(stringstartswith(u_self, prefix, start, end)) + (u_self, start, end) = _convert_idx_params(space, w_self, w_start, + w_end, True) + return space.newbool(stringstartswith(u_self, w_prefix._value, start, end)) def str_startswith__String_Tuple_ANY_ANY(space, w_self, w_prefixes, w_start, w_end): - (u_self, _, start, end) = _convert_idx_params(space, w_self, space.wrap(''), - w_start, w_end, True) + (u_self, start, end) = _convert_idx_params(space, w_self, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): if space.isinstance_w(w_prefix, space.w_unicode): w_u = space.call_function(space.w_unicode, w_self) diff --git a/pypy/objspace/std/strjoinobject.py b/pypy/objspace/std/strjoinobject.py --- a/pypy/objspace/std/strjoinobject.py +++ b/pypy/objspace/std/strjoinobject.py @@ -1,11 +1,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.stringtype import wrapstr -class W_StringJoinObject(W_Object): +class W_StringJoinObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, joined_strs, until=-1): diff --git a/pypy/objspace/std/strsliceobject.py b/pypy/objspace/std/strsliceobject.py --- a/pypy/objspace/std/strsliceobject.py +++ b/pypy/objspace/std/strsliceobject.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all +from pypy.objspace.std.stringobject import W_AbstractStringObject from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.unicodeobject import delegate_String2Unicode from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice @@ -12,7 +13,7 @@ stringendswith, stringstartswith -class W_StringSliceObject(W_Object): +class W_StringSliceObject(W_AbstractStringObject): from pypy.objspace.std.stringtype import str_typedef as typedef def __init__(w_self, str, start, stop): @@ -60,8 +61,8 @@ def _convert_idx_params(space, w_self, w_sub, w_start, w_end): length = w_self.stop - w_self.start sub = w_sub._value - start = slicetype.adapt_bound(space, length, w_start) - end = slicetype.adapt_bound(space, length, w_end) + start, end = slicetype.unwrap_start_stop( + space, length, w_start, w_end, True) assert start >= 0 assert end >= 0 diff --git a/pypy/objspace/std/test/test_bytes.py b/pypy/objspace/std/test/test_bytearrayobject.py rename from pypy/objspace/std/test/test_bytes.py rename to pypy/objspace/std/test/test_bytearrayobject.py diff --git a/pypy/objspace/std/test/test_floatobject.py b/pypy/objspace/std/test/test_floatobject.py --- a/pypy/objspace/std/test/test_floatobject.py +++ b/pypy/objspace/std/test/test_floatobject.py @@ -63,6 +63,12 @@ def setup_class(cls): cls.w_py26 = cls.space.wrap(sys.version_info >= (2, 6)) + def test_isinteger(self): + assert (1.).is_integer() + assert not (1.1).is_integer() + assert not float("inf").is_integer() + assert not float("nan").is_integer() + def test_conjugate(self): assert (1.).conjugate() == 1. assert (-1.).conjugate() == -1. @@ -782,4 +788,4 @@ # divide by 0 raises(ZeroDivisionError, lambda: inf % 0) raises(ZeroDivisionError, lambda: inf // 0) - raises(ZeroDivisionError, divmod, inf, 0) \ No newline at end of file + raises(ZeroDivisionError, divmod, inf, 0) diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -2,11 +2,10 @@ from pypy.objspace.std.listobject import W_ListObject from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace +from pypy.conftest import gettestobjspace, option class TestW_ListObject(object): - def test_is_true(self): w = self.space.wrap w_list = W_ListObject([]) @@ -343,6 +342,13 @@ class AppTestW_ListObject(object): + def setup_class(cls): + import sys + on_cpython = (option.runappdirect and + not hasattr(sys, 'pypy_translation_info')) + + cls.w_on_cpython = cls.space.wrap(on_cpython) + def test_call_list(self): assert list('') == [] assert list('abc') == ['a', 'b', 'c'] @@ -616,6 +622,14 @@ assert c.index(0) == 0 raises(ValueError, c.index, 3) + def test_index_cpython_bug(self): + if self.on_cpython: + skip("cpython has a bug here") + c = list('hello world') + assert c.index('l', None, None) == 2 + assert c.index('l', 3, None) == 3 + assert c.index('l', None, 4) == 2 + def test_ass_slice(self): l = range(6) l[1:3] = 'abc' @@ -801,6 +815,20 @@ l.__delslice__(0, 2) assert l == [3, 4] + def test_list_from_set(self): + l = ['a'] + l.__init__(set('b')) + assert l == ['b'] + + def test_list_from_generator(self): + l = ['a'] + g = (i*i for i in range(5)) + l.__init__(g) + assert l == [0, 1, 4, 9, 16] + l.__init__(g) + assert l == [] + assert list(g) == [] + class AppTestListFastSubscr: diff --git a/pypy/objspace/std/test/test_sliceobject.py b/pypy/objspace/std/test/test_sliceobject.py --- a/pypy/objspace/std/test/test_sliceobject.py +++ b/pypy/objspace/std/test/test_sliceobject.py @@ -1,3 +1,4 @@ +import sys from pypy.objspace.std.sliceobject import normalize_simple_slice @@ -42,6 +43,24 @@ getslice(length, mystart, mystop)) + def test_indexes4(self): + space = self.space + w = space.wrap + + def getslice(length, start, stop, step): + return [i for i in range(0, length, step) if start <= i < stop] + + for step in [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, None]: + for length in range(5): + for start in range(-2*length-2, 2*length+3) + [None]: + for stop in range(-2*length-2, 2*length+3) + [None]: + sl = space.newslice(w(start), w(stop), w(step)) + mystart, mystop, mystep, slicelength = sl.indices4(space, length) + assert len(range(length)[start:stop:step]) == slicelength + if sys.version_info >= (2, 6): # doesn't work in 2.5 + assert slice(start, stop, step).indices(length) == ( + mystart, mystop, mystep) + class AppTest_SliceObject: def test_new(self): def cmp_slice(sl1, sl2): diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import app2interp +from pypy.conftest import gettestobjspace class TestW_StdObjSpace: @@ -49,6 +50,8 @@ def test_fastpath_isinstance(self): from pypy.objspace.std.stringobject import W_StringObject from pypy.objspace.std.intobject import W_IntObject + from pypy.objspace.std.iterobject import W_AbstractSeqIterObject + from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space assert space._get_interplevel_cls(space.w_str) is W_StringObject @@ -60,3 +63,14 @@ typedef = None assert space.isinstance_w(X(), space.w_str) + + w_sequenceiterator = space.gettypefor(W_SeqIterObject) + cls = space._get_interplevel_cls(w_sequenceiterator) + assert cls is W_AbstractSeqIterObject + + def test_withstrbuf_fastpath_isinstance(self): + from pypy.objspace.std.stringobject import W_AbstractStringObject + + space = gettestobjspace(withstrbuf=True) + cls = space._get_interplevel_cls(space.w_str) + assert cls is W_AbstractStringObject diff --git a/pypy/objspace/std/tupleobject.py b/pypy/objspace/std/tupleobject.py --- a/pypy/objspace/std/tupleobject.py +++ b/pypy/objspace/std/tupleobject.py @@ -9,7 +9,10 @@ from pypy.interpreter import gateway from pypy.rlib.debug import make_sure_not_resized -class W_TupleObject(W_Object): +class W_AbstractTupleObject(W_Object): + __slots__ = () + +class W_TupleObject(W_AbstractTupleObject): from pypy.objspace.std.tupletype import tuple_typedef as typedef _immutable_fields_ = ['wrappeditems[*]'] @@ -108,15 +111,10 @@ return space.w_False return space.w_True -def _min(a, b): - if a < b: - return a - return b - def lt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -127,7 +125,7 @@ def gt__Tuple_Tuple(space, w_tuple1, w_tuple2): items1 = w_tuple1.wrappeditems items2 = w_tuple2.wrappeditems - ncmp = _min(len(items1), len(items2)) + ncmp = min(len(items1), len(items2)) # Search for the first index where items are different for p in range(ncmp): if not space.eq_w(items1[p], items2[p]): @@ -172,17 +170,8 @@ return space.wrap(count) def tuple_index__Tuple_ANY_ANY_ANY(space, w_tuple, w_obj, w_start, w_stop): - start = slicetype.eval_slice_index(space, w_start) - stop = slicetype.eval_slice_index(space, w_stop) length = len(w_tuple.wrappeditems) - if start < 0: - start += length - if start < 0: - start = 0 - if stop < 0: - stop += length - if stop < 0: - stop = 0 + start, stop = slicetype.unwrap_start_stop(space, length, w_start, w_stop) for i in range(start, min(stop, length)): w_item = w_tuple.wrappeditems[i] if space.eq_w(w_item, w_obj): diff --git a/pypy/objspace/std/tupletype.py b/pypy/objspace/std/tupletype.py --- a/pypy/objspace/std/tupletype.py +++ b/pypy/objspace/std/tupletype.py @@ -5,14 +5,14 @@ def wraptuple(space, list_w): from pypy.objspace.std.tupleobject import W_TupleObject - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 - from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if space.config.objspace.std.withsmalltuple: + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject2 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject3 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject4 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject5 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject6 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject7 + from pypy.objspace.std.smalltupleobject import W_SmallTupleObject8 if len(list_w) == 2: return W_SmallTupleObject2(list_w) if len(list_w) == 3: diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -10,7 +10,7 @@ from pypy.objspace.std import slicetype, newformat from pypy.objspace.std.tupleobject import W_TupleObject from pypy.rlib.rarithmetic import intmask, ovfcheck -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, specialize from pypy.rlib.rstring import UnicodeBuilder from pypy.rlib.runicode import unicode_encode_unicode_escape from pypy.module.unicodedata import unicodedb @@ -19,7 +19,10 @@ from pypy.objspace.std.formatting import mod_format from pypy.objspace.std.stringtype import stringstartswith, stringendswith -class W_UnicodeObject(W_Object): +class W_AbstractUnicodeObject(W_Object): + __slots__ = () + +class W_UnicodeObject(W_AbstractUnicodeObject): from pypy.objspace.std.unicodetype import unicode_typedef as typedef _immutable_fields_ = ['_value'] @@ -475,42 +478,29 @@ index = length return index -def _convert_idx_params(space, w_self, w_sub, w_start, w_end, upper_bound=False): - assert isinstance(w_sub, W_UnicodeObject) + at specialize.arg(4) +def _convert_idx_params(space, w_self, w_start, w_end, upper_bound=False): self = w_self._value - sub = w_sub._value - - if space.is_w(w_start, space.w_None): - w_start = space.wrap(0) - if space.is_w(w_end, space.w_None): - w_end = space.len(w_self) - - if upper_bound: - start = slicetype.adapt_bound(space, len(self), w_start) - end = slicetype.adapt_bound(space, len(self), w_end) - else: - start = slicetype.adapt_lower_bound(space, len(self), w_start) - end = slicetype.adapt_lower_bound(space, len(self), w_end) - return (self, sub, start, end) -_convert_idx_params._annspecialcase_ = 'specialize:arg(5)' + start, end = slicetype.unwrap_start_stop( + space, len(self), w_start, w_end, upper_bound) + return (self, start, end) def unicode_endswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) - return space.newbool(stringendswith(self, substr, start, end)) + return space.newbool(stringendswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end, True) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end, True) # XXX this stuff can be waaay better for ootypebased backends if # we re-use more of our rpython machinery (ie implement startswith # with additional parameters as rpython) - return space.newbool(stringstartswith(self, substr, start, end)) + return space.newbool(stringstartswith(self, w_substr._value, start, end)) def unicode_startswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_prefixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), - w_start, w_end, True) + unistr, start, end = _convert_idx_params(space, w_unistr, + w_start, w_end, True) for w_prefix in space.fixedview(w_prefixes): prefix = space.unicode_w(w_prefix) if stringstartswith(unistr, prefix, start, end): @@ -519,7 +509,7 @@ def unicode_endswith__Unicode_Tuple_ANY_ANY(space, w_unistr, w_suffixes, w_start, w_end): - unistr, _, start, end = _convert_idx_params(space, w_unistr, space.wrap(u''), + unistr, start, end = _convert_idx_params(space, w_unistr, w_start, w_end, True) for w_suffix in space.fixedview(w_suffixes): suffix = space.unicode_w(w_suffix) @@ -625,37 +615,32 @@ return space.newlist(lines) def unicode_find__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.find(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.find(w_substr._value, start, end)) def unicode_rfind__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.rfind(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.rfind(w_substr._value, start, end)) def unicode_index__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.find(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.find(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_rindex__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - index = self.rfind(substr, start, end) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + index = self.rfind(w_substr._value, start, end) if index < 0: raise OperationError(space.w_ValueError, space.wrap('substring not found')) return space.wrap(index) def unicode_count__Unicode_Unicode_ANY_ANY(space, w_self, w_substr, w_start, w_end): - self, substr, start, end = _convert_idx_params(space, w_self, w_substr, - w_start, w_end) - return space.wrap(self.count(substr, start, end)) + self, start, end = _convert_idx_params(space, w_self, w_start, w_end) + return space.wrap(self.count(w_substr._value, start, end)) def unicode_split__Unicode_None_ANY(space, w_self, w_none, w_maxsplit): maxsplit = space.int_w(w_maxsplit) diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -3,16 +3,22 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform +import sys cdir = py.path.local(pypydir) / 'translator' / 'c' - +_sep_mods = [] +if sys.platform == 'win32': + _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] + eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], + separate_module_files = _sep_mods ) + rffi_platform.verify_eci(eci.convert_sources_to_files()) def llexternal(name, args, result, **kwds): diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -210,26 +210,48 @@ elif sz == 8: return ffi_type_uint64 else: raise ValueError("unsupported type size for %r" % (TYPE,)) -TYPE_MAP = { - rffi.DOUBLE : ffi_type_double, - rffi.FLOAT : ffi_type_float, - rffi.LONGDOUBLE : ffi_type_longdouble, - rffi.UCHAR : ffi_type_uchar, - rffi.CHAR : ffi_type_schar, - rffi.SHORT : ffi_type_sshort, - rffi.USHORT : ffi_type_ushort, - rffi.UINT : ffi_type_uint, - rffi.INT : ffi_type_sint, +__int_type_map = [ + (rffi.UCHAR, ffi_type_uchar), + (rffi.SIGNEDCHAR, ffi_type_schar), + (rffi.SHORT, ffi_type_sshort), + (rffi.USHORT, ffi_type_ushort), + (rffi.UINT, ffi_type_uint), + (rffi.INT, ffi_type_sint), # xxx don't use ffi_type_slong and ffi_type_ulong - their meaning # changes from a libffi version to another :-(( - rffi.ULONG : _unsigned_type_for(rffi.ULONG), - rffi.LONG : _signed_type_for(rffi.LONG), - rffi.ULONGLONG : _unsigned_type_for(rffi.ULONGLONG), - rffi.LONGLONG : _signed_type_for(rffi.LONGLONG), - lltype.Void : ffi_type_void, - lltype.UniChar : _unsigned_type_for(lltype.UniChar), - lltype.Bool : _unsigned_type_for(lltype.Bool), - } + (rffi.ULONG, _unsigned_type_for(rffi.ULONG)), + (rffi.LONG, _signed_type_for(rffi.LONG)), + (rffi.ULONGLONG, _unsigned_type_for(rffi.ULONGLONG)), + (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), + (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), + (lltype.Bool, _unsigned_type_for(lltype.Bool)), + ] + +__float_type_map = [ + (rffi.DOUBLE, ffi_type_double), + (rffi.FLOAT, ffi_type_float), + (rffi.LONGDOUBLE, ffi_type_longdouble), + ] + +__ptr_type_map = [ + (rffi.VOIDP, ffi_type_pointer), + ] + +__type_map = __int_type_map + __float_type_map + [ + (lltype.Void, ffi_type_void) + ] + +TYPE_MAP_INT = dict(__int_type_map) +TYPE_MAP_FLOAT = dict(__float_type_map) +TYPE_MAP = dict(__type_map) + +ffitype_map_int = unrolling_iterable(__int_type_map) +ffitype_map_int_or_ptr = unrolling_iterable(__int_type_map + __ptr_type_map) +ffitype_map_float = unrolling_iterable(__float_type_map) +ffitype_map = unrolling_iterable(__type_map) + +del __int_type_map, __float_type_map, __ptr_type_map, __type_map + def external(name, args, result, **kwds): return rffi.llexternal(name, args, result, compilation_info=eci, **kwds) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -140,7 +140,7 @@ self.last.next = arg self.last = arg self.numargs += 1 - + class AbstractArg(object): next = None @@ -234,7 +234,7 @@ # It is important that there is no other operation in the middle, else # the optimizer will fail to recognize the pattern and won't turn it # into a fast CALL. Note that "arg = arg.next" is optimized away, - # assuming that archain is completely virtual. + # assuming that argchain is completely virtual. self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ @@ -410,3 +410,22 @@ def getaddressindll(self, name): return dlsym(self.lib, name) + + at jit.oopspec("libffi_array_getitem(ffitype, width, addr, index, offset)") +def array_getitem(ffitype, width, addr, index, offset): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + assert False + + at jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") +def array_setitem(ffitype, width, addr, index, offset, value): + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype is ffitype2: + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value + return + assert False \ No newline at end of file diff --git a/pypy/rlib/listsort.py b/pypy/rlib/listsort.py --- a/pypy/rlib/listsort.py +++ b/pypy/rlib/listsort.py @@ -1,4 +1,4 @@ -from pypy.rlib.rarithmetic import ovfcheck, ovfcheck_lshift +from pypy.rlib.rarithmetic import ovfcheck ## ------------------------------------------------------------------------ @@ -136,7 +136,7 @@ if lower(a.list[p + ofs], key): lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: @@ -161,7 +161,7 @@ # key <= a[hint - ofs] lastofs = ofs try: - ofs = ovfcheck_lshift(ofs, 1) + ofs = ovfcheck(ofs << 1) except OverflowError: ofs = maxofs else: diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -12,9 +12,6 @@ back to a signed int value ovfcheck check on CPython whether the result of a signed integer operation did overflow -ovfcheck_lshift - << with oveflow checking - catering to 2.3/2.4 differences about << ovfcheck_float_to_int convert to an integer or raise OverflowError r_longlong @@ -111,18 +108,6 @@ raise OverflowError, "signed integer expression did overflow" return r -def _local_ovfcheck(r): - # a copy of the above, because we cannot call ovfcheck - # in a context where no primitiveoperator is involved. - assert not isinstance(r, r_uint), "unexpected ovf check on unsigned" - if isinstance(r, long): - raise OverflowError, "signed integer expression did overflow" - return r - -def ovfcheck_lshift(a, b): - "NOT_RPYTHON" - return _local_ovfcheck(int(long(a) << b)) - # Strange things happening for float to int on 64 bit: # int(float(i)) != i because of rounding issues. # These are the minimum and maximum float value that can diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -921,7 +921,7 @@ ah, al = _kmul_split(a, shift) assert ah.sign == 1 # the split isn't degenerate - if a == b: + if a is b: bh = ah bl = al else: @@ -975,26 +975,21 @@ i = ret.numdigits() - shift # # digits after shift _v_isub(ret, shift, i, t2, t2.numdigits()) _v_isub(ret, shift, i, t1, t1.numdigits()) - del t1, t2 # 6. t3 <- (ah+al)(bh+bl), and add into result. t1 = _x_add(ah, al) - del ah, al - if a == b: + if a is b: t2 = t1 else: t2 = _x_add(bh, bl) - del bh, bl t3 = _k_mul(t1, t2) - del t1, t2 assert t3.sign >=0 # Add t3. It's not obvious why we can't run out of room here. # See the (*) comment after this function. _v_iadd(ret, shift, i, t3, t3.numdigits()) - del t3 ret._normalize() return ret @@ -1085,7 +1080,6 @@ # Add into result. _v_iadd(ret, nbdone, ret.numdigits() - nbdone, product, product.numdigits()) - del product bsize -= nbtouse nbdone += nbtouse diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -163,8 +163,10 @@ source_start, dest_start, length): # if the write barrier is not supported, copy by hand - for i in range(length): + i = 0 + while i < length: dest[i + dest_start] = source[i + source_start] + i += 1 return source_addr = llmemory.cast_ptr_to_adr(source) dest_addr = llmemory.cast_ptr_to_adr(dest) @@ -214,8 +216,8 @@ func._gc_no_collect_ = True return func -def is_light_finalizer(func): - func._is_light_finalizer_ = True +def must_be_light_finalizer(func): + func._must_be_light_finalizer_ = True return func # ____________________________________________________________ @@ -259,6 +261,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes +def add_memory_pressure(estimate): + """Add memory pressure for OpaquePtrs.""" + pass + +class AddMemoryPressureEntry(ExtRegistryEntry): + _about_ = add_memory_pressure + + def compute_result_annotation(self, s_nbytes): + from pypy.annotation import model as annmodel + return annmodel.s_None + + def specialize_call(self, hop): + [v_size] = hop.inputargs(lltype.Signed) + hop.exception_cannot_occur() + return hop.genop('gc_add_memory_pressure', [v_size], + resulttype=lltype.Void) + + def get_rpy_memory_usage(gcref): "NOT_RPYTHON" # approximate implementation using CPython's type info diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -78,7 +78,7 @@ from pypy.rlib.rwin32 import HANDLE, LPHANDLE from pypy.rlib.rwin32 import NULL_HANDLE, INVALID_HANDLE_VALUE from pypy.rlib.rwin32 import DWORD, WORD, DWORD_PTR, LPDWORD - from pypy.rlib.rwin32 import BOOL, LPVOID, LPCVOID, LPCSTR, SIZE_T + from pypy.rlib.rwin32 import BOOL, LPVOID, LPCSTR, SIZE_T from pypy.rlib.rwin32 import INT, LONG, PLONG # export the constants inside and outside. see __init__.py @@ -174,9 +174,9 @@ DuplicateHandle = winexternal('DuplicateHandle', [HANDLE, HANDLE, HANDLE, LPHANDLE, DWORD, BOOL, DWORD], BOOL) CreateFileMapping = winexternal('CreateFileMappingA', [HANDLE, rwin32.LPSECURITY_ATTRIBUTES, DWORD, DWORD, DWORD, LPCSTR], HANDLE) MapViewOfFile = winexternal('MapViewOfFile', [HANDLE, DWORD, DWORD, DWORD, SIZE_T], LPCSTR)##!!LPVOID) - UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCVOID], BOOL, + UnmapViewOfFile = winexternal('UnmapViewOfFile', [LPCSTR], BOOL, threadsafe=False) - FlushViewOfFile = winexternal('FlushViewOfFile', [LPCVOID, SIZE_T], BOOL) + FlushViewOfFile = winexternal('FlushViewOfFile', [LPCSTR, SIZE_T], BOOL) SetFilePointer = winexternal('SetFilePointer', [HANDLE, LONG, PLONG, DWORD], DWORD) SetEndOfFile = winexternal('SetEndOfFile', [HANDLE], BOOL) VirtualAlloc = winexternal('VirtualAlloc', diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -25,6 +25,7 @@ 'openssl/err.h', 'openssl/rand.h', 'openssl/evp.h', + 'openssl/ossl_typ.h', 'openssl/x509v3.h'] eci = ExternalCompilationInfo( @@ -108,7 +109,9 @@ GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', [('type', rffi.INT), - ]) + ]) + EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') + EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') for k, v in rffi_platform.configure(CConfig).items(): @@ -154,7 +157,7 @@ ssl_external('CRYPTO_set_id_callback', [lltype.Ptr(lltype.FuncType([], rffi.LONG))], lltype.Void) - + if HAVE_OPENSSL_RAND: ssl_external('RAND_add', [rffi.CCHARP, rffi.INT, rffi.DOUBLE], lltype.Void) ssl_external('RAND_status', [], rffi.INT) @@ -255,7 +258,7 @@ [BIO, rffi.VOIDP, rffi.VOIDP, rffi.VOIDP], X509) EVP_MD_CTX = rffi.COpaquePtr('EVP_MD_CTX', compilation_info=eci) -EVP_MD = rffi.COpaquePtr('EVP_MD') +EVP_MD = rffi.COpaquePtr('EVP_MD', compilation_info=eci) OpenSSL_add_all_digests = external( 'OpenSSL_add_all_digests', [], lltype.Void) diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -1,11 +1,13 @@ +import sys + import py -import sys + +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED -from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong -from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e -from pypy.rlib.libffi import CDLL, Func, get_libc_name, ArgChain, types -from pypy.rlib.libffi import IS_32_BIT class TestLibffiMisc(BaseFfiTest): @@ -52,6 +54,34 @@ del lib assert not ALLOCATED + def test_array_fields(self): + POINT = lltype.Struct("POINT", + ("x", lltype.Float), + ("y", lltype.Float), + ) + points = lltype.malloc(rffi.CArray(POINT), 2, flavor="raw") + points[0].x = 1.0 + points[0].y = 2.0 + points[1].x = 3.0 + points[1].y = 4.0 + points = rffi.cast(rffi.CArrayPtr(lltype.Char), points) + assert array_getitem(types.double, 16, points, 0, 0) == 1.0 + assert array_getitem(types.double, 16, points, 0, 8) == 2.0 + assert array_getitem(types.double, 16, points, 1, 0) == 3.0 + assert array_getitem(types.double, 16, points, 1, 8) == 4.0 + + array_setitem(types.double, 16, points, 0, 0, 10.0) + array_setitem(types.double, 16, points, 0, 8, 20.0) + array_setitem(types.double, 16, points, 1, 0, 30.0) + array_setitem(types.double, 16, points, 1, 8, 40.0) + + assert array_getitem(types.double, 16, points, 0, 0) == 10.0 + assert array_getitem(types.double, 16, points, 0, 8) == 20.0 + assert array_getitem(types.double, 16, points, 1, 0) == 30.0 + assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + + lltype.free(points, flavor="raw") + class TestLibffiCall(BaseFfiTest): """ Test various kind of calls through libffi. @@ -109,7 +139,7 @@ This method is overridden by metainterp/test/test_fficall.py in order to do the call in a loop and JIT it. The optional arguments are used only by that overridden method. - + """ lib, name, argtypes, restype = funcspec func = lib.getpointer(name, argtypes, restype) @@ -132,7 +162,7 @@ return x - y; } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'diff_xy', [types.sint, types.slong], types.sint) res = self.call(func, [50, 8], lltype.Signed) assert res == 42 @@ -144,7 +174,7 @@ return (x + (int)y); } """ - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) res = self.call(func, [38, 4.2], lltype.Signed, jitif=["floats"]) assert res == 42 @@ -179,6 +209,17 @@ res = self.call(func, [chr(20), 22], rffi.LONG) assert res == 42 + def test_char_args(self): + """ + char sum_args(char a, char b) { + return a + b; + } + """ + libfoo = self.get_libfoo() + func = (libfoo, 'sum_args', [types.schar, types.schar], types.schar) + res = self.call(func, [123, 43], rffi.CHAR) + assert res == chr(166) + def test_unsigned_short_args(self): """ unsigned short sum_xy_us(unsigned short x, unsigned short y) @@ -238,7 +279,7 @@ }; struct pair my_static_pair = {10, 20}; - + long* get_pointer_to_b() { return &my_static_pair.b; @@ -329,7 +370,7 @@ def test_wrong_number_of_arguments(self): from pypy.rpython.llinterp import LLException - libfoo = self.get_libfoo() + libfoo = self.get_libfoo() func = (libfoo, 'sum_xy', [types.sint, types.double], types.sint) glob = globals() diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -1,6 +1,6 @@ from pypy.objspace.flow.model import FunctionGraph, Constant, Variable, c_last_exception from pypy.rlib.rarithmetic import intmask, r_uint, ovfcheck, r_longlong -from pypy.rlib.rarithmetic import r_ulonglong, ovfcheck_lshift +from pypy.rlib.rarithmetic import r_ulonglong from pypy.rpython.lltypesystem import lltype, llmemory, lloperation, llheap from pypy.rpython.lltypesystem import rclass from pypy.rpython.ootypesystem import ootype @@ -172,7 +172,7 @@ def checkadr(addr): assert lltype.typeOf(addr) is llmemory.Address - + def is_inst(inst): return isinstance(lltype.typeOf(inst), (ootype.Instance, ootype.BuiltinType, ootype.StaticMethod)) @@ -657,7 +657,7 @@ raise TypeError("graph with %r args called with wrong func ptr type: %r" % (tuple([v.concretetype for v in args_v]), ARGS)) frame = self.newsubframe(graph, args) - return frame.eval() + return frame.eval() def op_direct_call(self, f, *args): FTYPE = self.llinterpreter.typer.type_system.derefType(lltype.typeOf(f)) @@ -698,13 +698,13 @@ return ptr except MemoryError: self.make_llexception() - + def op_malloc_nonmovable(self, TYPE, flags): flavor = flags['flavor'] assert flavor == 'gc' zero = flags.get('zero', False) return self.heap.malloc_nonmovable(TYPE, zero=zero) - + def op_malloc_nonmovable_varsize(self, TYPE, flags, size): flavor = flags['flavor'] assert flavor == 'gc' @@ -716,6 +716,9 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) + def op_gc_add_memory_pressure(self, size): + self.heap.add_memory_pressure(size) + def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) @@ -1032,7 +1035,7 @@ assert isinstance(x, int) assert isinstance(y, int) try: - return ovfcheck_lshift(x, y) + return ovfcheck(x << y) except OverflowError: self.make_llexception() @@ -1318,7 +1321,7 @@ func_graph = fn.graph else: # obj is an instance, we want to call 'method_name' on it - assert fn is None + assert fn is None self_arg = [obj] func_graph = obj._TYPE._methods[method_name._str].graph diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1163,10 +1163,14 @@ value = value.adr if isinstance(value, llmemory.fakeaddress): value = value.ptr or 0 + if isinstance(value, r_singlefloat): + value = float(value) TYPE1 = lltype.typeOf(value) cvalue = lltype2ctypes(value) cresulttype = get_ctypes_type(RESTYPE) - if isinstance(TYPE1, lltype.Ptr): + if RESTYPE == TYPE1: + return value + elif isinstance(TYPE1, lltype.Ptr): if isinstance(RESTYPE, lltype.Ptr): # shortcut: ptr->ptr cast cptr = ctypes.cast(cvalue, cresulttype) diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -5,8 +5,7 @@ setfield = setattr from operator import setitem as setarrayitem -from pypy.rlib.rgc import collect -from pypy.rlib.rgc import can_move +from pypy.rlib.rgc import can_move, collect, add_memory_pressure def setinterior(toplevelcontainer, inneraddr, INNERTYPE, newvalue, offsets=None): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -473,6 +473,7 @@ 'gc_is_rpy_instance' : LLOp(), 'gc_dump_rpy_heap' : LLOp(), 'gc_typeids_z' : LLOp(), + 'gc_add_memory_pressure': LLOp(), # ------- JIT & GC interaction, only for some GCs ---------- diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -48,7 +48,7 @@ self.TYPE = TYPE def __repr__(self): return ''%(self.TYPE,) - + def saferecursive(func, defl, TLS=TLS): def safe(*args): @@ -537,9 +537,9 @@ return "Func ( %s ) -> %s" % (args, self.RESULT) __str__ = saferecursive(__str__, '...') - def _short_name(self): + def _short_name(self): args = ', '.join([ARG._short_name() for ARG in self.ARGS]) - return "Func(%s)->%s" % (args, self.RESULT._short_name()) + return "Func(%s)->%s" % (args, self.RESULT._short_name()) _short_name = saferecursive(_short_name, '...') def _container_example(self): @@ -553,7 +553,7 @@ class OpaqueType(ContainerType): _gckind = 'raw' - + def __init__(self, tag, hints={}): """ if hints['render_structure'] is set, the type is internal and not considered to come from somewhere else (it should be rendered as a structure) """ @@ -723,10 +723,10 @@ def __str__(self): return '* %s' % (self.TO, ) - + def _short_name(self): return 'Ptr %s' % (self.TO._short_name(), ) - + def _is_atomic(self): return self.TO._gckind == 'raw' @@ -1723,7 +1723,7 @@ class _subarray(_parentable): # only for direct_fieldptr() # and direct_arrayitems() _kind = "subarray" - _cache = weakref.WeakKeyDictionary() # parentarray -> {subarrays} + _cache = {} # TYPE -> weak{ parentarray -> {subarrays} } def __init__(self, TYPE, parent, baseoffset_or_fieldname): _parentable.__init__(self, TYPE) @@ -1781,10 +1781,15 @@ def _makeptr(parent, baseoffset_or_fieldname, solid=False): try: - cache = _subarray._cache.setdefault(parent, {}) + d = _subarray._cache[parent._TYPE] + except KeyError: + d = _subarray._cache[parent._TYPE] = weakref.WeakKeyDictionary() + try: + cache = d.setdefault(parent, {}) except RuntimeError: # pointer comparison with a freed structure _subarray._cleanup_cache() - cache = _subarray._cache.setdefault(parent, {}) # try again + # try again + return _subarray._makeptr(parent, baseoffset_or_fieldname, solid) try: subarray = cache[baseoffset_or_fieldname] except KeyError: @@ -1805,14 +1810,18 @@ raise NotImplementedError('_subarray._getid()') def _cleanup_cache(): - newcache = weakref.WeakKeyDictionary() - for key, value in _subarray._cache.items(): - try: - if not key._was_freed(): - newcache[key] = value - except RuntimeError: - pass # ignore "accessing subxxx, but already gc-ed parent" - _subarray._cache = newcache + for T, d in _subarray._cache.items(): + newcache = weakref.WeakKeyDictionary() + for key, value in d.items(): + try: + if not key._was_freed(): + newcache[key] = value + except RuntimeError: + pass # ignore "accessing subxxx, but already gc-ed parent" + if newcache: + _subarray._cache[T] = newcache + else: + del _subarray._cache[T] _cleanup_cache = staticmethod(_cleanup_cache) diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -11,15 +11,17 @@ from pypy.translator.platform import platform from pypy.rlib.rfloat import isfinite, isinf, isnan, INFINITY, NAN +use_library_isinf_isnan = False if sys.platform == "win32": if platform.name == "msvc": # When compiled with /O2 or /Oi (enable intrinsic functions) # It's no more possible to take the address of some math functions. # Ensure that the compiler chooses real functions instead. eci = ExternalCompilationInfo( - includes = ['math.h'], + includes = ['math.h', 'float.h'], post_include_bits = ['#pragma function(floor)'], ) + use_library_isinf_isnan = True else: eci = ExternalCompilationInfo() # Some math functions are C99 and not defined by the Microsoft compiler @@ -108,18 +110,35 @@ # # Custom implementations +VERY_LARGE_FLOAT = 1.0 +while VERY_LARGE_FLOAT * 100.0 != INFINITY: + VERY_LARGE_FLOAT *= 64.0 + +_lib_isnan = rffi.llexternal("_isnan", [lltype.Float], lltype.Signed, + compilation_info=eci) +_lib_finite = rffi.llexternal("_finite", [lltype.Float], lltype.Signed, + compilation_info=eci) + def ll_math_isnan(y): # By not calling into the external function the JIT can inline this. # Floats are awesome. + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_isnan(y)) return y != y def ll_math_isinf(y): - # Use a bitwise OR so the JIT doesn't produce 2 different guards. - return (y == INFINITY) | (y == -INFINITY) + if jit.we_are_jitted(): + return (y + VERY_LARGE_FLOAT) == y + elif use_library_isinf_isnan: + return not _lib_finite(y) and not _lib_isnan(y) + else: + return y == INFINITY or y == -INFINITY def ll_math_isfinite(y): # Use a custom hack that is reasonably well-suited to the JIT. # Floats are awesome (bis). + if use_library_isinf_isnan and not jit.we_are_jitted(): + return bool(_lib_finite(y)) z = 0.0 * y return z == z # i.e.: z is not a NaN @@ -136,10 +155,12 @@ Windows, FreeBSD and alpha Tru64 are amongst platforms that don't always follow C99. """ - if isnan(x) or isnan(y): + if isnan(x): return NAN - if isinf(y): + if not isfinite(y): + if isnan(y): + return NAN if isinf(x): if math_copysign(1.0, x) == 1.0: # atan2(+-inf, +inf) == +-pi/4 @@ -168,7 +189,7 @@ def ll_math_frexp(x): # deal with special cases directly, to sidestep platform differences - if isnan(x) or isinf(x) or not x: + if not isfinite(x) or not x: mantissa = x exponent = 0 else: @@ -185,7 +206,7 @@ INT_MIN = int(-2**31) def ll_math_ldexp(x, exp): - if x == 0.0 or isinf(x) or isnan(x): + if x == 0.0 or not isfinite(x): return x # NaNs, zeros and infinities are returned unchanged if exp > INT_MAX: # overflow (64-bit platforms only) @@ -209,10 +230,11 @@ def ll_math_modf(x): # some platforms don't do the right thing for NaNs and # infinities, so we take care of special cases directly. - if isinf(x): - return (math_copysign(0.0, x), x) - elif isnan(x): - return (x, x) + if not isfinite(x): + if isnan(x): + return (x, x) + else: # isinf(x) + return (math_copysign(0.0, x), x) intpart_p = lltype.malloc(rffi.DOUBLEP.TO, 1, flavor='raw') try: fracpart = math_modf(x, intpart_p) @@ -223,13 +245,21 @@ def ll_math_fmod(x, y): - if isinf(x) and not isnan(y): - raise ValueError("math domain error") + # fmod(x, +/-Inf) returns x for finite x. + if isinf(y) and isfinite(x): + return x - if y == 0: - raise ValueError("math domain error") - - return math_fmod(x, y) + _error_reset() + r = math_fmod(x, y) + errno = rposix.get_errno() + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + if errno: + _likely_raise(errno, r) + return r def ll_math_hypot(x, y): @@ -242,16 +272,17 @@ _error_reset() r = math_hypot(x, y) errno = rposix.get_errno() - if isnan(r): - if isnan(x) or isnan(y): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x) or isinf(y) or isnan(y): - errno = 0 - else: - errno = ERANGE + if not isfinite(r): + if isnan(r): + if isnan(x) or isnan(y): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if isfinite(x) and isfinite(y): + errno = ERANGE + else: + errno = 0 if errno: _likely_raise(errno, r) return r @@ -261,30 +292,30 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if isnan(x): - if y == 0.0: - return 1.0 # NaN**0 = 1 - return x - - elif isnan(y): + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 return y - elif isinf(x): - odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 - if y > 0.0: - if odd_y: - return x - return math_fabs(x) - elif y == 0.0: - return 1.0 - else: # y < 0.0 - if odd_y: - return math_copysign(0.0, x) - return 0.0 + if not isfinite(x): + if isnan(x): + if y == 0.0: + return 1.0 # NaN**0 = 1 + return x + else: # isinf(x) + odd_y = not isinf(y) and math_fmod(math_fabs(y), 2.0) == 1.0 + if y > 0.0: + if odd_y: + return x + return math_fabs(x) + elif y == 0.0: + return 1.0 + else: # y < 0.0 + if odd_y: + return math_copysign(0.0, x) + return 0.0 - elif isinf(y): + if isinf(y): if math_fabs(x) == 1.0: return 1.0 elif y > 0.0 and math_fabs(x) > 1.0: @@ -299,17 +330,18 @@ _error_reset() r = math_pow(x, y) errno = rposix.get_errno() - if isnan(r): - # a NaN result should arise only from (-ve)**(finite non-integer) - errno = EDOM - elif isinf(r): - # an infinite result here arises either from: - # (A) (+/-0.)**negative (-> divide-by-zero) - # (B) overflow of x**y with x and y finite - if x == 0.0: + if not isfinite(r): + if isnan(r): + # a NaN result should arise only from (-ve)**(finite non-integer) errno = EDOM - else: - errno = ERANGE + else: # isinf(r) + # an infinite result here arises either from: + # (A) (+/-0.)**negative (-> divide-by-zero) + # (B) overflow of x**y with x and y finite + if x == 0.0: + errno = EDOM + else: + errno = ERANGE if errno: _likely_raise(errno, r) return r @@ -358,18 +390,19 @@ r = c_func(x) # Error checking fun. Copied from CPython 2.6 errno = rposix.get_errno() - if isnan(r): - if isnan(x): - errno = 0 - else: - errno = EDOM - elif isinf(r): - if isinf(x) or isnan(x): - errno = 0 - elif can_overflow: - errno = ERANGE - else: - errno = EDOM + if not isfinite(r): + if isnan(r): + if isnan(x): + errno = 0 + else: + errno = EDOM + else: # isinf(r) + if not isfinite(x): + errno = 0 + elif can_overflow: + errno = ERANGE + else: + errno = EDOM if errno: _likely_raise(errno, r) return r diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -123,9 +123,10 @@ def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 - if final_size == ll_builder.allocated: - return ll_builder.buf - return rgc.ll_shrink_array(ll_builder.buf, final_size) + if final_size < ll_builder.allocated: + ll_builder.allocated = final_size + ll_builder.buf = rgc.ll_shrink_array(ll_builder.buf, final_size) + return ll_builder.buf @classmethod def ll_is_true(cls, ll_builder): diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -492,8 +492,8 @@ _ll_dict_del(d, i) # XXX: Move the size checking and resize into a single call which is opauqe to -# the JIT to avoid extra branches. - at jit.dont_look_inside +# the JIT when the dict isn't virtual, to avoid extra branches. + at jit.look_inside_iff(lambda d, i: jit.isvirtual(d) and jit.isconstant(i)) def _ll_dict_del(d, i): d.entries.mark_deleted(i) d.num_items -= 1 diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -125,6 +125,7 @@ canraise=False, random_effects_on_gcobjs= random_effects_on_gcobjs, + calling_conv=calling_conv, **kwds) if isinstance(_callable, ll2ctypes.LL2CtypesCallable): _callable.funcptr = funcptr @@ -245,8 +246,14 @@ wrapper._always_inline_ = True # for debugging, stick ll func ptr to that wrapper._ptr = funcptr + wrapper = func_with_new_name(wrapper, name) - return func_with_new_name(wrapper, name) + if calling_conv != "c": + from pypy.rlib.jit import dont_look_inside + wrapper = dont_look_inside(wrapper) + + return wrapper + class CallbackHolder: def __init__(self): @@ -855,11 +862,14 @@ try: unsigned = not tp._type.SIGNED except AttributeError: - if tp in [lltype.Char, lltype.Float, lltype.Signed] or\ - isinstance(tp, lltype.Ptr): + if not isinstance(tp, lltype.Primitive): unsigned = False + elif tp in (lltype.Signed, FLOAT, DOUBLE, llmemory.Address): + unsigned = False + elif tp in (lltype.Char, lltype.UniChar, lltype.Bool): + unsigned = True else: - unsigned = False + raise AssertionError("size_and_sign(%r)" % (tp,)) return size, unsigned def sizeof(tp): diff --git a/pypy/rpython/lltypesystem/rpbc.py b/pypy/rpython/lltypesystem/rpbc.py --- a/pypy/rpython/lltypesystem/rpbc.py +++ b/pypy/rpython/lltypesystem/rpbc.py @@ -116,7 +116,7 @@ fields.append((row.attrname, row.fntype)) kwds = {'hints': {'immutable': True}} return Ptr(Struct('specfunc', *fields, **kwds)) - + def create_specfunc(self): return malloc(self.lowleveltype.TO, immortal=True) @@ -149,7 +149,8 @@ self.descriptions = list(self.s_pbc.descriptions) if self.s_pbc.can_be_None: self.descriptions.insert(0, None) - POINTER_TABLE = Array(self.pointer_repr.lowleveltype) + POINTER_TABLE = Array(self.pointer_repr.lowleveltype, + hints={'nolength': True}) pointer_table = malloc(POINTER_TABLE, len(self.descriptions), immortal=True) for i, desc in enumerate(self.descriptions): @@ -302,7 +303,8 @@ if r_to in r_from._conversion_tables: return r_from._conversion_tables[r_to] else: - t = malloc(Array(Char), len(r_from.descriptions), immortal=True) + t = malloc(Array(Char, hints={'nolength': True}), + len(r_from.descriptions), immortal=True) l = [] for i, d in enumerate(r_from.descriptions): if d in r_to.descriptions: @@ -314,7 +316,7 @@ if l == range(len(r_from.descriptions)): r = None else: - r = inputconst(Ptr(Array(Char)), t) + r = inputconst(Ptr(Array(Char, hints={'nolength': True})), t) r_from._conversion_tables[r_to] = r return r @@ -402,12 +404,12 @@ # ____________________________________________________________ -##def rtype_call_memo(hop): +##def rtype_call_memo(hop): ## memo_table = hop.args_v[0].value ## if memo_table.s_result.is_constant(): ## return hop.inputconst(hop.r_result, memo_table.s_result.const) -## fieldname = memo_table.fieldname -## assert hop.nb_args == 2, "XXX" +## fieldname = memo_table.fieldname +## assert hop.nb_args == 2, "XXX" ## r_pbc = hop.args_r[1] ## assert isinstance(r_pbc, (MultipleFrozenPBCRepr, ClassesPBCRepr)) diff --git a/pypy/rpython/lltypesystem/rstr.py b/pypy/rpython/lltypesystem/rstr.py --- a/pypy/rpython/lltypesystem/rstr.py +++ b/pypy/rpython/lltypesystem/rstr.py @@ -331,6 +331,8 @@ # unlike CPython, there is no reason to avoid to return -1 # but our malloc initializes the memory to zero, so we use zero as the # special non-computed-yet value. + if not s: + return 0 x = s.hash if x == 0: x = _hash_string(s.chars) diff --git a/pypy/rpython/lltypesystem/test/test_rffi.py b/pypy/rpython/lltypesystem/test/test_rffi.py --- a/pypy/rpython/lltypesystem/test/test_rffi.py +++ b/pypy/rpython/lltypesystem/test/test_rffi.py @@ -18,6 +18,7 @@ from pypy.conftest import option from pypy.objspace.flow.model import summary from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rlib.rarithmetic import r_singlefloat class BaseTestRffi: def test_basic(self): @@ -704,6 +705,14 @@ res = cast(lltype.Signed, 42.5) assert res == 42 + res = cast(lltype.SingleFloat, 12.3) + assert res == r_singlefloat(12.3) + res = cast(lltype.SingleFloat, res) + assert res == r_singlefloat(12.3) + + res = cast(lltype.Float, r_singlefloat(12.)) + assert res == 12. + def test_rffi_sizeof(self): try: import ctypes @@ -733,9 +742,10 @@ assert sizeof(ll) == ctypes.sizeof(ctp) assert sizeof(lltype.Typedef(ll, 'test')) == sizeof(ll) assert not size_and_sign(lltype.Signed)[1] - assert not size_and_sign(lltype.Char)[1] - assert not size_and_sign(lltype.UniChar)[1] + assert size_and_sign(lltype.Char) == (1, True) + assert size_and_sign(lltype.UniChar)[1] assert size_and_sign(UINT)[1] + assert not size_and_sign(INT)[1] def test_rffi_offsetof(self): import struct diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -1850,6 +1850,9 @@ finalizer = self.getlightfinalizer(self.get_type_id(obj)) ll_assert(bool(finalizer), "no light finalizer found") finalizer(obj, llmemory.NULL) + else: + obj = self.get_forwarding_address(obj) + self.old_objects_with_light_finalizers.append(obj) def deal_with_old_objects_with_finalizers(self): """ This is a much simpler version of dealing with finalizers diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -105,9 +105,10 @@ llarena.arena_reserve(result, totalsize) self.init_gc_object(result, typeid16) self.free = result + totalsize - if is_finalizer_light: - self.objects_with_light_finalizers.append(result + size_gc_header) - elif has_finalizer: + #if is_finalizer_light: + # self.objects_with_light_finalizers.append(result + size_gc_header) + #else: + if has_finalizer: self.objects_with_finalizers.append(result + size_gc_header) if contains_weakptr: self.objects_with_weakrefs.append(result + size_gc_header) diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,17 +377,24 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure(length, itemsize): + def raw_malloc_memory_pressure_varsize(length, itemsize): totalmem = length * itemsize if totalmem > 0: gcdata.gc.raw_malloc_memory_pressure(totalmem) #else: probably an overflow -- the following rawmalloc # will fail then + def raw_malloc_memory_pressure(sizehint): + gcdata.gc.raw_malloc_memory_pressure(sizehint) + self.raw_malloc_memory_pressure_varsize_ptr = getfn( + raw_malloc_memory_pressure_varsize, + [annmodel.SomeInteger(), annmodel.SomeInteger()], + annmodel.s_None, minimal_transform = False) self.raw_malloc_memory_pressure_ptr = getfn( raw_malloc_memory_pressure, - [annmodel.SomeInteger(), annmodel.SomeInteger()], + [annmodel.SomeInteger()], annmodel.s_None, minimal_transform = False) + self.identityhash_ptr = getfn(GCClass.identityhash.im_func, [s_gc, s_gcref], annmodel.SomeInteger(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -63,7 +63,7 @@ gct.push_alive(v_result, self.llops) elif opname not in ('direct_call', 'indirect_call'): gct.push_alive(v_result, self.llops) - + def rename(self, newopname): @@ -118,7 +118,7 @@ self.minimalgctransformer = self.MinimalGCTransformer(self) else: self.minimalgctransformer = None - + def get_lltype_of_exception_value(self): if self.translator is not None: exceptiondata = self.translator.rtyper.getexceptiondata() @@ -399,7 +399,7 @@ def gct_gc_heap_stats(self, hop): from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - + return hop.cast_result(rmodel.inputconst(lltype.Ptr(ARRAY_TYPEID_MAP), lltype.nullptr(ARRAY_TYPEID_MAP))) @@ -427,7 +427,7 @@ assert flavor == 'raw' assert not flags.get('zero') return self.parenttransformer.gct_malloc_varsize(hop) - + def gct_free(self, hop): flags = hop.spaceop.args[1].value flavor = flags['flavor'] @@ -502,7 +502,7 @@ stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) ll_stack_malloc_fixedsize = stack_mh._ll_malloc_fixedsize - + if self.translator: self.raw_malloc_fixedsize_ptr = self.inittime_helper( ll_raw_malloc_fixedsize, [lltype.Signed], llmemory.Address) @@ -541,7 +541,7 @@ resulttype=llmemory.Address) if flags.get('zero'): hop.genop("raw_memclear", [v_raw, c_size]) - return v_raw + return v_raw def gct_malloc_varsize(self, hop, add_flags=None): flags = hop.spaceop.args[1].value @@ -559,6 +559,14 @@ def gct_malloc_nonmovable_varsize(self, *args, **kwds): return self.gct_malloc_varsize(*args, **kwds) + def gct_gc_add_memory_pressure(self, hop): + if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + op = hop.spaceop + size = op.args[0] + return hop.genop("direct_call", + [self.raw_malloc_memory_pressure_ptr, + size]) + def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) op = hop.spaceop @@ -590,9 +598,9 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_ptr'): + if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): hop.genop("direct_call", - [self.raw_malloc_memory_pressure_ptr, + [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): @@ -625,7 +633,7 @@ hop.genop("track_alloc_stop", [v]) hop.genop('raw_free', [v]) else: - assert False, "%s has no support for free with flavor %r" % (self, flavor) + assert False, "%s has no support for free with flavor %r" % (self, flavor) def gct_gc_can_move(self, hop): return hop.cast_result(rmodel.inputconst(lltype.Bool, False)) diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -66,6 +66,10 @@ gctypelayout.zero_gc_pointers(result) return result + def add_memory_pressure(self, size): + if hasattr(self.gc, 'raw_malloc_memory_pressure'): + self.gc.raw_malloc_memory_pressure(size) + def shrink_array(self, p, smallersize): if hasattr(self.gc, 'shrink_array'): addr = llmemory.cast_ptr_to_adr(p) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -592,7 +592,7 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - + def test_malloc_nonmovable(self): TP = lltype.GcArray(lltype.Char) def func(): diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -27,7 +27,7 @@ t.config.set(**extraconfigopts) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) ann.build_types(func, inputtypes) - + if specialize: t.buildrtyper().specialize() if backendopt: @@ -44,7 +44,7 @@ GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True taggedpointers = False - + def setup_class(cls): funcs0 = [] funcs2 = [] @@ -155,7 +155,7 @@ return run, gct else: return run - + class GenericGCTests(GCTest): GC_CAN_SHRINK_ARRAY = False @@ -190,7 +190,7 @@ j += 1 return 0 return malloc_a_lot - + def test_instances(self): run, statistics = self.runner("instances", statistics=True) run([]) @@ -276,7 +276,7 @@ for i in range(1, 5): res = run([i, i - 1]) assert res == i - 1 # crashes if constants are not considered roots - + def define_string_concatenation(cls): def concat(j, dummy): lst = [] @@ -656,7 +656,7 @@ # return 2 return func - + def test_malloc_nonmovable(self): run = self.runner("malloc_nonmovable") assert int(self.GC_CAN_MALLOC_NONMOVABLE) == run([]) @@ -676,7 +676,7 @@ return 2 return func - + def test_malloc_nonmovable_fixsize(self): run = self.runner("malloc_nonmovable_fixsize") assert run([]) == int(self.GC_CAN_MALLOC_NONMOVABLE) @@ -757,7 +757,7 @@ lltype.free(idarray, flavor='raw') return 0 return f - + def test_many_ids(self): if not self.GC_CAN_TEST_ID: py.test.skip("fails for bad reasons in lltype.py :-(") @@ -813,7 +813,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations(self): run = self.runner("do_malloc_operations") run([]) @@ -850,7 +850,7 @@ else: assert 0, "oups, not found" return f, None, fix_graph_of_g - + def test_do_malloc_operations_in_call(self): run = self.runner("do_malloc_operations_in_call") run([]) @@ -861,7 +861,7 @@ l2 = [] l3 = [] l4 = [] - + def f(): for i in range(10): s = lltype.malloc(S) @@ -1026,7 +1026,7 @@ llop.gc__collect(lltype.Void) return static.p.x + i def cleanup(): - static.p = lltype.nullptr(T1) + static.p = lltype.nullptr(T1) return f, cleanup, None def test_nongc_static_root_minor_collect(self): @@ -1081,7 +1081,7 @@ return 0 return f - + def test_many_weakrefs(self): run = self.runner("many_weakrefs") run([]) @@ -1131,7 +1131,7 @@ def define_adr_of_nursery(cls): class A(object): pass - + def f(): # we need at least 1 obj to allocate a nursery a = A() @@ -1147,9 +1147,9 @@ assert nt1 > nf1 assert nt1 == nt0 return 0 - + return f - + def test_adr_of_nursery(self): run = self.runner("adr_of_nursery") res = run([]) @@ -1175,7 +1175,7 @@ def _teardown(self): self.__ready = False # collecting here is expected GenerationGC._teardown(self) - + GC_PARAMS = {'space_size': 512*WORD, 'nursery_size': 128*WORD, 'translated_to_c': False} diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -356,6 +356,32 @@ return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") + @registering_if(os, 'spawnve') + def register_os_spawnve(self): + os_spawnve = self.llexternal('spawnve', + [rffi.INT, rffi.CCHARP, rffi.CCHARPP, + rffi.CCHARPP], + rffi.INT) + + def spawnve_llimpl(mode, path, args, env): + envstrs = [] + for item in env.iteritems(): + envstrs.append("%s=%s" % item) + + mode = rffi.cast(rffi.INT, mode) + l_args = rffi.liststr2charpp(args) + l_env = rffi.liststr2charpp(envstrs) + childpid = os_spawnve(mode, path, l_args, l_env) + rffi.free_charpp(l_env) + rffi.free_charpp(l_args) + if childpid == -1: + raise OSError(rposix.get_errno(), "os_spawnve failed") + return rffi.cast(lltype.Signed, childpid) + + return extdef([int, str, [str], {str: str}], int, + llimpl=spawnve_llimpl, + export_name="ll_os.ll_os_spawnve") + @registering(os.dup) def register_os_dup(self): os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) diff --git a/pypy/rpython/ootypesystem/rstr.py b/pypy/rpython/ootypesystem/rstr.py --- a/pypy/rpython/ootypesystem/rstr.py +++ b/pypy/rpython/ootypesystem/rstr.py From noreply at buildbot.pypy.org Sat Nov 19 19:21:47 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 19:21:47 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: merged default Message-ID: <20111119182147.8257782A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49550:bdbd25961ae0 Date: 2011-11-19 12:51 -0500 http://bitbucket.org/pypy/pypy/changeset/bdbd25961ae0/ Log: merged default diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,44 @@ +===================== +PyPy 1.7 +===================== + +Highlights +========== + +* numerous performance improvements, PyPy 1.7 is xxx faster than 1.6 + +* numerous bugfixes, compatibility fixes + +* windows fixes + +* stackless and JIT integration + +* numpy progress - dtypes, numpy -> numpypy renaming + +* brand new JSON encoder + +* improved memory footprint on heavy users of C APIs example - tornado + +* cpyext progress + +Things that didn't make it, expect in 1.8 soon +============================================== + +* list strategies + +* multi-dimensional arrays for numpy + +* ARM backend + +* PPC backend + +Things we're working on with unclear ETA +======================================== + +* windows 64 (?) + +* Py3k + +* SSE for numpy + +* specialized objects diff --git a/pypy/jit/backend/conftest.py b/pypy/jit/backend/conftest.py --- a/pypy/jit/backend/conftest.py +++ b/pypy/jit/backend/conftest.py @@ -12,7 +12,7 @@ help="choose a fixed random seed") group.addoption('--backend', action="store", default='llgraph', - choices=['llgraph', 'x86'], + choices=['llgraph', 'cpu'], dest="backend", help="select the backend to run the functions with") group.addoption('--block-length', action="store", type="int", diff --git a/pypy/jit/backend/test/test_random.py b/pypy/jit/backend/test/test_random.py --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -495,9 +495,9 @@ if pytest.config.option.backend == 'llgraph': from pypy.jit.backend.llgraph.runner import LLtypeCPU return LLtypeCPU(None) - elif pytest.config.option.backend == 'x86': - from pypy.jit.backend.x86.runner import CPU386 - return CPU386(None, None) + elif pytest.config.option.backend == 'cpu': + from pypy.jit.backend.detect_cpu import getcpuclass + return getcpuclass()(None, None) else: assert 0, "unknown backend %r" % pytest.config.option.backend diff --git a/pypy/jit/backend/x86/test/test_zll_random.py b/pypy/jit/backend/test/test_zll_stress.py rename from pypy/jit/backend/x86/test/test_zll_random.py rename to pypy/jit/backend/test/test_zll_stress.py diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: diff --git a/pypy/jit/codewriter/codewriter.py b/pypy/jit/codewriter/codewriter.py --- a/pypy/jit/codewriter/codewriter.py +++ b/pypy/jit/codewriter/codewriter.py @@ -104,6 +104,8 @@ else: name = 'unnamed' % id(ssarepr) i = 1 + # escape names for windows + name = name.replace('', '_(lambda)_') extra = '' while name+extra in self._seen_files: i += 1 diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -14,7 +14,7 @@ from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, loop_invariant, elidable, promote, jit_debug, assert_green, AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, - isconstant, isvirtual, promote_string) + isconstant, isvirtual, promote_string, set_param) from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype @@ -1256,15 +1256,18 @@ n -= 1 x += n return x - def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + def f(n, threshold, arg): + if arg: + set_param(myjitdriver, 'threshold', threshold) + else: + set_param(None, 'threshold', threshold) return g(n) - res = self.meta_interp(f, [10, 3]) + res = self.meta_interp(f, [10, 3, 1]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(2) - res = self.meta_interp(f, [10, 13]) + res = self.meta_interp(f, [10, 13, 0]) assert res == 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 self.check_tree_loop_count(0) @@ -2328,8 +2331,8 @@ get_printable_location=get_printable_location) bytecode = "0j10jc20a3" def f(): - myjitdriver.set_param('threshold', 7) - myjitdriver.set_param('trace_eagerness', 1) + set_param(myjitdriver, 'threshold', 7) + set_param(myjitdriver, 'trace_eagerness', 1) i = j = c = a = 1 while True: myjitdriver.jit_merge_point(i=i, j=j, c=c, a=a) @@ -2607,7 +2610,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2625,8 +2628,8 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a']) def f(n, limit): - myjitdriver.set_param('retrace_limit', 3) - myjitdriver.set_param('max_retrace_guards', limit) + set_param(myjitdriver, 'retrace_limit', 3) + set_param(myjitdriver, 'max_retrace_guards', limit) sa = i = a = 0 while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) @@ -2645,7 +2648,7 @@ myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) def f(n, limit): - myjitdriver.set_param('retrace_limit', limit) + set_param(myjitdriver, 'retrace_limit', limit) sa = i = a = 0 node = [1, 2, 3] node[1] = n @@ -2668,10 +2671,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2728,9 +2731,9 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'a', 'i', 'j', 'sa']) bytecode = "ij+Jj+JI" def f(n, a): - myjitdriver.set_param('threshold', 5) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 2) + set_param(None, 'threshold', 5) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 2) pc = sa = i = j = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, a=a) @@ -2793,8 +2796,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'a']) def f(): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 2) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 2) a = A(0) sa = 0 while a.val < 8: @@ -2824,8 +2827,8 @@ return B(self.val + 1) myjitdriver = JitDriver(greens = [], reds = ['sa', 'b', 'a']) def f(b): - myjitdriver.set_param('threshold', 6) - myjitdriver.set_param('trace_eagerness', 4) + set_param(None, 'threshold', 6) + set_param(None, 'trace_eagerness', 4) a = A(0) sa = 0 while a.val < 15: @@ -2862,10 +2865,10 @@ myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'i', 'sa']) bytecode = "0+sI0+SI" def f(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 1) - myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 1) + set_param(None, 'retrace_limit', 5) + set_param(None, 'function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -1,5 +1,5 @@ """Tests for multiple JitDrivers.""" -from pypy.rlib.jit import JitDriver, unroll_safe +from pypy.rlib.jit import JitDriver, unroll_safe, set_param from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.warmspot import get_stats @@ -113,7 +113,7 @@ return n # def loop2(g, r): - myjitdriver1.set_param('function_threshold', 0) + set_param(None, 'function_threshold', 0) while r > 0: myjitdriver2.can_enter_jit(g=g, r=r) myjitdriver2.jit_merge_point(g=g, r=r) diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.objectmodel import compute_hash from pypy.jit.metainterp.warmspot import ll_meta_interp, get_stats from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin @@ -364,7 +364,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(n, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i = j = x = 0 pos = 0 op = '-' @@ -411,7 +411,7 @@ myjitdriver = JitDriver(greens = ['pos'], reds = ['i', 'j', 'n', 'x']) bytecode = "IzJxji" def f(nval, threshold): - myjitdriver.set_param('threshold', threshold) + set_param(myjitdriver, 'threshold', threshold) i, j, x = A(0), A(0), A(0) n = A(nval) pos = 0 diff --git a/pypy/jit/metainterp/test/test_recursive.py b/pypy/jit/metainterp/test/test_recursive.py --- a/pypy/jit/metainterp/test/test_recursive.py +++ b/pypy/jit/metainterp/test/test_recursive.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.jit import JitDriver, we_are_jitted, hint +from pypy.rlib.jit import JitDriver, hint, set_param from pypy.rlib.jit import unroll_safe, dont_look_inside, promote from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import fatalerror @@ -308,8 +308,8 @@ pc += 1 return n def main(n): - myjitdriver.set_param('threshold', 3) - myjitdriver.set_param('trace_eagerness', 5) + set_param(None, 'threshold', 3) + set_param(None, 'trace_eagerness', 5) return f("c-l", n) expected = main(100) res = self.meta_interp(main, [100], enable_opts='', inline=True) @@ -329,7 +329,7 @@ return recursive(n - 1) + 1 return 0 def loop(n): - myjitdriver.set_param("threshold", 10) + set_param(myjitdriver, "threshold", 10) pc = 0 while n: myjitdriver.can_enter_jit(n=n) @@ -351,8 +351,8 @@ return 0 myjitdriver = JitDriver(greens=[], reds=['n']) def loop(n): - myjitdriver.set_param("threshold", 4) - myjitdriver.set_param("trace_eagerness", 2) + set_param(None, "threshold", 4) + set_param(None, "trace_eagerness", 2) while n: myjitdriver.can_enter_jit(n=n) myjitdriver.jit_merge_point(n=n) @@ -482,12 +482,12 @@ TRACE_LIMIT = 66 def main(inline): - myjitdriver.set_param("threshold", 10) - myjitdriver.set_param('function_threshold', 60) + set_param(None, "threshold", 10) + set_param(None, 'function_threshold', 60) if inline: - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) else: - myjitdriver.set_param('inlining', False) + set_param(None, 'inlining', False) return loop(100) res = self.meta_interp(main, [0], enable_opts='', trace_limit=TRACE_LIMIT) @@ -564,11 +564,11 @@ pc += 1 return n def g(m): - myjitdriver.set_param('inlining', True) + set_param(None, 'inlining', True) # carefully chosen threshold to make sure that the inner function # cannot be inlined, but the inner function on its own is small # enough - myjitdriver.set_param('trace_limit', 40) + set_param(None, 'trace_limit', 40) if m > 1000000: f('', 0) result = 0 @@ -1207,9 +1207,9 @@ driver.can_enter_jit(c=c, i=i, v=v) break - def main(c, i, set_param, v): - if set_param: - driver.set_param('function_threshold', 0) + def main(c, i, _set_param, v): + if _set_param: + set_param(driver, 'function_threshold', 0) portal(c, i, v) self.meta_interp(main, [10, 10, False, False], inline=True) diff --git a/pypy/jit/metainterp/test/test_warmspot.py b/pypy/jit/metainterp/test/test_warmspot.py --- a/pypy/jit/metainterp/test/test_warmspot.py +++ b/pypy/jit/metainterp/test/test_warmspot.py @@ -1,10 +1,7 @@ import py -from pypy.jit.metainterp.warmspot import ll_meta_interp from pypy.jit.metainterp.warmspot import get_stats -from pypy.rlib.jit import JitDriver -from pypy.rlib.jit import unroll_safe +from pypy.rlib.jit import JitDriver, set_param, unroll_safe from pypy.jit.backend.llgraph import runner -from pypy.jit.metainterp.history import BoxInt from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES @@ -97,7 +94,7 @@ n = A().m(n) return n def f(n, enable_opts): - myjitdriver.set_param('enable_opts', hlstr(enable_opts)) + set_param(None, 'enable_opts', hlstr(enable_opts)) return g(n) # check that the set_param will override the default diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -1,7 +1,7 @@ import py from pypy.jit.metainterp.warmspot import rpython_ll_meta_interp, ll_meta_interp from pypy.jit.backend.llgraph import runner -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint from pypy.jit.metainterp.jitprof import Profiler from pypy.rpython.lltypesystem import lltype, llmemory @@ -57,9 +57,9 @@ get_printable_location=get_printable_location) def f(i): for param, defl in unroll_parameters: - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.l[0] > 3: @@ -117,8 +117,8 @@ raise ValueError return 2 def main(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while n > 3: diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -120,7 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - op.args[1].value.active): # the jitdriver + (op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -846,11 +847,18 @@ _, PTR_SET_PARAM_STR_FUNCTYPE = self.cpu.ts.get_FuncType( [lltype.Ptr(STR)], lltype.Void) def make_closure(jd, fullfuncname, is_string): - state = jd.warmstate - def closure(i): - if is_string: - i = hlstr(i) - getattr(state, fullfuncname)(i) + if jd is None: + def closure(i): + if is_string: + i = hlstr(i) + for jd in self.jitdrivers_sd: + getattr(jd.warmstate, fullfuncname)(i) + else: + state = jd.warmstate + def closure(i): + if is_string: + i = hlstr(i) + getattr(state, fullfuncname)(i) if is_string: TP = PTR_SET_PARAM_STR_FUNCTYPE else: @@ -859,12 +867,16 @@ return Constant(funcptr, TP) # for graph, block, i in find_set_param(graphs): + op = block.operations[i] - for jd in self.jitdrivers_sd: - if jd.jitdriver is op.args[1].value: - break + if op.args[1].value is not None: + for jd in self.jitdrivers_sd: + if jd.jitdriver is op.args[1].value: + break + else: + assert 0, "jitdriver of set_param() not found" else: - assert 0, "jitdriver of set_param() not found" + jd = None funcname = op.args[2].value key = jd, funcname if key not in closures: diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -9,7 +9,8 @@ unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, - cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, readbufferproc) + cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, + readbufferproc) from pypy.module.cpyext.pyobject import from_ref from pypy.module.cpyext.pyerrors import PyErr_Occurred from pypy.module.cpyext.state import State @@ -175,6 +176,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_objobjargproc(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 2) + w_key, w_value = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, w_value) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.wrap(res) + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -397,3 +397,31 @@ def __str__(self): return "text" assert module.tp_str(C()) == "text" + + def test_mp_ass_subscript(self): + module = self.import_extension('foo', [ + ("new_obj", "METH_NOARGS", + ''' + PyObject *obj; + Foo_Type.tp_as_mapping = &tp_as_mapping; + tp_as_mapping.mp_ass_subscript = mp_ass_subscript; + if (PyType_Ready(&Foo_Type) < 0) return NULL; + obj = PyObject_New(PyObject, &Foo_Type); + return obj; + ''' + )], + ''' + static int + mp_ass_subscript(PyObject *self, PyObject *key, PyObject *value) + { + PyErr_SetNone(PyExc_ZeroDivisionError); + return -1; + } + PyMappingMethods tp_as_mapping; + static PyTypeObject Foo_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "foo.foo", + }; + ''') + obj = module.new_obj() + raises(ZeroDivisionError, obj.__setitem__, 5, None) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -2,7 +2,7 @@ class Module(MixedModule): - applevel_name = 'numpy' + applevel_name = 'numpypy' interpleveldefs = { 'array': 'interp_numarray.SingleDimArray', diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpy +import numpypy inf = float("inf") @@ -13,5 +13,5 @@ def mean(a): if not hasattr(a, "mean"): - a = numpy.array(a) + a = numpypy.array(a) return a.mean() diff --git a/pypy/module/micronumpy/bench/add.py b/pypy/module/micronumpy/bench/add.py --- a/pypy/module/micronumpy/bench/add.py +++ b/pypy/module/micronumpy/bench/add.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): a = numpy.zeros(10000000) diff --git a/pypy/module/micronumpy/bench/iterate.py b/pypy/module/micronumpy/bench/iterate.py --- a/pypy/module/micronumpy/bench/iterate.py +++ b/pypy/module/micronumpy/bench/iterate.py @@ -1,5 +1,8 @@ -import numpy +try: + import numpypy as numpy +except: + import numpy def f(): sum = 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpy import dtype + from numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpy import dtype + from numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,57 +36,57 @@ assert str(d) == "bool" def test_bool_array(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 2.5], dtype='?') - assert a[0] is numpy.False_ + a = array([0, 1, 2, 2.5], dtype='?') + assert a[0] is False_ for i in xrange(1, 4): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_copy_array_with_dtype(self): - import numpy + from numpypy import array, False_, True_ - a = numpy.array([0, 1, 2, 3], dtype=long) + a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit assert isinstance(a[0], (int, long)) b = a.copy() assert isinstance(b[0], (int, long)) - a = numpy.array([0, 1, 2, 3], dtype=bool) - assert a[0] is numpy.False_ + a = array([0, 1, 2, 3], dtype=bool) + assert a[0] is False_ b = a.copy() - assert b[0] is numpy.False_ + assert b[0] is False_ def test_zeros_bool(self): - import numpy + from numpypy import zeros, False_ - a = numpy.zeros(10, dtype=bool) + a = zeros(10, dtype=bool) for i in range(10): - assert a[i] is numpy.False_ + assert a[i] is False_ def test_ones_bool(self): - import numpy + from numpypy import ones, True_ - a = numpy.ones(10, dtype=bool) + a = ones(10, dtype=bool) for i in range(10): - assert a[i] is numpy.True_ + assert a[i] is True_ def test_zeros_long(self): - from numpy import zeros + from numpypy import zeros a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 0 def test_ones_long(self): - from numpy import ones + from numpypy import ones a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], (int, long)) assert a[1] == 1 def test_overflow(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpy import array, dtype + from numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,12 +156,12 @@ assert b[i] == i * 2 def test_shape(self): - from numpy import dtype + from numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpy import dtype + from numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,19 +3,19 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpy import array, mean + from numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpy import array, average + from numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_constants(self): import math - from numpy import inf, e + from numpypy import inf, e assert type(inf) is float assert inf == float("inf") assert e == math.e - assert type(e) is float \ No newline at end of file + assert type(e) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -4,12 +4,12 @@ class AppTestNumArray(BaseNumpyAppTest): def test_type(self): - from numpy import array + from numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_init(self): - from numpy import zeros + from numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -18,7 +18,7 @@ assert a[13] == 5.3 def test_size(self): - from numpy import array + from numpypy import array # XXX fixed on multidim branch #assert array(3).size == 1 a = array([1, 2, 3]) @@ -30,13 +30,13 @@ Test that empty() works. """ - from numpy import empty + from numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpy import ones + from numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -45,19 +45,19 @@ assert a[2] == 4 def test_copy(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.copy() for i in xrange(5): assert b[i] == a[i] def test_iterator_init(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a[3] == 3 def test_repr(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -72,7 +72,7 @@ assert repr(a) == "array([True, False, True, False], dtype=bool)" def test_repr_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -81,7 +81,7 @@ assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" def test_str(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2*a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -100,7 +100,7 @@ assert str(a) == "[0 1 2 3 4]" def test_str_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -109,7 +109,7 @@ assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" def test_getitem(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -118,7 +118,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -128,7 +128,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpy import array + from numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -136,7 +136,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpy import array + from numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -147,7 +147,7 @@ assert a[i] == i def test_setslice_array(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -158,7 +158,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -177,7 +177,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -185,20 +185,20 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_len(self): - from numpy import array + from numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -207,7 +207,7 @@ assert c.shape == (3,) def test_add(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -220,7 +220,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array(range(4, -1, -1)) c = a + b @@ -228,20 +228,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpy import array + from numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpy import array + from numpypy import array a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -250,14 +250,14 @@ assert c[i] == 4 def test_subtract(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -265,29 +265,29 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_mul(self): - import numpy + import numpypy - a = numpy.array(range(5)) + a = numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpy.array(range(5), dtype=bool) + a = numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpy.dtype(bool) - assert b[0] is numpy.False_ + assert b.dtype is numpypy.dtype(bool) + assert b[0] is numpypy.False_ for i in range(1, 5): - assert b[i] is numpy.True_ + assert b[i] is numpypy.True_ def test_mul_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -295,7 +295,7 @@ def test_div(self): from math import isnan - from numpy import array, dtype, inf + from numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -327,7 +327,7 @@ assert c[2] == -inf def test_div_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -335,14 +335,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -350,7 +350,7 @@ assert b[i] == i**i def test_pow_other(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -358,14 +358,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpy import array + from numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) b = a % a for i in range(5): @@ -378,7 +378,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpy import array + from numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -386,14 +386,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = +a for i in range(5): @@ -404,7 +404,7 @@ assert a[i] == i def test_neg(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = -a for i in range(5): @@ -415,7 +415,7 @@ assert a[i] == -i def test_abs(self): - from numpy import array + from numpypy import array a = array([1.,-2.,3.,-4.,-5.]) b = abs(a) for i in range(5): @@ -426,7 +426,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -440,7 +440,7 @@ assert c[1] == 4 def test_getslice(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -454,7 +454,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpy import array + from numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -462,7 +462,7 @@ assert s[i] == a[2*i+1] def test_slice_update(self): - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -473,7 +473,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpy import array + from numpypy import array a = array(range(5)) s = a[0:2] b = array([10,11]) @@ -487,13 +487,13 @@ assert d[1] == 12 def test_mean(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -502,32 +502,32 @@ assert a.sum() == 5 def test_prod(self): - from numpy import array + from numpypy import array a = array(range(1,6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a+a).max() == 11.4 def test_min(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmax() == 2 b = array([]) @@ -537,14 +537,14 @@ assert a.argmax() == 9 def test_argmin(self): - from numpy import array + from numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -553,7 +553,7 @@ assert b.all() == True def test_any(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -562,7 +562,7 @@ assert c.any() == False def test_dot(self): - from numpy import array + from numpypy import array a = array(range(5)) assert a.dot(a) == 30.0 @@ -570,14 +570,14 @@ assert a.dot(range(5)) == 30 def test_dot_constant(self): - from numpy import array + from numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpy import array, dtype + from numpypy import array, dtype assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -590,7 +590,7 @@ def test_comparison(self): import operator - from numpy import array, dtype + from numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -616,7 +616,7 @@ cls.w_data = cls.space.wrap(struct.pack('dddd', 1, 2, 3, 4)) def test_fromstring(self): - from numpy import fromstring + from numpypy import fromstring a = fromstring(self.data) for i in range(4): assert a[i] == i + 1 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpy import add, ufunc + from numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpy import add, multiply, sin + from numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpy import add, sin + from numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpy import negative, sign, minimum + from numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpy import array, negative, minimum + from numpypy import array, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpy import array, absolute + from numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpy import array, add + from numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpy import array, divide + from numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -112,7 +112,7 @@ assert c[i] == a[i] / b[i] def test_fabs(self): - from numpy import array, fabs + from numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -121,7 +121,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpy import array, minimum + from numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -130,7 +130,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpy import array, maximum + from numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -143,7 +143,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpy import array, multiply + from numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -152,7 +152,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpy import array, sign, dtype + from numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -171,7 +171,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpy import array, reciprocal + from numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -180,7 +180,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpy import array, subtract + from numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -189,7 +189,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpy import array, floor + from numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -198,7 +198,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpy import array, copysign + from numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -214,7 +214,7 @@ def test_exp(self): import math - from numpy import array, exp + from numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -228,7 +228,7 @@ def test_sin(self): import math - from numpy import array, sin + from numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -241,7 +241,7 @@ def test_cos(self): import math - from numpy import array, cos + from numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -250,7 +250,7 @@ def test_tan(self): import math - from numpy import array, tan + from numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -260,7 +260,7 @@ def test_arcsin(self): import math - from numpy import array, arcsin + from numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -274,7 +274,7 @@ def test_arccos(self): import math - from numpy import array, arccos + from numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -289,7 +289,7 @@ def test_arctan(self): import math - from numpy import array, arctan + from numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -302,7 +302,7 @@ def test_arcsinh(self): import math - from numpy import arcsinh, inf + from numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -310,7 +310,7 @@ def test_arctanh(self): import math - from numpy import arctanh + from numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -320,13 +320,13 @@ assert arctanh(v) == math.copysign(float("inf"), v) def test_reduce_errors(self): - from numpy import sin, add + from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpy import add, maximum + from numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -335,7 +335,7 @@ def test_comparisons(self): import operator - from numpy import equal, not_equal, less, less_equal, greater, greater_equal + from numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -6,6 +6,7 @@ from pypy.tool.pairtype import extendabletype from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.jit import JitDriver, hint, we_are_jitted, dont_look_inside +from pypy.rlib import jit from pypy.rlib.jit import current_trace_length, unroll_parameters import pypy.interpreter.pyopcode # for side-effects from pypy.interpreter.error import OperationError, operationerrfmt @@ -200,18 +201,18 @@ if len(args_w) == 1: text = space.str_w(args_w[0]) try: - pypyjitdriver.set_user_param(text) + jit.set_user_param(None, text) except ValueError: raise OperationError(space.w_ValueError, space.wrap("error in JIT parameters string")) for key, w_value in kwds_w.items(): if key == 'enable_opts': - pypyjitdriver.set_param('enable_opts', space.str_w(w_value)) + jit.set_param(None, 'enable_opts', space.str_w(w_value)) else: intval = space.int_w(w_value) for name, _ in unroll_parameters: if name == key and name != 'enable_opts': - pypyjitdriver.set_param(name, intval) + jit.set_param(None, name, intval) break else: raise operationerrfmt(space.w_TypeError, diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -3,16 +3,22 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform +import sys cdir = py.path.local(pypydir) / 'translator' / 'c' - +_sep_mods = [] +if sys.platform == 'win32': + _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] + eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], + separate_module_files = _sep_mods ) + rffi_platform.verify_eci(eci.convert_sources_to_files()) def llexternal(name, args, result, **kwds): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,55 +450,6 @@ # special-cased by ExtRegistryEntry pass - def _set_param(self, name, value): - # special-cased by ExtRegistryEntry - # (internal, must receive a constant 'name') - # if value is DEFAULT, sets the default value. - assert name in PARAMETERS - - @specialize.arg(0, 1) - def set_param(self, name, value): - """Set one of the tunable JIT parameter.""" - self._set_param(name, value) - - @specialize.arg(0, 1) - def set_param_to_default(self, name): - """Reset one of the tunable JIT parameters to its default value.""" - self._set_param(name, DEFAULT) - - def set_user_param(self, text): - """Set the tunable JIT parameters from a user-supplied string - following the format 'param=value,param=value', or 'off' to - disable the JIT. For programmatic setting of parameters, use - directly JitDriver.set_param(). - """ - if text == 'off': - self.set_param('threshold', -1) - self.set_param('function_threshold', -1) - return - if text == 'default': - for name1, _ in unroll_parameters: - self.set_param_to_default(name1) - return - for s in text.split(','): - s = s.strip(' ') - parts = s.split('=') - if len(parts) != 2: - raise ValueError - name = parts[0] - value = parts[1] - if name == 'enable_opts': - self.set_param('enable_opts', value) - else: - for name1, _ in unroll_parameters: - if name1 == name and name1 != 'enable_opts': - try: - self.set_param(name1, int(value)) - except ValueError: - raise - set_user_param._annspecialcase_ = 'specialize:arg(0)' - - def on_compile(self, logger, looptoken, operations, type, *greenargs): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like @@ -524,16 +475,61 @@ self.jit_merge_point = self.jit_merge_point self.can_enter_jit = self.can_enter_jit self.loop_header = self.loop_header - self._set_param = self._set_param - class Entry(ExtEnterLeaveMarker): _about_ = (self.jit_merge_point, self.can_enter_jit) class Entry(ExtLoopHeader): _about_ = self.loop_header - class Entry(ExtSetParam): - _about_ = self._set_param +def _set_param(driver, name, value): + # special-cased by ExtRegistryEntry + # (internal, must receive a constant 'name') + # if value is DEFAULT, sets the default value. + assert name in PARAMETERS + + at specialize.arg(0, 1) +def set_param(driver, name, value): + """Set one of the tunable JIT parameter. Driver can be None, then all + drivers have this set """ + _set_param(driver, name, value) + + at specialize.arg(0, 1) +def set_param_to_default(driver, name): + """Reset one of the tunable JIT parameters to its default value.""" + _set_param(driver, name, DEFAULT) + +def set_user_param(driver, text): + """Set the tunable JIT parameters from a user-supplied string + following the format 'param=value,param=value', or 'off' to + disable the JIT. For programmatic setting of parameters, use + directly JitDriver.set_param(). + """ + if text == 'off': + set_param(driver, 'threshold', -1) + set_param(driver, 'function_threshold', -1) + return + if text == 'default': + for name1, _ in unroll_parameters: + set_param_to_default(driver, name1) + return + for s in text.split(','): + s = s.strip(' ') + parts = s.split('=') + if len(parts) != 2: + raise ValueError + name = parts[0] + value = parts[1] + if name == 'enable_opts': + set_param(driver, 'enable_opts', value) + else: + for name1, _ in unroll_parameters: + if name1 == name and name1 != 'enable_opts': + try: + set_param(driver, name1, int(value)) + except ValueError: + raise +set_user_param._annspecialcase_ = 'specialize:arg(0)' + # ____________________________________________________________ # @@ -705,8 +701,9 @@ resulttype=lltype.Void) class ExtSetParam(ExtRegistryEntry): + _about_ = _set_param - def compute_result_annotation(self, s_name, s_value): + def compute_result_annotation(self, s_driver, s_name, s_value): from pypy.annotation import model as annmodel assert s_name.is_constant() if not self.bookkeeper.immutablevalue(DEFAULT).contains(s_value): @@ -722,21 +719,22 @@ from pypy.objspace.flow.model import Constant hop.exception_cannot_occur() - driver = self.instance.im_self - name = hop.args_s[0].const + driver = hop.inputarg(lltype.Void, arg=0) + name = hop.args_s[1].const if name == 'enable_opts': repr = string_repr else: repr = lltype.Signed - if (isinstance(hop.args_v[1], Constant) and - hop.args_v[1].value is DEFAULT): + if (isinstance(hop.args_v[2], Constant) and + hop.args_v[2].value is DEFAULT): value = PARAMETERS[name] v_value = hop.inputconst(repr, value) else: - v_value = hop.inputarg(repr, arg=1) + v_value = hop.inputarg(repr, arg=2) vlist = [hop.inputconst(lltype.Void, "set_param"), - hop.inputconst(lltype.Void, driver), + driver, hop.inputconst(lltype.Void, name), v_value] return hop.genop('jit_marker', vlist, resulttype=lltype.Void) + diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -123,9 +123,10 @@ def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 - if final_size == ll_builder.allocated: - return ll_builder.buf - return rgc.ll_shrink_array(ll_builder.buf, final_size) + if final_size < ll_builder.allocated: + ll_builder.allocated = final_size + ll_builder.buf = rgc.ll_shrink_array(ll_builder.buf, final_size) + return ll_builder.buf @classmethod def ll_is_true(cls, ll_builder): diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -1,3 +1,4 @@ +from __future__ import with_statement import py from pypy.rlib.rstring import StringBuilder, UnicodeBuilder diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1318,6 +1318,23 @@ res = self.run('string_builder_over_allocation') assert res[1000] == 'y' + def definestr_string_builder_multiple_builds(cls): + import gc + def fn(_): + s = StringBuilder(4) + got = [] + for i in range(50): + s.append(chr(33+i)) + got.append(s.build()) + gc.collect() + return ' '.join(got) + return fn + + def test_string_builder_multiple_builds(self): + res = self.run('string_builder_multiple_builds') + assert res == ' '.join([''.join(map(chr, range(33, 33+length))) + for length in range(1, 51)]) + def define_nursery_hash_base(cls): from pypy.rlib.objectmodel import compute_identity_hash class A: diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -59,7 +59,11 @@ compile_args = self._compile_args_from_eci(eci, standalone) ofiles = [] for cfile in cfiles: - ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) + # Windows hack: use masm for files ending in .asm + if str(cfile).lower().endswith('.asm'): + ofiles.append(self._compile_c_file(self.masm, cfile, [])) + else: + ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) return ofiles def execute(self, executable, args=None, env=None, compilation_info=None): diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -80,7 +80,7 @@ shared_only = () environ = None - def __init__(self, cc=None): + def __init__(self, cc=None, x64=False): Platform.__init__(self, 'cl.exe') if msvc_compiler_environ: self.c_environ = os.environ.copy() @@ -103,9 +103,16 @@ env=self.c_environ) r = re.search('Macro Assembler', stderr) if r is None and os.path.exists('c:/masm32/bin/ml.exe'): - self.masm = 'c:/masm32/bin/ml.exe' + masm32 = 'c:/masm32/bin/ml.exe' + masm64 = 'c:/masm64/bin/ml64.exe' else: - self.masm = 'ml.exe' + masm32 = 'ml.exe' + masm64 = 'ml64.exe' + + if x64: + self.masm = masm64 + else: + self.masm = masm32 # Install debug options only when interpreter is in debug mode if sys.executable.lower().endswith('_d.exe'): @@ -165,7 +172,7 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') - args = ['/nologo', '/c'] + compile_args + [str(cfile), '/Fo%s' % (oname,)] + args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname From noreply at buildbot.pypy.org Sat Nov 19 19:21:48 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 19:21:48 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: update for new module name Message-ID: <20111119182148.B0FC282A9E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49551:be3f1bb6a45c Date: 2011-11-19 12:53 -0500 http://bitbucket.org/pypy/pypy/changeset/be3f1bb6a45c/ Log: update for new module name diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -168,7 +168,7 @@ class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpy + import numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -179,7 +179,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpy + import numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -193,7 +193,7 @@ assert X(True) is numpy.True_ def test_int8(self): - import numpy + import numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -208,7 +208,7 @@ assert repr(x) == "-128" def test_float64(self): - import numpy + import numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -219,7 +219,7 @@ assert numpy.float64(2.0) == 2.0 def test_subclass_type(self): - import numpy + import numpypy as numpy class X(numpy.float64): def m(self): From noreply at buildbot.pypy.org Sat Nov 19 19:21:49 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 19:21:49 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: running all tests together now passes Message-ID: <20111119182149.E07B682A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49552:93b855d2ce60 Date: 2011-11-19 13:21 -0500 http://bitbucket.org/pypy/pypy/changeset/93b855d2ce60/ Log: running all tests together now passes diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -270,11 +270,11 @@ def impl(res_dtype, value): return getattr(res_dtype.itemtype, op_name)(value) elif argcount == 2: + dtype_cache = interp_dtype.get_dtype_cache(space) def impl(res_dtype, lvalue, rvalue): res = getattr(res_dtype.itemtype, op_name)(lvalue, rvalue) if comparison_func: - bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype - res = bool_dtype.box(res) + return dtype_cache.w_booldtype.box(res) return res return func_with_new_name(impl, ufunc_name) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -200,7 +200,9 @@ ]) s = SingleDimSlice(0, step*i, step, i, ar, new_sig) v = interp_ufuncs.get(self.space).add.call(self.space, [s, s]) - return v.get_concrete().eval(3).value + v = v.get_concrete().eval(3) + assert isinstance(v, interp_boxes.W_Float64Box) + return v.value result = self.meta_interp(f, [5], listops=True, backendopt=True) self.check_loops({'int_mul': 1, 'getinteriorfield_raw': 2, 'float_add': 1, @@ -222,7 +224,9 @@ ]) s2 = SingleDimSlice(0, step2*i, step2, i, ar, new_sig) v = interp_ufuncs.get(self.space).add.call(self.space, [s1, s2]) - return v.get_concrete().eval(3).value + v = v.get_concrete().eval(3) + assert isinstance(v, interp_boxes.W_Float64Box) + return v.value result = self.meta_interp(f, [5], listops=True, backendopt=True) self.check_loops({'int_mul': 2, 'getinteriorfield_raw': 2, 'float_add': 1, @@ -241,7 +245,9 @@ ar2.get_concrete().setitem(1, float64_dtype.box(5.5)) arg = ar2.descr_add(space, ar2) ar.setslice(space, 0, step*i, step, i, arg) - return ar.get_concrete().eval(3).value + v = ar.get_concrete().eval(3) + assert isinstance(v, interp_boxes.W_Float64Box) + return v.value result = self.meta_interp(f, [5], listops=True, backendopt=True) self.check_loops({'getinteriorfield_raw': 2, From noreply at buildbot.pypy.org Sat Nov 19 20:44:21 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 20:44:21 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: recognize scalars in array's constrcutor when guessing types Message-ID: <20111119194421.9B5A582A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49553:f65a5c270040 Date: 2011-11-19 14:44 -0500 http://bitbucket.org/pypy/pypy/changeset/f65a5c270040/ Log: recognize scalars in array's constrcutor when guessing types diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,7 +2,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_dtype, signature, types +from pypy.module.micronumpy import interp_boxes, interp_dtype, signature, types from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -248,6 +248,12 @@ long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype int64_dtype = interp_dtype.get_dtype_cache(space).w_int64dtype + if isinstance(w_obj, interp_boxes.W_GenericBox): + dtype = w_obj.get_dtype(space) + if current_guess is None: + return dtype + return find_binop_result_dtype(dtype, current_guess) + if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: return bool_dtype diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -577,7 +577,7 @@ assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype + from numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -587,6 +587,10 @@ assert array([1.2, True]).dtype is dtype(float) assert array([1.2, 5]).dtype is dtype(float) assert array([]).dtype is dtype(float) + assert array([float64(2)]).dtype is dtype(float) + assert array([int8(3)]).dtype is dtype("int8") + assert array([bool_(True)]).dtype is dtype(bool) + assert array([bool_(True), 3.0]).dtype is dtype(float) def test_comparison(self): import operator From noreply at buildbot.pypy.org Sat Nov 19 21:54:04 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 21:54:04 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: fix a bunch of tests, not sure how I ddi'nt catch this before Message-ID: <20111119205404.1A45782A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49554:43e36f217520 Date: 2011-11-19 15:53 -0500 http://bitbucket.org/pypy/pypy/changeset/43e36f217520/ Log: fix a bunch of tests, not sure how I ddi'nt catch this before diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -112,19 +112,19 @@ descr__new__, get_dtype = new_dtype_getter("int8") class W_UInt8Box(W_UnsignedIntgerBox, PrimitiveBox): - pass + descr__new__, get_dtype = new_dtype_getter("uint8") class W_Int16Box(W_SignedIntegerBox, PrimitiveBox): - pass + descr__new__, get_dtype = new_dtype_getter("int16") class W_UInt16Box(W_UnsignedIntgerBox, PrimitiveBox): - pass + descr__new__, get_dtype = new_dtype_getter("uint16") class W_Int32Box(W_SignedIntegerBox, PrimitiveBox): pass class W_UInt32Box(W_UnsignedIntgerBox, PrimitiveBox): - pass + descr__new__, get_dtype = new_dtype_getter("uint32") class W_LongBox(W_SignedIntegerBox, PrimitiveBox): descr__new__, get_dtype = new_dtype_getter("long") diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -252,7 +252,7 @@ dtype = w_obj.get_dtype(space) if current_guess is None: return dtype - return find_binop_result_dtype(dtype, current_guess) + return find_binop_result_dtype(space, dtype, current_guess) if space.isinstance_w(w_obj, space.w_bool): if current_guess is None or current_guess is bool_dtype: From noreply at buildbot.pypy.org Sat Nov 19 22:04:29 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 19 Nov 2011 22:04:29 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor: expose another thing at app level Message-ID: <20111119210429.BF18182A9D@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor Changeset: r49555:d7a04dcb76c5 Date: 2011-11-19 16:04 -0500 http://bitbucket.org/pypy/pypy/changeset/d7a04dcb76c5/ Log: expose another thing at app level diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -23,6 +23,7 @@ 'signedinteger': 'interp_boxes.W_SignedIntegerBox', 'bool_': 'interp_boxes.W_BoolBox', 'int8': 'interp_boxes.W_Int8Box', + 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', 'float64': 'interp_boxes.W_Float64Box', diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -207,6 +207,12 @@ assert type(x) is numpy.int8 assert repr(x) == "-128" + def test_int_(self): + import numpypy as numpy + + assert numpy.int_ is numpy.dtype(int).type + assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] + def test_float64(self): import numpypy as numpy From noreply at buildbot.pypy.org Sat Nov 19 22:34:39 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 19 Nov 2011 22:34:39 +0100 (CET) Subject: [pypy-commit] pypy default: got continuelets ready for windows. Message-ID: <20111119213439.3603482A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: Changeset: r49556:b64cba156148 Date: 2011-11-19 22:33 +0100 http://bitbucket.org/pypy/pypy/changeset/b64cba156148/ Log: got continuelets ready for windows. There were two things where I stumbled quite a while. 1) _compile_c_file was sloppy with the defined order of arguments, where masm had different behavior. 2) it was needed to add external definitions, but it was not clear to me that this already works and nothing than an entry in export_symbols was needed. diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -8,16 +8,21 @@ cdir = py.path.local(pypydir) / 'translator' / 'c' -_sep_mods = [] -if sys.platform == 'win32': - _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] - eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], - separate_module_files = _sep_mods ) +if sys.platform == 'win32': + eci.separate_module_files += (cdir / "src/stacklet/switch_x86_msvc.asm", ) + eci.export_symbols += ( + 'stacklet_newthread', + 'stacklet_deletethread', + 'stacklet_new', + 'stacklet_switch', + 'stacklet_destroy', + '_stacklet_translate_pointer', + ) rffi_platform.verify_eci(eci.convert_sources_to_files()) diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -172,6 +172,12 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') + # notabene: (tismer) + # This function may be called for .c but also .asm files. + # The c compiler accepts any order of arguments, while + # the assembler still has the old behavior that all options + # must come first, and after the file name all options are ignored. + # So please be careful with the oder of parameters! ;-) args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname From noreply at buildbot.pypy.org Sat Nov 19 22:56:27 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 19 Nov 2011 22:56:27 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: mertsch Message-ID: <20111119215627.EC01982A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49557:a096ecd9fea3 Date: 2011-11-19 22:55 +0100 http://bitbucket.org/pypy/pypy/changeset/a096ecd9fea3/ Log: mertsch diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -1,6 +1,6 @@ import py, os, sys from pypy.tool.udir import udir -from pypy.rlib.jit import JitDriver, unroll_parameters +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside from pypy.rlib.jit import promote from pypy.jit.metainterp.jitprof import Profiler @@ -47,9 +47,9 @@ def f(i, j): for param, _ in unroll_parameters: defl = PARAMETERS[param] - jitdriver.set_param(param, defl) - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) while frame.i > 3: @@ -213,8 +213,8 @@ else: return Base() def myportal(i): - jitdriver.set_param("threshold", 3) - jitdriver.set_param("trace_eagerness", 2) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) total = 0 n = i while True: diff --git a/pypy/jit/codewriter/codewriter.py b/pypy/jit/codewriter/codewriter.py --- a/pypy/jit/codewriter/codewriter.py +++ b/pypy/jit/codewriter/codewriter.py @@ -104,6 +104,8 @@ else: name = 'unnamed' % id(ssarepr) i = 1 + # escape names for windows + name = name.replace('', '_(lambda)_') extra = '' while name+extra in self._seen_files: i += 1 diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -8,16 +8,21 @@ cdir = py.path.local(pypydir) / 'translator' / 'c' -_sep_mods = [] -if sys.platform == 'win32': - _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] - eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], - separate_module_files = _sep_mods ) +if sys.platform == 'win32': + eci.separate_module_files += (cdir / "src/stacklet/switch_x86_msvc.asm", ) + eci.export_symbols += ( + 'stacklet_newthread', + 'stacklet_deletethread', + 'stacklet_new', + 'stacklet_switch', + 'stacklet_destroy', + '_stacklet_translate_pointer', + ) rffi_platform.verify_eci(eci.convert_sources_to_files()) diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -117,9 +117,16 @@ env=self.c_environ) r = re.search('Macro Assembler', stderr) if r is None and os.path.exists('c:/masm32/bin/ml.exe'): - self.masm = 'c:/masm32/bin/ml.exe' + masm32 = 'c:/masm32/bin/ml.exe' + masm64 = 'c:/masm64/bin/ml64.exe' else: - self.masm = 'ml.exe' + masm32 = 'ml.exe' + masm64 = 'ml64.exe' + + if x64: + self.masm = masm64 + else: + self.masm = masm32 # Install debug options only when interpreter is in debug mode if sys.executable.lower().endswith('_d.exe'): @@ -179,6 +186,12 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') + # notabene: (tismer) + # This function may be called for .c but also .asm files. + # The c compiler accepts any order of arguments, while + # the assembler still has the old behavior that all options + # must come first, and after the file name all options are ignored. + # So please be careful with the oder of parameters! ;-) args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname From noreply at buildbot.pypy.org Sat Nov 19 23:30:20 2011 From: noreply at buildbot.pypy.org (ctismer) Date: Sat, 19 Nov 2011 23:30:20 +0100 (CET) Subject: [pypy-commit] pypy win64-stage1: continuelet support on win64, - something doesn't work, checked in anyway Message-ID: <20111119223020.8162B82A9D@wyvern.cs.uni-duesseldorf.de> Author: Christian Tismer Branch: win64-stage1 Changeset: r49558:06ac95b1d2d9 Date: 2011-11-19 23:29 +0100 http://bitbucket.org/pypy/pypy/changeset/06ac95b1d2d9/ Log: continuelet support on win64, - something doesn't work, checked in anyway diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -3,6 +3,7 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.tool import rffi_platform +from pypy.rlib.rarithmetic import is_emulated_long import sys @@ -14,7 +15,11 @@ separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], ) if sys.platform == 'win32': - eci.separate_module_files += (cdir / "src/stacklet/switch_x86_msvc.asm", ) + if is_emulated_long: + asmsrc = 'switch_x64_msvc.asm' + else: + asmsrc = 'switch_x86_msvc.asm' + eci.separate_module_files += (cdir / 'src' / 'stacklet' / asmsrc, ) eci.export_symbols += ( 'stacklet_newthread', 'stacklet_deletethread', From noreply at buildbot.pypy.org Sun Nov 20 00:37:43 2011 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 20 Nov 2011 00:37:43 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: module name change Message-ID: <20111119233743.4B0E482A9D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49559:c6cdb1d70ee8 Date: 2011-11-19 20:35 +0200 http://bitbucket.org/pypy/pypy/changeset/c6cdb1d70ee8/ Log: module name change diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -296,7 +296,7 @@ assert a[3] == 0. def test_scalar(self): - from numpy import array + from numpypy import array a = array(3) assert a[0] == 3 @@ -729,7 +729,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpy import array + from numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -740,22 +740,22 @@ class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpy - a = numpy.zeros((2, 2)) + import numpypy + a = numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpy - assert numpy.zeros(1).shape == (1,) - assert numpy.zeros((2, 2)).shape == (2, 2) - assert numpy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpy.zeros(())) + import numpypy + assert numpypy.zeros(1).shape == (1,) + assert numpypy.zeros((2, 2)).shape == (2, 2) + assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, numpypy.zeros(())) def test_getsetitem(self): - import numpy - a = numpy.zeros((2, 3, 1)) + import numpypy + a = numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -765,8 +765,8 @@ assert a[1, 1, 0] == 0 def test_slices(self): - import numpy - a = numpy.zeros((4, 3, 2)) + import numpypy + a = numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -799,50 +799,50 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpy - raises(ValueError, numpy.array, [[1], 2]) - raises(ValueError, numpy.array, [[1, 2], [3]]) - raises(ValueError, numpy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpy.array, [[[1, 2], [3, 4], [5]]]) - a = numpy.array([[1, 2], [4, 5]]) + import numpypy + raises(ValueError, numpypy.array, [[1], 2]) + raises(ValueError, numpypy.array, [[1, 2], [3]]) + raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpy - a = numpy.zeros((3, 4)) + import numpypy + a = numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpy.array([[1, 2], [3, 4]]) + a = numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpy.array([5, 6]) + a[1] = numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpy.array([8, 10]) + a[:, 1] = numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpy.array([11, 12]) + a[0, :: -1] = numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpy import array + from numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpy import array + from numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpy import array, negative + from numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpy import array + from numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] print a @@ -852,7 +852,7 @@ assert c[1][1] == 12 def test_broadcast_ufunc(self): - from numpy import array + from numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) #print a + b @@ -862,9 +862,9 @@ assert c.all() def test_broadcast_setslice(self): - import numpy - a = numpy.zeros((100, 100)) - b = numpy.ones(100) + import numpypy + a = numpypy.zeros((100, 100)) + b = numpypy.ones(100) a[:, :] = b assert a[13, 15] == 1 @@ -883,7 +883,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -898,7 +898,7 @@ assert repr(a) == "array([True, False, True, False], dtype=bool)" def test_repr_multi(self): - from numpy import array, zeros + from numpypy import array, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -913,7 +913,7 @@ [0.0, 0.0, 0.0, 0.0]]])''' def test_repr_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -928,7 +928,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -955,7 +955,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" def test_str_slice(self): - from numpy import array, zeros + from numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" From noreply at buildbot.pypy.org Sun Nov 20 00:37:44 2011 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 20 Nov 2011 00:37:44 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: broadcast passes tests, code needs review Message-ID: <20111119233744.7C83782A9E@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49560:03025883c2b1 Date: 2011-11-20 01:28 +0200 http://bitbucket.org/pypy/pypy/changeset/03025883c2b1/ Log: broadcast passes tests, code needs review diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -8,8 +8,8 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -numpy_driver = jit.JitDriver(greens = ['signature'], - reds = ['result_size', 'i', 'ri', 'self', +numpy_driver = jit.JitDriver(greens=['signature'], + reds=['result_size', 'i', 'ri', 'self', 'result']) all_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) any_driver = jit.JitDriver(greens=['signature'], reds=['i', 'self', 'dtype']) @@ -39,13 +39,21 @@ shape.append(size) batch = new_batch -class BroadcastDescription(object): - def __init__(self, shape, indices1, indices2): - self.shape = shape - self.indices1 = indices1 - self.indices2 = indices2 +#class BroadcastDescription(object): +# def __init__(self, shape, indices1, indices2): +# self.shape = shape +# self.indices1 = indices1 +# self.indices2 = indices2 + def shape_agreement(space, shape1, shape2): + ret = _shape_agreement(shape1, shape2) + if len(ret) < max(len(shape1), len(shape2)): + raise OperationError(space.w_ValueError, space.wrap( + "shape mismatch: objects cannot be broadcast to a single shape")) + return ret + +def _shape_agreement(shape1, shape2): """ Checks agreement about two shapes with respect to broadcasting. Returns the resulting shape. """ @@ -79,8 +87,9 @@ endshape[i] = left indices2[i + rshift] = False else: - raise OperationError(space.w_ValueError, space.wrap( - "frames are not aligned")) + return [] + #raise OperationError(space.w_ValueError, space.wrap( + # "frames are not aligned")) for i in range(m - n): adjustment = True endshape[i] = remainder[i] @@ -91,7 +100,6 @@ #if not adjustment: # return None return endshape - return BroadcastDescription(endshape, indices1, indices2) def descr_new_array(space, w_subtype, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): @@ -130,7 +138,7 @@ for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem_w(space, arr.storage, arr_iter.offset, w_elem) - arr_iter = arr_iter.next() + arr_iter.next() return arr class BaseIterator(object): @@ -146,11 +154,11 @@ class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 - self.size = size + self.size = size def next(self): self.offset += 1 - return self + #return self def done(self): return self.offset >= self.size @@ -179,7 +187,7 @@ self.offset -= self.arr.backshards[i] else: self._done = True - return self + #return self def done(self): return self._done @@ -187,7 +195,47 @@ def get_offset(self): return self.offset -class ResizingIterator(object): +class BroadcastIterator(BaseIterator): + '''Like a view iterator, but will repeatedly access values + for all iterations across a res_shape, folding the offset + using mod() arithmetic + ''' + def __init__(self, arr, res_shape): + self.indices = [0] * len(res_shape) + self.offset = arr.start + self.shards = [s for s in arr.shards] # Is there a better way to make a copy in rpython? + self.backshards = [s for s in arr.backshards] # Is there a better way to make a copy in rpython? + self.shape_len = len(res_shape) + self.res_shape = res_shape + for i in range(self.shape_len - len(arr.shape)): + self.shards.insert(0, 0) + self.backshards.insert(0, 0) + self._done = False + self.size = sum(arr.shape) + self.arr = arr + + @jit.unroll_safe + def next(self): + shape_len = jit.promote(self.shape_len) + for i in range(shape_len - 1, -1, -1): + if self.indices[i] < self.res_shape[i] - 1: + self.indices[i] += 1 + self.offset += self.shards[i] + break + else: + self.indices[i] = 0 + self.offset -= self.backshards[i] + else: + self._done = True + #return self + + def done(self): + return self._done + + def get_offset(self): + return self.offset % self.size + +class _ResizingIterator(object): def __init__(self, iter, shape, orig_indices): self.shape = shape self.indices = [0] * len(shape) @@ -207,7 +255,7 @@ self.indices[i] = 0 else: self._done = True - return self + #return self def get_offset(self): return self.iter.get_offset() @@ -223,7 +271,7 @@ def next(self): self.left.next() self.right.next() - return self + #return self def done(self): return self.left.done() or self.right.done() @@ -239,7 +287,7 @@ def next(self): self.child.next() - return self + #return self def done(self): return self.child.done() @@ -249,7 +297,8 @@ class ConstantIterator(BaseIterator): def next(self): - return self + pass + #return self def done(self): return False @@ -356,7 +405,7 @@ def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver(greens=['signature'], - reds = ['i', 'result', 'self', 'cur_best', 'dtype']) + reds=['i', 'result', 'self', 'cur_best', 'dtype']) def loop(self): i = self.start_iter(self.shape) result = i.get_offset() @@ -372,7 +421,7 @@ if dtype.ne(new_best, cur_best): result = i.get_offset() cur_best = new_best - i = i.next() + i.next() return result def impl(self, space): size = self.find_size() @@ -390,7 +439,7 @@ all_driver.jit_merge_point(signature=self.signature, self=self, dtype=dtype, i=i) if not dtype.bool(self.eval(i)): return False - i = i.next() + i.next() return True def descr_all(self, space): return space.wrap(self._all()) @@ -403,7 +452,7 @@ dtype=dtype, i=i) if dtype.bool(self.eval(i)): return True - i = i.next() + i.next() return False def descr_any(self, space): return space.wrap(self._any()) @@ -507,9 +556,10 @@ view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - #This should not directly access the start,shards: what happens if order changes? spacer = ',' * comma + ' ' item = self.start + #An iterator would be a nicer way to walk along the 1d array, but how do + # I reset it if printing ellipsis? iterators have no "set_offset()" i = 0 if use_ellipsis: for i in range(3): @@ -521,6 +571,7 @@ item += self.shards[0] #Add a comma only if comma is False - this prevents adding two commas builder.append(spacer + '...' + ',' * (1 - comma)) + #Ugly, but can this be done with an iterator? item = self.start + self.backshards[0] - 2 * self.shards[0] i = self.shape[0] - 3 while i < self.shape[0]: @@ -779,8 +830,8 @@ result_size=result_size, i=i, ri=ri, self=self, result=result) result.dtype.setitem(result.storage, ri.offset, self.eval(i)) - i = i.next() - ri = ri.next() + i.next() + ri.next() return result def force_if_needed(self): @@ -852,23 +903,22 @@ self.left = left self.right = right self.calc_dtype = calc_dtype + self.size = 1 + for s in self.shape: + self.size *= s def _del_sources(self): self.left = None self.right = None def _find_size(self): - try: - return self.left.find_size() - except ValueError: - pass - return self.right.find_size() + return self.size def start_iter(self, res_shape=None): if self.forced_result is not None: return self.forced_result.start_iter(res_shape) if res_shape is None: - res_shape = self.shape # we still force the shape on children + res_shape = self.shape # we still force the shape on children return Call2Iterator(self.left.start_iter(res_shape), self.right.start_iter(res_shape)) @@ -906,7 +956,7 @@ return self.parent.getitem(item) def eval(self, iter): - assert isinstance(iter, ViewIterator) + assert isinstance(iter, (ViewIterator, BroadcastIterator)) return self.parent.getitem(iter.offset) @unwrap_spec(item=int) @@ -961,12 +1011,19 @@ source_iter=source_iter) self.setitem(res_iter.offset, source.eval(source_iter).convert_to( self.find_dtype())) - source_iter = source_iter.next() - res_iter = res_iter.next() + source_iter.next() + res_iter.next() def start_iter(self, res_shape=None): if res_shape is not None and res_shape != self.shape: - raise NotImplementedError # xxx + # I would prefer to throw the exception using a space, + # but do not have access to one here. Is there a way + # to get access to one and pass it into shape_agreement? + res_shape = _shape_agreement(self.shape, res_shape) + if len(res_shape) < len(self.shape): + raise ValueError("shape mismatch: objects cannot" + \ + " be broadcast to a single shape") + return BroadcastIterator(self, res_shape) #return ResizingIterator(ViewIterator(self), res_shape, orig_indices) return ViewIterator(self) @@ -1003,7 +1060,7 @@ return self.dtype.getitem(self.storage, item) def eval(self, iter): - assert isinstance(iter, ArrayIterator) + assert isinstance(iter, (ArrayIterator, BroadcastIterator)) return self.dtype.getitem(self.storage, iter.offset) def descr_len(self, space): @@ -1023,7 +1080,14 @@ def start_iter(self, res_shape=None): if self.order == 'C': if res_shape is not None and res_shape != self.shape: - raise NotImplementedError # xxx + res_shape = _shape_agreement(self.shape, res_shape) + # I would prefer to throw the exception using a space, + # but do not have access to one here. Is there a way + # to get access to one and pass it into shape_agreement? + if len(res_shape) < len(self.shape): + raise ValueError("shape mismatch: objects cannot " + \ + "be broadcast to a single shape") + return BroadcastIterator(self, res_shape) return ArrayIterator(self.size) raise NotImplementedError # use ViewIterator simply, test it diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -31,12 +31,12 @@ def test_slice_signature(self, space): ar = NDimArray(10, [10], dtype=space.fromcache(interp_dtype.W_Float64Dtype)) - v1 = ar.descr_getitem(space, space.wrap(slice(1, 5, 1))) + v1 = ar.descr_getitem(space, space.wrap(slice(1, 3, 1))) v2 = ar.descr_getitem(space, space.wrap(slice(4, 6, 1))) assert v1.signature is v2.signature - v3 = ar.descr_add(space, v1) - v4 = ar.descr_add(space, v2) + v3 = v2.descr_add(space, v1) + v4 = v1.descr_add(space, v2) assert v3.signature is v4.signature class TestUfuncCoerscion(object): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -475,7 +475,7 @@ def test_mod(self): from numpypy import array - a = array(range(1,6)) + a = array(range(1, 6)) b = a % a for i in range(5): assert b[i] == 0 @@ -855,18 +855,19 @@ from numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) - #print a + b - c = ((a + b) == [[1+5, 2+6], [3+5, 4+6], [5+5, 6+6]]) - print c - print c.all() + c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - import numpypy - a = numpypy.zeros((100, 100)) - b = numpypy.ones(100) + from numpypy import zeros, ones, array + a = zeros((100, 100)) + b = ones(100) a[:, :] = b assert a[13, 15] == 1 + a = zeros((3, 1, 3)) + b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) + c = ((a + b) == [b, b, b]) + assert c.all() class AppTestSupport(object): def setup_class(cls): From noreply at buildbot.pypy.org Sun Nov 20 00:37:45 2011 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 20 Nov 2011 00:37:45 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: clean up tests since broadcast, iterator changes broke them Message-ID: <20111119233745.A926C82A9D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49561:47f179be6866 Date: 2011-11-20 01:36 +0200 http://bitbucket.org/pypy/pypy/changeset/47f179be6866/ Log: clean up tests since broadcast, iterator changes broke them diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -82,7 +82,7 @@ value=value, obj=obj, i=i, dtype=dtype) value = self.func(dtype, value, obj.eval(i).convert_to(dtype)) - i = i.next() + i.next() return value class W_Ufunc1(W_Ufunc): From noreply at buildbot.pypy.org Sun Nov 20 06:07:24 2011 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 20 Nov 2011 06:07:24 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: add shape_agreement broadcast test, fix for it Message-ID: <20111120050724.8845C82A9D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-multidim-shards Changeset: r49562:cf0782e42b72 Date: 2011-11-20 07:04 +0200 http://bitbucket.org/pypy/pypy/changeset/cf0782e42b72/ Log: add shape_agreement broadcast test, fix for it diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -203,8 +203,16 @@ def __init__(self, arr, res_shape): self.indices = [0] * len(res_shape) self.offset = arr.start - self.shards = [s for s in arr.shards] # Is there a better way to make a copy in rpython? - self.backshards = [s for s in arr.backshards] # Is there a better way to make a copy in rpython? + #shards are 0 where original shape==1 + self.shards = [] + self.backshards = [] + for i in range(len(arr.shape)): + if arr.shape[i]==1: + self.shards.append(0) + self.backshards.append(0) + else: + self.shards.append(arr.shards[i]) + self.backshards.append(arr.backshards[i]) self.shape_len = len(res_shape) self.res_shape = res_shape for i in range(self.shape_len - len(arr.shape)): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -859,15 +859,25 @@ assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones, array + from numpypy import zeros, ones a = zeros((100, 100)) b = ones(100) a[:, :] = b assert a[13, 15] == 1 + + def test_broadcast_shape_agreement(self): + from numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) assert c.all() + a = array((((10,11,12), ), ((20, 21, 22), ), ((30,31,32), ))) + assert(a.shape == (3, 1, 3)) + d = zeros((3, 3)) + c = ((a + d) == [b, b, b]) + c = ((a + d) == array([[[10., 11., 12.]]*3, [[20.,21.,22.]]*3, [[30.,31.,32.]]*3])) + assert c.all() + class AppTestSupport(object): def setup_class(cls): From noreply at buildbot.pypy.org Sun Nov 20 11:13:11 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 11:13:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: make ones support multidim arrays. Also write a passing test Message-ID: <20111120101311.DA00882297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49570:29fe0349fa99 Date: 2011-11-20 12:12 +0200 http://bitbucket.org/pypy/pypy/changeset/29fe0349fa99/ Log: make ones support multidim arrays. Also write a passing test diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1049,13 +1049,21 @@ shape.append(item) return space.wrap(NDimArray(size, shape[:], dtype=dtype)) - at unwrap_spec(size=int) -def ones(space, size, w_dtype=None): +def ones(space, w_size, w_dtype=None): dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - - arr = NDimArray(size, [size], dtype=dtype) + if space.isinstance_w(w_size, space.w_int): + size = space.int_w(w_size) + shape = [size] + else: + size = 1 + shape = [] + for w_item in space.fixedview(w_size): + item = space.int_w(w_item) + size *= item + shape.append(item) + arr = NDimArray(size, shape[:], dtype=dtype) one = dtype.adapt_val(1) arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -851,6 +851,11 @@ c = b + b assert c[1][1] == 12 + def test_multidim_ones(self): + from numpypy import ones + a = ones((1, 2, 3)) + assert a[0, 1, 2] == 1.0 + def test_broadcast_ufunc(self): from numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) @@ -877,7 +882,13 @@ c = ((a + d) == [b, b, b]) c = ((a + d) == array([[[10., 11., 12.]]*3, [[20.,21.,22.]]*3, [[30.,31.,32.]]*3])) assert c.all() - + + def test_broadcast_call2(self): + from numpypy import zeros, ones + a = zeros((4, 1, 5)) + b = ones((4, 3, 5)) + b[:] = (a + a) + assert (b == zeros((4, 3, 5))).all() class AppTestSupport(object): def setup_class(cls): From noreply at buildbot.pypy.org Sun Nov 20 11:18:53 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 20 Nov 2011 11:18:53 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: rename Message-ID: <20111120101853.5D96882297@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49571:8df495b92a7d Date: 2011-11-19 16:42 +0100 http://bitbucket.org/pypy/pypy/changeset/8df495b92a7d/ Log: rename diff --git a/pypy/jit/metainterp/test/test_logger.py b/pypy/jit/metainterp/test/test_logger.py --- a/pypy/jit/metainterp/test/test_logger.py +++ b/pypy/jit/metainterp/test/test_logger.py @@ -5,7 +5,7 @@ from pypy.jit.metainterp.typesystem import llhelper from StringIO import StringIO from pypy.jit.metainterp.optimizeopt.util import equaloplists -from pypy.jit.metainterp.history import AbstractDescr, LoopToken, BasicFailDescr +from pypy.jit.metainterp.history import AbstractDescr, JitCellToken, BasicFailDescr from pypy.jit.backend.model import AbstractCPU @@ -131,7 +131,7 @@ equaloplists(loop.operations, oloop.operations) def test_jump(self): - namespace = {'target': LoopToken()} + namespace = {'target': JitCellToken()} namespace['target'].number = 3 inp = ''' [i0] diff --git a/pypy/jit/tool/test/test_jitoutput.py b/pypy/jit/tool/test/test_jitoutput.py --- a/pypy/jit/tool/test/test_jitoutput.py +++ b/pypy/jit/tool/test/test_jitoutput.py @@ -36,12 +36,12 @@ assert info.tracing_no == 1 assert info.asm_no == 1 assert info.blackhole_no == 1 - assert info.backend_no == 2 + assert info.backend_no == 1 assert info.ops.total == 2 assert info.recorded_ops.total == 2 assert info.recorded_ops.calls == 0 assert info.guards == 1 - assert info.opt_ops == 11 + assert info.opt_ops == 13 assert info.opt_guards == 2 assert info.forcings == 0 diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,7 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, LoopToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken class BaseTestOparser(object): @@ -122,7 +122,7 @@ assert loop.operations[0].getdescr() is loop.token def test_jump_target_other(self): - looptoken = LoopToken() + looptoken = JitCellToken() looptoken.I_am_a_descr = True # for the mock case x = ''' [] From noreply at buildbot.pypy.org Sun Nov 20 11:18:54 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 20 Nov 2011 11:18:54 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: centralize the unrolling call to optimizeopt Message-ID: <20111120101854.9086182297@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49572:37eed202c113 Date: 2011-11-20 11:18 +0100 http://bitbucket.org/pypy/pypy/changeset/37eed202c113/ Log: centralize the unrolling call to optimizeopt diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -54,18 +54,6 @@ # ____________________________________________________________ -class FakeDescr(compile.ResumeGuardDescr): - class rd_snapshot: - class prev: - prev = None - boxes = [] - boxes = [] - def clone_if_mutable(self): - return FakeDescr() - def __eq__(self, other): - return isinstance(other, Storage) or isinstance(other, FakeDescr) - - class BaseTestWithUnroll(BaseTest): enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll" @@ -79,49 +67,8 @@ expected_preamble = self.parse(expected_preamble) if expected_short: expected_short = self.parse(expected_short) - operations = loop.operations - jumpop = operations[-1] - assert jumpop.getopnum() == rop.JUMP - inputargs = loop.inputargs - - jump_args = jumpop.getarglist()[:] - operations = operations[:-1] - cloned_operations = [op.clone() for op in operations] - - preamble = TreeLoop('preamble') - preamble.inputargs = inputargs - preamble.start_resumedescr = FakeDescr() - - token = JitCellToken() - preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ - operations + \ - [ResOperation(rop.JUMP, jump_args, None, descr=token)] - self._do_optimize_loop(preamble, call_pure_results) - - assert preamble.operations[-1].getopnum() == rop.LABEL - - inliner = Inliner(inputargs, jump_args) - loop.start_resumedescr = preamble.start_resumedescr - loop.operations = [preamble.operations[-1]] + \ - [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ - [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jump_args], - None, descr=token)] - #[inliner.inline_op(jumpop)] - assert loop.operations[-1].getopnum() == rop.JUMP - assert loop.operations[0].getopnum() == rop.LABEL - loop.inputargs = loop.operations[0].getarglist() - - self._do_optimize_loop(loop, call_pure_results) - extra_same_as = [] - while loop.operations[0].getopnum() != rop.LABEL: - extra_same_as.append(loop.operations[0]) - del loop.operations[0] - - # Hack to prevent random order of same_as ops - extra_same_as.sort(key=lambda op: str(preamble.operations).find(str(op.getarg(0)))) - - for op in extra_same_as: - preamble.operations.insert(-1, op) + + preamble = self.unroll_and_optimize(loop, call_pure_results) # print diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -8,7 +8,8 @@ from pypy.jit.backend.llgraph import runner from pypy.jit.metainterp.history import (BoxInt, BoxPtr, ConstInt, ConstPtr, Const, TreeLoop, BoxObj, - ConstObj, AbstractDescr) + ConstObj, AbstractDescr, + JitCellToken, TargetToken) from pypy.jit.metainterp.optimizeopt.util import sort_descrs, equaloplists from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.codewriter.effectinfo import EffectInfo @@ -19,6 +20,7 @@ from pypy.jit.metainterp.jitprof import EmptyProfiler from pypy.config.pypyoption import get_pypy_config from pypy.jit.metainterp.resoperation import rop, opname, ResOperation +from pypy.jit.metainterp.optimizeopt.unroll import Inliner def test_sort_descrs(): class PseudoDescr(AbstractDescr): @@ -404,12 +406,72 @@ # optimize_trace(metainterp_sd, loop, self.enable_opts) + def unroll_and_optimize(self, loop, call_pure_results): + operations = loop.operations + jumpop = operations[-1] + assert jumpop.getopnum() == rop.JUMP + inputargs = loop.inputargs + + jump_args = jumpop.getarglist()[:] + operations = operations[:-1] + cloned_operations = [op.clone() for op in operations] + + preamble = TreeLoop('preamble') + preamble.inputargs = inputargs + preamble.start_resumedescr = FakeDescrWithSnapshot() + + token = JitCellToken() + preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ + operations + \ + [ResOperation(rop.JUMP, jump_args, None, descr=token)] + self._do_optimize_loop(preamble, call_pure_results) + + assert preamble.operations[-1].getopnum() == rop.LABEL + + inliner = Inliner(inputargs, jump_args) + loop.start_resumedescr = preamble.start_resumedescr + loop.operations = [preamble.operations[-1]] + \ + [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ + [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jump_args], + None, descr=token)] + #[inliner.inline_op(jumpop)] + assert loop.operations[-1].getopnum() == rop.JUMP + assert loop.operations[0].getopnum() == rop.LABEL + loop.inputargs = loop.operations[0].getarglist() + + self._do_optimize_loop(loop, call_pure_results) + extra_same_as = [] + while loop.operations[0].getopnum() != rop.LABEL: + extra_same_as.append(loop.operations[0]) + del loop.operations[0] + + # Hack to prevent random order of same_as ops + extra_same_as.sort(key=lambda op: str(preamble.operations).find(str(op.getarg(0)))) + + for op in extra_same_as: + preamble.operations.insert(-1, op) + + return preamble + + class FakeDescr(compile.ResumeGuardDescr): def clone_if_mutable(self): return FakeDescr() def __eq__(self, other): return isinstance(other, FakeDescr) +class FakeDescrWithSnapshot(compile.ResumeGuardDescr): + class rd_snapshot: + class prev: + prev = None + boxes = [] + boxes = [] + def clone_if_mutable(self): + return FakeDescrWithSnapshot() + def __eq__(self, other): + return isinstance(other, Storage) or isinstance(other, FakeDescrWithSnapshot) + + def convert_old_style_to_targets(loop, jump): newloop = TreeLoop(loop.name) newloop.inputargs = loop.inputargs diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import lltype from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, equaloplists from pypy.jit.metainterp.optimizeopt.intutils import IntBound -from pypy.jit.metainterp.history import TreeLoop, LoopToken +from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import FakeDescr, FakeMetaInterpStaticData from pypy.jit.metainterp.optimize import RetraceLoop from pypy.jit.metainterp.resoperation import ResOperation, rop @@ -461,7 +461,7 @@ for loop in loops: loop.preamble = TreeLoop('preamble') loop.preamble.inputargs = loop.inputargs - loop.preamble.token = LoopToken() + loop.preamble.token = JitCellToken() loop.preamble.start_resumedescr = FakeDescr() self._do_optimize_loop(loop, None) preamble = loops[0].preamble From noreply at buildbot.pypy.org Sun Nov 20 11:24:17 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 11:24:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: more rpythonization (and potential crash avoidance) Message-ID: <20111120102417.7AEBB82297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49573:3f7e25afec9d Date: 2011-11-20 12:23 +0200 http://bitbucket.org/pypy/pypy/changeset/3f7e25afec9d/ Log: more rpythonization (and potential crash avoidance) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -471,10 +471,11 @@ Multidimensional arrays/slices will span a number of lines, each line will begin with indent. ''' - if self.size < 1: + size = self.find_size() + if size < 1: builder.append('[]') return - if self.size > 1000: + if size > 1000: #Once this goes True it does not go back to False for recursive calls use_ellipsis = True dtype = self.find_dtype() From noreply at buildbot.pypy.org Sun Nov 20 11:43:14 2011 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 20 Nov 2011 11:43:14 +0100 (CET) Subject: [pypy-commit] pypy jit-targets: adapt test framework to new interface Message-ID: <20111120104314.101AE82297@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-targets Changeset: r49574:73f40a140282 Date: 2011-11-20 11:42 +0100 http://bitbucket.org/pypy/pypy/changeset/73f40a140282/ Log: adapt test framework to new interface diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -406,7 +406,7 @@ # optimize_trace(metainterp_sd, loop, self.enable_opts) - def unroll_and_optimize(self, loop, call_pure_results): + def unroll_and_optimize(self, loop, call_pure_results=None): operations = loop.operations jumpop = operations[-1] assert jumpop.getopnum() == rop.JUMP diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -6,10 +6,11 @@ from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr from pypy.rpython.lltypesystem import lltype -from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, equaloplists +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ + equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound from pypy.jit.metainterp.history import TreeLoop, JitCellToken -from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import FakeDescr, FakeMetaInterpStaticData +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import FakeMetaInterpStaticData from pypy.jit.metainterp.optimize import RetraceLoop from pypy.jit.metainterp.resoperation import ResOperation, rop @@ -434,7 +435,7 @@ enable_opts = "intbounds:rewrite:virtualize:string:pure:heap:unroll" def _do_optimize_bridge(self, bridge, call_pure_results): - from pypy.jit.metainterp.optimizeopt import optimize_bridge_1, build_opt_chain + from pypy.jit.metainterp.optimizeopt import optimize_trace from pypy.jit.metainterp.optimizeopt.util import args_dict self.bridge = bridge @@ -448,10 +449,9 @@ if hasattr(self, 'callinfocollection'): metainterp_sd.callinfocollection = self.callinfocollection # - d = {} - for name in self.enable_opts.split(":"): - d[name] = None - optimize_bridge_1(metainterp_sd, bridge, d) + bridge.start_resumedescr = FakeDescrWithSnapshot() + optimize_trace(metainterp_sd, bridge, self.enable_opts) + def optimize_bridge(self, loops, bridge, expected, expected_target='Loop', **boxvalues): if isinstance(loops, str): @@ -459,24 +459,19 @@ loops = [self.parse(loop) for loop in loops] bridge = self.parse(bridge) for loop in loops: - loop.preamble = TreeLoop('preamble') - loop.preamble.inputargs = loop.inputargs - loop.preamble.token = JitCellToken() - loop.preamble.start_resumedescr = FakeDescr() - self._do_optimize_loop(loop, None) + loop.preamble = self.unroll_and_optimize(loop) preamble = loops[0].preamble - for loop in loops[1:]: - preamble.token.short_preamble.extend(loop.preamble.token.short_preamble) + token = JitCellToken() + token.target_tokens = [l.operations[0].getdescr() for l in [preamble] + loops] boxes = {} for b in bridge.inputargs + [op.result for op in bridge.operations]: boxes[str(b)] = b for b, v in boxvalues.items(): boxes[b].value = v - bridge.operations[-1].setdescr(preamble.token) - try: - self._do_optimize_bridge(bridge, None) - except RetraceLoop: + bridge.operations[-1].setdescr(token) + self._do_optimize_bridge(bridge, None) + if bridge.operations[-1].getopnum() == rop.LABEL: assert expected == 'RETRACE' return @@ -485,13 +480,13 @@ self.assert_equal(bridge, expected) if expected_target == 'Preamble': - assert bridge.operations[-1].getdescr() is preamble.token + assert bridge.operations[-1].getdescr() is preamble.operations[0].getdescr() elif expected_target == 'Loop': assert len(loops) == 1 - assert bridge.operations[-1].getdescr() is loops[0].token + assert bridge.operations[-1].getdescr() is loops[0].operations[0].getdescr() elif expected_target.startswith('Loop'): n = int(expected_target[4:]) - assert bridge.operations[-1].getdescr() is loops[n].token + assert bridge.operations[-1].getdescr() is loops[n].operations[0].getdescr() else: assert False From noreply at buildbot.pypy.org Sun Nov 20 11:52:40 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 20 Nov 2011 11:52:40 +0100 (CET) Subject: [pypy-commit] pypy default: Backout 09269d2f8fee. It forces the builder object itself to be escaped. Message-ID: <20111120105240.E7D5382297@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49575:099a556b1828 Date: 2011-11-20 10:52 +0000 http://bitbucket.org/pypy/pypy/changeset/099a556b1828/ Log: Backout 09269d2f8fee. It forces the builder object itself to be escaped. diff --git a/pypy/rpython/lltypesystem/rbuilder.py b/pypy/rpython/lltypesystem/rbuilder.py --- a/pypy/rpython/lltypesystem/rbuilder.py +++ b/pypy/rpython/lltypesystem/rbuilder.py @@ -120,7 +120,6 @@ return ll_builder.used @staticmethod - @jit.dont_look_inside def ll_build(ll_builder): final_size = ll_builder.used assert final_size >= 0 From noreply at buildbot.pypy.org Sun Nov 20 11:55:41 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 20 Nov 2011 11:55:41 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111120105541.0E39B82297@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49576:b158161216db Date: 2011-11-20 10:55 +0000 http://bitbucket.org/pypy/pypy/changeset/b158161216db/ Log: Fix. diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -147,8 +147,8 @@ i31 = int_gt(i30, 23) guard_false(i31, descr=...) copystrcontent(p9, p21, 0, i25, i10) - i33 = int_eq(i30, 23) - guard_false(i33, descr=...) + i33 = int_lt(i30, 23) + guard_true(i33, descr=...) p35 = call(ConstClass(ll_shrink_array__rpy_stringPtr_Signed), p21, i30, descr=) guard_no_exception(descr=...) i37 = strlen(p35) From noreply at buildbot.pypy.org Sun Nov 20 15:57:45 2011 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 20 Nov 2011 15:57:45 +0100 (CET) Subject: [pypy-commit] pypy default: Document slice assignment and deletion for RPython lists. Message-ID: <20111120145745.55E8382297@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49577:c665226786f8 Date: 2011-11-20 15:57 +0100 http://bitbucket.org/pypy/pypy/changeset/c665226786f8/ Log: Document slice assignment and deletion for RPython lists. diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -270,7 +270,12 @@ - *slicing*: the slice start must be within bounds. The stop doesn't need to, but it must not be smaller than the start. All negative indexes are disallowed, except for - the [:-1] special case. No step. + the [:-1] special case. No step. Slice deletion follows the same rules. + + - *slice assignment*: + only supports ``lst[x:y] = sublist``, if ``len(sublist) == y - x``. + In other words, slice assignment cannot change the total length of the list, + but just replace items. - *other operators*: ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected. From noreply at buildbot.pypy.org Sun Nov 20 16:02:06 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 16:02:06 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: a fix and a comment Message-ID: <20111120150206.2638682297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49578:b553f019a2df Date: 2011-11-20 17:01 +0200 http://bitbucket.org/pypy/pypy/changeset/b553f019a2df/ Log: a fix and a comment diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -708,7 +708,9 @@ r = offset // shard indices_w.append(space.wrap(r)) offset -= shard * r - return space.newtuple(indices_w) + # XXX for reasons unclear indices_w becomes a resizable list, work + # around for now + return space.newtuple(indices_w[:]) def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): From noreply at buildbot.pypy.org Sun Nov 20 16:23:43 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 16:23:43 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: fix signatures Message-ID: <20111120152343.39C1482297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49579:09220a3f29a0 Date: 2011-11-20 17:23 +0200 http://bitbucket.org/pypy/pypy/changeset/09220a3f29a0/ Log: fix signatures diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -760,7 +760,7 @@ def start_iter(self, res_shape=None): return ConstantIterator() - def to_str(self, space, comma, builder, indent=' '): + def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): builder.append(self.dtype.str_format(self.value)) class VirtualArray(BaseArray): From noreply at buildbot.pypy.org Sun Nov 20 16:50:07 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 16:50:07 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: rpython fixes (?) Message-ID: <20111120155007.A1F9282297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49580:93944ee46bb5 Date: 2011-11-20 17:49 +0200 http://bitbucket.org/pypy/pypy/changeset/93944ee46bb5/ Log: rpython fixes (?) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -361,7 +361,7 @@ def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver(greens=['signature'], - reds=['i', 'result', 'self', 'cur_best', 'dtype']) + reds=['result', 'i', 'self', 'cur_best', 'dtype']) def loop(self): i = self.start_iter(self.shape) result = i.get_offset() From noreply at buildbot.pypy.org Sun Nov 20 17:01:33 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:01:33 +0100 (CET) Subject: [pypy-commit] pypy default: A failing test showing problems with list-comprehension-optimization Message-ID: <20111120160133.D479282297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49581:4150312ee458 Date: 2011-11-20 17:59 +0200 http://bitbucket.org/pypy/pypy/changeset/4150312ee458/ Log: A failing test showing problems with list-comprehension-optimization diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -305,6 +305,21 @@ 'hint': 2, }) + def test_iterate_over_list(self): + def wrap(elem): + return elem + + def f(i): + new_l = [] + l = range(4) + for elem in l: + new_l.append(wrap(elem)) + return new_l + + self.check(f, { + }) + + class TestLLSpecializeListComprehension: typesystem = 'lltype' From noreply at buildbot.pypy.org Sun Nov 20 17:01:35 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:01:35 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20111120160135.1A9CC82297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49582:307b3c6238de Date: 2011-11-20 18:01 +0200 http://bitbucket.org/pypy/pypy/changeset/307b3c6238de/ Log: merge diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -270,7 +270,12 @@ - *slicing*: the slice start must be within bounds. The stop doesn't need to, but it must not be smaller than the start. All negative indexes are disallowed, except for - the [:-1] special case. No step. + the [:-1] special case. No step. Slice deletion follows the same rules. + + - *slice assignment*: + only supports ``lst[x:y] = sublist``, if ``len(sublist) == y - x``. + In other words, slice assignment cannot change the total length of the list, + but just replace items. - *other operators*: ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected. diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst --- a/pypy/doc/release-1.7.0.rst +++ b/pypy/doc/release-1.7.0.rst @@ -12,6 +12,9 @@ * windows fixes * stackless and JIT integration + (stackless is now in the same executable, but any loop using + stackless features will interrupt the JIT for now, so no real + performance improvement for now) * numpy progress - dtypes, numpy -> numpypy renaming diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -147,8 +147,8 @@ i31 = int_gt(i30, 23) guard_false(i31, descr=...) copystrcontent(p9, p21, 0, i25, i10) - i33 = int_eq(i30, 23) - guard_false(i33, descr=...) + i33 = int_lt(i30, 23) + guard_true(i33, descr=...) p35 = call(ConstClass(ll_shrink_array__rpy_stringPtr_Signed), p21, i30, descr=) guard_no_exception(descr=...) i37 = strlen(p35) From noreply at buildbot.pypy.org Sun Nov 20 17:09:35 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:09:35 +0100 (CET) Subject: [pypy-commit] pypy default: improve the list-comprehension-operation optimization to work for slightly Message-ID: <20111120160935.C4F9C82297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49583:0bdcf803c0fd Date: 2011-11-20 18:09 +0200 http://bitbucket.org/pypy/pypy/changeset/0bdcf803c0fd/ Log: improve the list-comprehension-operation optimization to work for slightly more advanced cases as well diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -697,10 +697,12 @@ if op.opname == 'getattr' and op.args[1] == c_append: vlist = variable_families.find_rep(op.args[0]) if vlist in newlist_v: - op2 = block.operations[i+1] - if (op2.opname == 'simple_call' and len(op2.args) == 2 - and op2.args[0] is op.result): - append_v.append((op.args[0], op.result, block)) + for j in range(i + 1, len(block.operations)): + op2 = block.operations[j] + if (op2.opname == 'simple_call' and len(op2.args) == 2 + and op2.args[0] is op.result): + append_v.append((op.args[0], op.result, block)) + break if not append_v: return detector = ListComprehensionDetector(graph, loops, newlist_v, diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -317,6 +317,12 @@ return new_l self.check(f, { + 'hint': 2, + 'newlist': 1, + 'iter': 1, + 'next': 1, + 'getattr': 1, + 'simple_call': 3, }) From noreply at buildbot.pypy.org Sun Nov 20 17:15:18 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:15:18 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: merge default in Message-ID: <20111120161518.D6CE482297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49584:249dc7d8f9cb Date: 2011-11-20 18:13 +0200 http://bitbucket.org/pypy/pypy/changeset/249dc7d8f9cb/ Log: merge default in diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -270,7 +270,12 @@ - *slicing*: the slice start must be within bounds. The stop doesn't need to, but it must not be smaller than the start. All negative indexes are disallowed, except for - the [:-1] special case. No step. + the [:-1] special case. No step. Slice deletion follows the same rules. + + - *slice assignment*: + only supports ``lst[x:y] = sublist``, if ``len(sublist) == y - x``. + In other words, slice assignment cannot change the total length of the list, + but just replace items. - *other operators*: ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected. diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst --- a/pypy/doc/release-1.7.0.rst +++ b/pypy/doc/release-1.7.0.rst @@ -12,6 +12,9 @@ * windows fixes * stackless and JIT integration + (stackless is now in the same executable, but any loop using + stackless features will interrupt the JIT for now, so no real + performance improvement for now) * numpy progress - dtypes, numpy -> numpypy renaming diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -147,8 +147,8 @@ i31 = int_gt(i30, 23) guard_false(i31, descr=...) copystrcontent(p9, p21, 0, i25, i10) - i33 = int_eq(i30, 23) - guard_false(i33, descr=...) + i33 = int_lt(i30, 23) + guard_true(i33, descr=...) p35 = call(ConstClass(ll_shrink_array__rpy_stringPtr_Signed), p21, i30, descr=) guard_no_exception(descr=...) i37 = strlen(p35) diff --git a/pypy/rlib/_rffi_stacklet.py b/pypy/rlib/_rffi_stacklet.py --- a/pypy/rlib/_rffi_stacklet.py +++ b/pypy/rlib/_rffi_stacklet.py @@ -8,16 +8,21 @@ cdir = py.path.local(pypydir) / 'translator' / 'c' -_sep_mods = [] -if sys.platform == 'win32': - _sep_mods = [cdir / "src/stacklet/switch_x86_msvc.asm"] - eci = ExternalCompilationInfo( include_dirs = [cdir], includes = ['src/stacklet/stacklet.h'], separate_module_sources = ['#include "src/stacklet/stacklet.c"\n'], - separate_module_files = _sep_mods ) +if sys.platform == 'win32': + eci.separate_module_files += (cdir / "src/stacklet/switch_x86_msvc.asm", ) + eci.export_symbols += ( + 'stacklet_newthread', + 'stacklet_deletethread', + 'stacklet_new', + 'stacklet_switch', + 'stacklet_destroy', + '_stacklet_translate_pointer', + ) rffi_platform.verify_eci(eci.convert_sources_to_files()) diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -172,6 +172,12 @@ def _compile_c_file(self, cc, cfile, compile_args): oname = cfile.new(ext='obj') + # notabene: (tismer) + # This function may be called for .c but also .asm files. + # The c compiler accepts any order of arguments, while + # the assembler still has the old behavior that all options + # must come first, and after the file name all options are ignored. + # So please be careful with the oder of parameters! ;-) args = ['/nologo', '/c'] + compile_args + ['/Fo%s' % (oname,), str(cfile)] self._execute_c_compiler(cc, args, oname) return oname diff --git a/pypy/translator/simplify.py b/pypy/translator/simplify.py --- a/pypy/translator/simplify.py +++ b/pypy/translator/simplify.py @@ -697,10 +697,12 @@ if op.opname == 'getattr' and op.args[1] == c_append: vlist = variable_families.find_rep(op.args[0]) if vlist in newlist_v: - op2 = block.operations[i+1] - if (op2.opname == 'simple_call' and len(op2.args) == 2 - and op2.args[0] is op.result): - append_v.append((op.args[0], op.result, block)) + for j in range(i + 1, len(block.operations)): + op2 = block.operations[j] + if (op2.opname == 'simple_call' and len(op2.args) == 2 + and op2.args[0] is op.result): + append_v.append((op.args[0], op.result, block)) + break if not append_v: return detector = ListComprehensionDetector(graph, loops, newlist_v, diff --git a/pypy/translator/test/test_simplify.py b/pypy/translator/test/test_simplify.py --- a/pypy/translator/test/test_simplify.py +++ b/pypy/translator/test/test_simplify.py @@ -305,6 +305,27 @@ 'hint': 2, }) + def test_iterate_over_list(self): + def wrap(elem): + return elem + + def f(i): + new_l = [] + l = range(4) + for elem in l: + new_l.append(wrap(elem)) + return new_l + + self.check(f, { + 'hint': 2, + 'newlist': 1, + 'iter': 1, + 'next': 1, + 'getattr': 1, + 'simple_call': 3, + }) + + class TestLLSpecializeListComprehension: typesystem = 'lltype' From noreply at buildbot.pypy.org Sun Nov 20 17:15:20 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:15:20 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: don't copy a list any more - the cause was fixed in 0bdcf803c0fd Message-ID: <20111120161520.11A4882297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49585:0fc1ea420599 Date: 2011-11-20 18:14 +0200 http://bitbucket.org/pypy/pypy/changeset/0fc1ea420599/ Log: don't copy a list any more - the cause was fixed in 0bdcf803c0fd diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -708,9 +708,7 @@ r = offset // shard indices_w.append(space.wrap(r)) offset -= shard * r - # XXX for reasons unclear indices_w becomes a resizable list, work - # around for now - return space.newtuple(indices_w[:]) + return space.newtuple(indices_w) def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): From noreply at buildbot.pypy.org Sun Nov 20 17:49:03 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 17:49:03 +0100 (CET) Subject: [pypy-commit] pypy numpy-multidim-shards: add a passing test, just because Message-ID: <20111120164903.D737182297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-multidim-shards Changeset: r49586:7a534f000326 Date: 2011-11-20 18:48 +0200 http://bitbucket.org/pypy/pypy/changeset/7a534f000326/ Log: add a passing test, just because diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -883,6 +883,17 @@ c = ((a + d) == array([[[10., 11., 12.]]*3, [[20.,21.,22.]]*3, [[30.,31.,32.]]*3])) assert c.all() + def test_broadcast_scalar(self): + from numpypy import zeros + a = zeros((4, 5), 'd') + a[:, 1] = 3 + assert a[2, 1] == 3 + assert a[0, 2] == 0 + a[0, :] = 5 + assert a[0, 3] == 5 + assert a[2, 1] == 3 + assert a[3, 2] == 0 + def test_broadcast_call2(self): from numpypy import zeros, ones a = zeros((4, 1, 5)) From noreply at buildbot.pypy.org Sun Nov 20 20:26:33 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 20 Nov 2011 20:26:33 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-refactor-complex: initial work on complex values Message-ID: <20111120192633.1A4AD82297@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-refactor-complex Changeset: r49587:7a565e62ba58 Date: 2011-11-19 17:47 -0500 http://bitbucket.org/pypy/pypy/changeset/7a565e62ba58/ Log: initial work on complex values diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -27,6 +27,8 @@ 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', 'float64': 'interp_boxes.W_Float64Box', + 'complexfloating': 'interp_boxes.W_ComplexFloatingBox', + 'complex128': 'interp_boxes.W_Complex128Box', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -2,6 +2,7 @@ from pypy.interpreter.error import operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef +from pypy.objspace.std.complextype import complex_typedef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef from pypy.objspace.std.typeobject import W_TypeObject @@ -29,6 +30,12 @@ def convert_to(self, dtype): return dtype.box(self.value) +class CompositeBox(object): + _mixin_ = True + + def __init__(self, subboxes): + self.subboxes = subboxes + class W_GenericBox(Wrappable): _attrs_ = () @@ -150,7 +157,11 @@ class W_Float64Box(W_FloatingBox, PrimitiveBox): descr__new__, get_dtype = new_dtype_getter("float64") +class W_ComplexFloatingBox(W_InexactBox): + pass +class W_Complex128Box(W_ComplexFloatingBox, CompositeBox): + descr__new__, get_dtype = new_dtype_getter("complex128") W_GenericBox.typedef = TypeDef("generic", __module__ = "numpy", @@ -259,4 +270,12 @@ __module__ = "numpy", __new__ = interp2app(W_Float64Box.descr__new__.im_func), +) + +W_ComplexFloatingBox.typedef = TypeDef("complexfloating", W_InexactBox.typedef, + __module__ = "numpy", +) + +W_Complex128Box.typedef = TypeDef("complex128", (W_ComplexFloatingBox.typedef, complex_typedef), + __module__ = "numpy", ) \ No newline at end of file diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -13,6 +13,7 @@ SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" +COMPLEXLTR = "c" class W_Dtype(Wrappable): def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): @@ -90,6 +91,8 @@ num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), + name = interp_attrproperty("name", cls=W_Dtype), + char = interp_attrproperty("char", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), shape = GetSetProperty(W_Dtype.descr_get_shape), ) @@ -103,7 +106,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -112,7 +115,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -120,7 +123,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -128,7 +131,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -136,7 +139,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -144,7 +147,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -152,7 +155,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -164,7 +167,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -173,7 +176,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -181,7 +184,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -190,7 +193,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -198,7 +201,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -206,16 +209,26 @@ kind=FLOATINGLTR, name="float64", char="d", - w_box_type = space.gettypefor(interp_boxes.W_Float64Box), + w_box_type=space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], ) + self.w_complex128dtype = W_Dtype( + types.Complex([types.Float64(), types.Float64()]), + num=15, + kind=COMPLEXLTR, + name="complex128", + char="D", + w_box_type=space.gettypefor(interp_boxes.W_Complex128Box), + alternate_constructors=[space.w_complex], + ) + self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, self.w_int64dtype, self.w_uint64dtype, self.w_float32dtype, - self.w_float64dtype + self.w_float64dtype, self.w_complex128dtype ] self.dtypes_by_num_bytes = sorted( (dtype.itemtype.get_element_size(), dtype) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,6 +166,17 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_complex128(self): + import numpypy as numpy + + dtype = numpy.dtype(complex) + assert dtype.name == "complex128" + assert dtype.char == "D" + assert dtype.itemsize == 16 + assert dtype.kind == "c" + assert dtype.num == 15 + assert dtype.type is numpy.complex128 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): import numpypy as numpy @@ -224,6 +235,11 @@ assert numpy.float64(2.0) == 2.0 + def test_complex128(self): + import numpypy as numpy + + assert numpy.complex128.mro() == [numpy.complex128, numpy.complexfloating, numpy.inexact, numpy.number, numpy.generic, complex, object] + def test_subclass_type(self): import numpypy as numpy diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -591,6 +591,8 @@ assert array([int8(3)]).dtype is dtype("int8") assert array([bool_(True)]).dtype is dtype(bool) assert array([bool_(True), 3.0]).dtype is dtype(float) + assert array([1 + 2j]).dtype is dtype(complex) + assert array([1, 1 + 2j]).dtype is dtype(complex) def test_comparison(self): import operator @@ -612,6 +614,15 @@ for i in xrange(5): assert c[i] == func(b[i], 3) + def test_complex_basic(self): + from numpypy import array + + x = array([1, 2, 3], complex) + assert x[0] == complex(1, 0) + x[0] = 1 + 3j + assert x[0] == 1 + 3j + assert x[2].real == 3 + assert x[2].imag == 0 class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -364,4 +364,50 @@ class Float64(BaseType, Float): T = rffi.DOUBLE - BoxType = interp_boxes.W_Float64Box \ No newline at end of file + BoxType = interp_boxes.W_Float64Box + + +class BaseCompositeType(BaseType): + def __init__(self, itemtypes): + self.itemtypes = itemtypes + + def get_element_size(self): + s = 0 + for itemtype in self.itemtypes: + s += itemtype.get_element_size() + return s + + def box(self, value): + return self.BoxType(value) + + def unbox(self, box): + assert isinstance(box, self.BoxType) + return box.subboxes + + def store(self, storage, width, i, offset, box): + subboxes = self.unbox(box) + i = 0 + for box in subboxes: + self.itemtypes[i].store(storage, width, i, offset, box) + offset += self.itemtypes[i].get_element_size() + i += 1 + + def read(self, storage, width, i, offset): + boxes = [] + for itemtype in self.itemtypes: + boxes.append(itemtype.read(storage, width, i, offset)) + offset += itemtype.get_element_size() + return self.box(boxes) + +class Complex(BaseCompositeType): + BoxType = interp_boxes.W_Complex128Box + + def __init__(self, itemtypes): + BaseCompositeType.__init__(self, itemtypes) + [self.real, self.imag] = self.itemtypes + + def coerce(self, space, w_item): + if isinstance(w_item, self.BoxType): + return w_item + real, imag = space.unpackcomplex(w_item) + return self.box([self.real.box(real), self.imag.box(imag)]) \ No newline at end of file From noreply at buildbot.pypy.org Sun Nov 20 20:26:34 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 20 Nov 2011 20:26:34 +0100 (CET) Subject: [pypy-commit] pypy default: clarrify Message-ID: <20111120192634.4E83082A9E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r49588:f714eadacaef Date: 2011-11-20 14:25 -0500 http://bitbucket.org/pypy/pypy/changeset/f714eadacaef/ Log: clarrify diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst --- a/pypy/doc/release-1.7.0.rst +++ b/pypy/doc/release-1.7.0.rst @@ -18,9 +18,9 @@ * numpy progress - dtypes, numpy -> numpypy renaming -* brand new JSON encoder +* brand new, faster, JSON encoder -* improved memory footprint on heavy users of C APIs example - tornado +* improved memory footprint on some RPython modules, such as hashlib * cpyext progress From noreply at buildbot.pypy.org Sun Nov 20 20:35:51 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 20:35:51 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add few tasks Message-ID: <20111120193551.1306782297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r3968:b42c3cfb12d8 Date: 2011-11-20 21:35 +0200 http://bitbucket.org/pypy/extradoc/changeset/b42c3cfb12d8/ Log: add few tasks diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -22,3 +22,7 @@ - frompyfunc to create ufuncs from python functions - more ufuncs + +- arange/linspace/other ranges + +- numpy.flatiter array.flat and friends From noreply at buildbot.pypy.org Sun Nov 20 20:36:13 2011 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 20 Nov 2011 20:36:13 +0100 (CET) Subject: [pypy-commit] pypy default: link to planning Message-ID: <20111120193613.32B3282297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49589:e3fa364982b2 Date: 2011-11-20 21:35 +0200 http://bitbucket.org/pypy/pypy/changeset/e3fa364982b2/ Log: link to planning diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -26,14 +26,11 @@ Numpy improvements ------------------ -This is more of a project-container than a single project. Possible ideas: +The numpy is rapidly progressing in pypy, so feel free to come to IRC and +ask for proposed topic. A not necesarilly up-to-date `list of topics`_ +is also available. -* experiment with auto-vectorization using SSE or implement vectorization - without automatically detecting it for array operations. - -* improve numpy, for example implement memory views. - -* interface with fortran/C libraries. +.. _`list of topics`: https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt Improving the jitviewer ------------------------ From noreply at buildbot.pypy.org Sun Nov 20 23:16:47 2011 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 20 Nov 2011 23:16:47 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: note the branches Message-ID: <20111120221647.376D682297@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r3969:83c60b7c3d49 Date: 2011-11-20 17:16 -0500 http://bitbucket.org/pypy/extradoc/changeset/83c60b7c3d49/ Log: note the branches diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -3,7 +3,7 @@ - add in numpy.generic and the various subclasses, use them in returning instances from subscripting (and possibly internally), also make them valid - for the dtype arguments + for the dtype arguments (numpy-dtype-refactor branch) - astype @@ -13,9 +13,9 @@ - endianness -- scalar types like numpy.int8 +- scalar types like numpy.int8 (numpy-dtype-refacotr branch) -- add multi-dim arrays +- add multi-dim arrays (numpy-multidim-shards branch) - will need to refactor some functions From noreply at buildbot.pypy.org Mon Nov 21 00:11:49 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 21 Nov 2011 00:11:49 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: test _as_ffi_pointer_ in type_converter Message-ID: <20111120231149.97AB082297@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49590:77f79b80d773 Date: 2011-11-12 17:30 +0100 http://bitbucket.org/pypy/pypy/changeset/77f79b80d773/ Log: test _as_ffi_pointer_ in type_converter diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -56,5 +56,21 @@ # no good reason, at interp-level Signed or Unsigned makes no # difference for passing bits around) self.check(app_types.void_p, self.space.wrap(42), 42) - self.check( - app_types.void_p, self.space.wrap(sys.maxint+1), -sys.maxint-1) + self.check(app_types.void_p, self.space.wrap(sys.maxint+1), + -sys.maxint-1) + + def test__as_ffi_pointer_(self): + space = self.space + w_MyPointerWrapper = space.appexec([], """(): + import _ffi + class MyPointerWrapper(object): + def __init__(self, value): + self.value = value + def _as_ffi_pointer_(self, ffitype): + assert ffitype is _ffi.types.void_p + return self.value + + return MyPointerWrapper + """) + w_obj = space.call_function(w_MyPointerWrapper, space.wrap(42)) + self.check(app_types.void_p, w_obj, 42) From noreply at buildbot.pypy.org Mon Nov 21 00:11:50 2011 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 21 Nov 2011 00:11:50 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: more tests for pointers and strings Message-ID: <20111120231150.BC43E82A9E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r49591:b38d44f35469 Date: 2011-11-12 17:44 +0100 http://bitbucket.org/pypy/pypy/changeset/b38d44f35469/ Log: more tests for pointers and strings diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -1,7 +1,7 @@ import sys from pypy.conftest import gettestobjspace from pypy.rlib.rarithmetic import r_uint -from pypy.module._ffi.interp_ffitype import app_types +from pypy.module._ffi.interp_ffitype import app_types, descr_new_pointer from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter class DummyFromAppLevelConverter(FromAppLevelConverter): @@ -55,9 +55,14 @@ # pointers are "unsigned" at applevel, but signed at interp-level (for # no good reason, at interp-level Signed or Unsigned makes no # difference for passing bits around) - self.check(app_types.void_p, self.space.wrap(42), 42) - self.check(app_types.void_p, self.space.wrap(sys.maxint+1), - -sys.maxint-1) + space = self.space + self.check(app_types.void_p, space.wrap(42), 42) + self.check(app_types.void_p, space.wrap(sys.maxint+1), -sys.maxint-1) + # + # typed pointers + w_ptr_sint = descr_new_pointer(space, None, app_types.sint) + self.check(w_ptr_sint, space.wrap(sys.maxint+1), -sys.maxint-1) + def test__as_ffi_pointer_(self): space = self.space @@ -74,3 +79,12 @@ """) w_obj = space.call_function(w_MyPointerWrapper, space.wrap(42)) self.check(app_types.void_p, w_obj, 42) + + def test_strings(self): + # first, try automatic conversion from applevel + self.check(app_types.char_p, self.space.wrap('foo'), 'foo') + self.check(app_types.unichar_p, self.space.wrap(u'foo\u1234'), u'foo\u1234') + self.check(app_types.unichar_p, self.space.wrap('foo'), u'foo') + # then, try to pass explicit pointers + self.check(app_types.char_p, self.space.wrap(42), 42) + self.check(app_types.unichar_p, self.space.wrap(42), 42) From noreply at buildbot.pypy.org Mon Nov 21 05:18:04 2011 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 21 Nov 2011 05:18:04 +0100 (CET) Subject: [pypy-commit] pypy numpy-monkeyaround: branch for experimental implementations Message-ID: <20111121041804.EB1FF82297@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-monkeyaround Changeset: r49592:425ecbd26101 Date: 2011-11-21 04:57 +0200 http://bitbucket.org/pypy/pypy/changeset/425ecbd26101/ Log: branch for experimental implementations From noreply at buildbot.pypy.org Mon Nov 21 05:18:06 2011 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 21 Nov 2011 05:18:06 +0100 (CET) Subject: [pypy-commit] pypy numpy-monkeyaround: Add test for arange Message-ID: <20111121041806.1D16E82A9E@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-monkeyaround Changeset: r49593:1c02422a8f96 Date: 2011-11-21 05:41 +0200 http://bitbucket.org/pypy/pypy/changeset/1c02422a8f96/ Log: Add test for arange diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1000,3 +1000,19 @@ assert str(b) == "[7 8 9]" b = a[2:1, ] assert str(b) == "[]" + +class AppTestRanges(BaseNumpyAppTest): + def test_arange(self): + from numpypy import arange, array + a = arange(3) + assert (a == [0, 1, 2]).all() + assert a.dtype is dtype(int) + a = arange(3.0) + assert (a == [0., 1., 2.]).all() + assert a.dtype is dtype(float) + a = arange(3, 7) + assert (a == [3, 4, 5, 6]).all() + assert a.dtype is dtype(int) + a = arange(3, 7, 2) + assert (a == [3, 5]).all() + From noreply at buildbot.pypy.org Mon Nov 21 05:18:07 2011 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 21 Nov 2011 05:18:07 +0100 (CET) Subject: [pypy-commit] pypy numpy-monkeyaround: test, implement arange Message-ID: <20111121041807.4C9CC82A9F@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpy-monkeyaround Changeset: r49594:1ae3fd3bf105 Date: 2011-11-21 06:15 +0200 http://bitbucket.org/pypy/pypy/changeset/1ae3fd3bf105/ Log: test, implement arange diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -57,4 +57,5 @@ 'mean': 'app_numpy.mean', 'inf': 'app_numpy.inf', 'e': 'app_numpy.e', + 'arange' : 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -15,3 +15,24 @@ if not hasattr(a, "mean"): a = numpypy.array(a) return a.mean() + + +def arange(start, stop=None, step=1, dtype=None, maskna=False): + '''arange([start], stop[, step], dtype=None, maskna=False) + Generate values in the half-interval [start, stop). + ''' + if stop is None: + stop = start + start = 0 + if maskna is not False: + raise ValueError,'Not Implemented' + if dtype is None: + test = numpypy.array([start,stop,step]) + dtype = test.dtype + arr = numpypy.zeros(int(math.ceil((stop-start)/step)),dtype=dtype) + i = start + for j in range(arr.size): + arr[j] = i + j += 1 + i += step + return arr diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -177,10 +177,10 @@ def get_offset(self): return self.offset -class BroadcastIterator(BaseIterator): +class BroadcastIterator(ViewIterator): '''Like a view iterator, but will repeatedly access values for all iterations across a res_shape, folding the offset - using mod() arithmetic + by setting shards, backshards to 0 ''' def __init__(self, arr, res_shape): self.indices = [0] * len(res_shape) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1003,7 +1003,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array + from numpypy import arange, array, dtype a = arange(3) assert (a == [0, 1, 2]).all() assert a.dtype is dtype(int) From noreply at buildbot.pypy.org Mon Nov 21 08:52:25 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 21 Nov 2011 08:52:25 +0100 (CET) Subject: [pypy-commit] pypy default: work out the release announcement Message-ID: <20111121075225.DA2C482297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49595:f0938cd9a6f1 Date: 2011-11-21 09:51 +0200 http://bitbucket.org/pypy/pypy/changeset/f0938cd9a6f1/ Log: work out the release announcement diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst --- a/pypy/doc/release-1.7.0.rst +++ b/pypy/doc/release-1.7.0.rst @@ -1,47 +1,94 @@ -===================== -PyPy 1.7 -===================== +================================== +PyPy 1.7 - widening the sweet spot +================================== + +We're pleased to announce the 1.7 release of PyPy. As became a habit, this +release brings a lot of bugfixes and performance improvements over the 1.6 +release. However, unlike the previous releases, the focus has been on widening +the "sweet spot" of PyPy. That is, classes of Python code that PyPy can greatly +speed up should be vastly improved with this release. You can download the 1.7 +release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.7 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 23/64 or +Windows 32. Windows 64 work is ongoing, but not yet supported. + +The main topic of this release is widening the range of code which PyPy +can greatly speed up. On average on +our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up +to **20x** faster on some benchmarks. + +.. _`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org + Highlights ========== -* numerous performance improvements, PyPy 1.7 is xxx faster than 1.6 +* Numerous performance improvements. There are too many examples which python + constructs now should behave faster to list them. -* numerous bugfixes, compatibility fixes +* Bugfixes and compatibility fixes with CPython. -* windows fixes +* Windows fixes. -* stackless and JIT integration - (stackless is now in the same executable, but any loop using - stackless features will interrupt the JIT for now, so no real - performance improvement for now) +* PyPy now comes with stackless features enabled by default. However, + any loop using stackless features will interrupt the JIT for now, so no real + performance improvement for stackless-based programs. Contact pypy-dev for + info how to help on removing this restriction. -* numpy progress - dtypes, numpy -> numpypy renaming +* NumPy effort in PyPy was renamed numpypy. In order to try using it, simply + write:: -* brand new, faster, JSON encoder + import numpypy as numpy -* improved memory footprint on some RPython modules, such as hashlib + at the beginning of your program. There is a huge progress on numpy in PyPy + since 1.6, the main feature being implementation of dtypes. -* cpyext progress +* JSON encoder (but not decoder) has been replaced with a new one. This one + is written in pure Python, but is known to outperform CPython's C extension + up to **2x** in some cases. It's about **20x** faster than the one that + we had in 1.6. + +* The memory footprint of some of our RPython modules has been drastically + improved. This should impact any applications using for example cryptography, + like tornado. + +* There was some progress in exposing even more CPython C API via cpyext. Things that didn't make it, expect in 1.8 soon ============================================== -* list strategies +There is an ongoing work, which while didn't make it to the release, is +probably worth mentioning here. This is what you should probably expect in +1.8 some time soon: -* multi-dimensional arrays for numpy +* Specialized list implementation. There is a branch that implements lists of + integers/floats/strings as compactly as array.array. This should drastically + improve performance/memory impact of some applications -* ARM backend +* NumPy effort is progressing forward, with multi-dimensional arrays coming + soon. -* PPC backend +* There are two brand new JIT assembler backends, notably for the PowerPC and + ARM processors. -Things we're working on with unclear ETA -======================================== +Fundraising +=========== -* windows 64 (?) +It's maybe worth mentioning that we're running fundraising campaigns for +NumPy effort in PyPy and for Python 3 in PyPy. In case you want to see any +of those happen faster, we urge you to donate to `numpy proposal`_ or +`py3k proposal`_. In case you want PyPy to progress, but you trust us with +the general direction, you can always donate to the `general pot`_. -* Py3k - -* SSE for numpy - -* specialized objects +.. _`numpy proposal`: http://pypy.org/numpydonate.html +.. _`py3k proposal`: http://pypy.org/py3donate.html +.. _`general pot`: http://pypy.org From noreply at buildbot.pypy.org Mon Nov 21 10:25:49 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 21 Nov 2011 10:25:49 +0100 (CET) Subject: [pypy-commit] pypy default: kill the meeting talk Message-ID: <20111121092549.9735382297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r49596:3cde514a9a92 Date: 2011-11-21 11:25 +0200 http://bitbucket.org/pypy/pypy/changeset/3cde514a9a92/ Log: kill the meeting talk diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -1,6 +1,3 @@ -.. include:: needswork.txt - -.. needs work, it talks about svn. also, it is not really user documentation Making a PyPy Release ======================= @@ -12,11 +9,8 @@ forgetting things. A set of todo files may also work. Check and prioritize all issues for the release, postpone some if necessary, -create new issues also as necessary. A meeting (or meetings) should be -organized to decide what things are priorities, should go in and work for -the release. - -An important thing is to get the documentation into an up-to-date state! +create new issues also as necessary. An important thing is to get +the documentation into an up-to-date state! Release Steps ---------------- From noreply at buildbot.pypy.org Mon Nov 21 11:17:28 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 21 Nov 2011 11:17:28 +0100 (CET) Subject: [pypy-commit] pypy release-1.7.x: Added tag release-1.7 for changeset ff4af8f31882 Message-ID: <20111121101728.0D5CC82297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.7.x Changeset: r49597:7a1d5bd05243 Date: 2011-11-21 12:14 +0200 http://bitbucket.org/pypy/pypy/changeset/7a1d5bd05243/ Log: Added tag release-1.7 for changeset ff4af8f31882 diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,3 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 From noreply at buildbot.pypy.org Mon Nov 21 11:19:17 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 21 Nov 2011 11:19:17 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20111121101917.6ECDB82297@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49598:3c59b9c3ba47 Date: 2011-11-21 11:15 +0100 http://bitbucket.org/pypy/pypy/changeset/3c59b9c3ba47/ Log: Fix. diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst --- a/pypy/doc/release-1.7.0.rst +++ b/pypy/doc/release-1.7.0.rst @@ -18,8 +18,8 @@ CPython 2.7. It's fast (`pypy 1.7 and cpython 2.7.1`_ performance comparison) due to its integrated tracing JIT compiler. -This release supports x86 machines running Linux 32/64, Mac OS X 23/64 or -Windows 32. Windows 64 work is ongoing, but not yet supported. +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. The main topic of this release is widening the range of code which PyPy can greatly speed up. On average on From noreply at buildbot.pypy.org Mon Nov 21 11:19:18 2011 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 21 Nov 2011 11:19:18 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20111121101918.92D9882297@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r49599:aed533007ccf Date: 2011-11-21 11:18 +0100 http://bitbucket.org/pypy/pypy/changeset/aed533007ccf/ Log: merge heads diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -1,6 +1,3 @@ -.. include:: needswork.txt - -.. needs work, it talks about svn. also, it is not really user documentation Making a PyPy Release ======================= @@ -12,11 +9,8 @@ forgetting things. A set of todo files may also work. Check and prioritize all issues for the release, postpone some if necessary, -create new issues also as necessary. A meeting (or meetings) should be -organized to decide what things are priorities, should go in and work for -the release. - -An important thing is to get the documentation into an up-to-date state! +create new issues also as necessary. An important thing is to get +the documentation into an up-to-date state! Release Steps ---------------- From noreply at buildbot.pypy.org Mon Nov 21 11:24:00 2011 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 21 Nov 2011 11:24:00 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: bump the release number Message-ID: <20111121102400.104FD82297@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r291:0a1d901e60ef Date: 2011-11-21 12:16 +0200 http://bitbucket.org/pypy/pypy.org/changeset/0a1d901e60ef/ Log: bump the release number diff --git a/archive.html b/archive.html --- a/archive.html +++ b/archive.html @@ -16,7 +16,7 @@ - + - + - + - + - + - + - + - + - + - + - + - +